Microservices – A key to Digital Transformation

Introduction


Rapid digitization is causing tectonic shifts in customer expectations across industries, and many businesses are struggling to adapt to this accelerated change. As a result, many are aggressively pushing IT to cut delivery time, reduce cost and improve quality. Microservices architecture (MSA) has emerged as one way to address these challenges. MSA is an approach to designing software applications as suites of small, loosely coupled, independent services.

The Microservices pattern represents a new approach to creating applications, combining the concepts of service-oriented architecture (SOA), containerization and DevOps. It applies the principles of SOA to bounded contexts to create small, autonomous Microservices. These services are built by small agile teams using continuous integration and delivery, often leveraging containerization for rapid deployment.

As a result, the Microservices pattern enables scalability, speed, efficiency and agility. Among the leading trends in enterprise architecture, the Microservices pattern presents opportunities as well as challenges: implementing it requires rethinking several enterprise architectural concepts, thought processes and workflows.

Digital transformation refers to new, all-digital ways of doing business and making decisions. The on-ramps to this transformation are emerging in highly scalable, highly reliable converged infrastructure, and both Microservices and containers have a part to play in this enterprise re-platforming, for different reasons. Containers hold significant benefits for developers and the development effort, as well as for the organization itself. Let's get a clear sense of how Microservices play a major role in digital transformation.

Microservices Role in Digital Transformation

Microservices have become one of the enablers of the digital transformation journey. A Microservice has some basic characteristics:

  • Always focused on a business need.
  • Single responsibility: a focused task, and hence lightweight.
  • Decentralized APIs: every Microservice exposes its own set of APIs.
  • Decentralized data: every Microservice manages its own data.
  • Independently deployed, managed, scaled and monitored.
  • Inter-communication between Microservices happens through well-known patterns such as HTTP REST or message brokers.
  • Technology agnostic.
  • A focused team handles each Microservice end-to-end.

Implementing a Microservices architecture requires an organizational shift, such as close collaboration between developers and operations.

Areas to Focus

  • Rapid deployment: With many services to manage, organizations need to be able to deploy them quickly, to both test and production environments.
  • Fast provisioning: With continuous delivery, organizations must have automated mechanisms for machine provisioning as well as for deployment, testing and so on.
  • Basic monitoring: With more moving parts to coordinate in production, basic monitoring should be in place to detect problems quickly. This monitoring can cover technical issues, such as service availability, as well as business issues.

Benefits of Microservices

  • As Microservices are relatively small, they are easier for a developer to understand, and development time is faster compared with traditional monolithic applications.
  • Loosely coupled, with well-defined communication between Microservices (REST or message brokers).
  • Deployment and start-up times are faster due to their lightweight nature. Deployment time improves further with a combination of containerization (Docker) and continuous integration.
  • Each service is developed and deployed independently of other services, making it easier to deploy new versions of services frequently.
  • High availability of the application: applications become more reliable because, even if a specific Microservice goes down, only the feature provided by that Microservice becomes unavailable, and the rest of the application remains accessible and functional.
  • Scalable: automatic horizontal scaling of Microservices can be achieved using the auto-scale features offered by several cloud providers.
  • Cloud adoption becomes easier.
  • Improved fault isolation: if there is a memory leak in one service, only that service is affected; the other services continue to handle requests.

Microservices, containers, and continuous integration and delivery have become critical tools in the digital transformation journey. Microservices allow applications to be architected in a way that is resilient, adaptable, and portable to almost any infrastructure, paving the way for broad-scale automation.

The accelerating digitization of business is driving many organizations to rethink their operating models to meet the expectations of technology-savvy customers. Companies across all industries must ensure their services are available through digital channels and remain competitive with technological innovations.

Organizations must address the business needs for operational flexibility, functional simplicity and continuous change that define today's digital economy. Techcello provides enhanced functionality by combining applications, business processes and data, containerizing them into hybrid Microservices.

Rust – The Programming Language

Introduction


The Rust programming language helps you write faster, more reliable software. High-level ergonomics and low-level control are often at odds in programming language design; Rust challenges that conflict. By balancing powerful technical capacity with a great developer experience, Rust gives you the option to control low-level details (such as memory usage) without all the hassle traditionally associated with such control.

Adoption of the Rust programming language by developers has been increasing exponentially. The language provides the simplicity and safety a developer needs while writing code. According to Stack Overflow's annual Developer Survey, Rust surpassed the ever-popular Python and secured the top position as the most-loved language among developers.

Why Rust?

Rust is proving to be a productive tool for collaborating among large teams of developers with varying levels of systems programming knowledge. Low-level code is prone to a variety of subtle bugs, which in most other languages can be caught only through extensive testing and careful code review by experienced developers. In Rust, the compiler plays a gatekeeper role by refusing to compile code with these elusive bugs, including concurrency bugs. By working alongside the compiler, the team can spend their time focusing on the program’s logic rather than chasing down bugs.
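To make the concurrency point concrete, here is a minimal sketch (my own illustration, not from any particular codebase) of shared mutable state across threads. The `Arc<Mutex<...>>` types are exactly the kind of detail the compiler enforces: remove the `Mutex` and the program will not compile, which is how Rust turns a potential data race into a compile-time error rather than a bug to chase down later.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Spawn `n` threads that each increment a shared counter once.
/// Without the Mutex, the compiler rejects the shared mutation outright,
/// so this function cannot contain a data race.
fn parallel_count(n: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1; // guard unlocks at end of scope
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    println!("final count = {}", parallel_count(4)); // prints "final count = 4"
}
```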

Rust also brings contemporary developer tools to the systems programming world:

  • Cargo, the included dependency manager and build tool, makes adding, compiling, and managing dependencies painless and consistent across the Rust ecosystem.
  • Rustfmt ensures a consistent coding style across developers.
  • The Rust Language Server powers Integrated Development Environment (IDE) integration for code completion and inline error messages.

By using these and other tools in the Rust ecosystem, developers can be productive while writing systems-level code.

Hundreds of companies, large and small, use Rust in production for a variety of tasks. Those tasks include command line tools, web services, DevOps tooling, embedded devices, audio and video analysis and transcoding, cryptocurrencies, bioinformatics, search engines, Internet of Things applications, machine learning, and even major parts of the Firefox web browser.

A Case Study

Microsoft has been facing issues with C and C++ for a while now. In fact, Microsoft spends an estimated $150,000 per issue to resolve these issues and vulnerabilities. In 2018, the tech giant faced more than 450 issues, and it is only getting worse with time: this year, it has faced over 470.

To overcome such issues, Microsoft developers recently announced that they will use the Rust programming language instead of C and C++ to write Windows components. In a related project, known as Verona, the developers will develop a new, safer programming language for Windows.

Why Adopt Rust?

According to the developers at Microsoft Research, using C and C++ for developing software is a billion-dollar problem. C and C++ are among the oldest programming languages, and they lack documentation and resources for modern machines. They work great for low-level systems, but they underlie many of the insecure technologies on which developers build machines today. One thing that really concerns developers is safety while coding, and C and C++ make it hard to write secure and correct code.

Some of the reasons which led to the adoption of Rust are mentioned below:

  • The memory and data safety guarantees made by the Rust compiler are stronger than those of C and C++.
  • Less time is spent debugging trivial issues or frustrating race conditions in Rust.
  • Compiler warnings and error messages are much better written in Rust than in C and C++.
  • Rust's documentation for compiler error messages and similar topics is presented more clearly than that of C and C++.

As Rust is much younger than C, the developers at Microsoft Research note that several important features are still missing and are needed to make the language fully developed:

  • Safe transmutation: safely casting “plain old data” types to and from raw bytes.
  • Safe support for C-style unions, and fallible allocation.
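The flavor of the first item can be seen in what safe Rust already offers for fixed-size integers: the `to_le_bytes`/`from_le_bytes` methods convert between a value and its raw bytes with no `unsafe` code at all. Safe transmutation would generalize this idea to arbitrary "plain old data" types. A small sketch of the existing safe idiom:

```rust
/// Round-trip a u32 through its little-endian byte representation,
/// entirely in safe code -- no `unsafe`, no pointer casts.
fn roundtrip(value: u32) -> u32 {
    let bytes: [u8; 4] = value.to_le_bytes(); // value -> raw bytes
    u32::from_le_bytes(bytes)                 // raw bytes -> value
}

fn main() {
    let v: u32 = 0xDEAD_BEEF;
    assert_eq!(roundtrip(v), v);
    println!("{:?}", v.to_le_bytes()); // prints "[239, 190, 173, 222]"
}
```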

Wrapping Up

A few programming languages have previously been created at Microsoft Research, such as:

  • F* – a functional programming language inspired by ML and aimed at program verification, where ML is a general-purpose functional programming language
  • Cω or Comega – a free extension to the C# programming language
  • Spec# – a programming language with specification language features that extends the capabilities of the C# programming language

Rust, the multi-paradigm systems programming language, has been appreciated by developers for a few years now. This year, Rust secured second position as the fastest-growing programming language among GitHub repository contributors.

Developers at Microsoft Research seem to be adopting this language to get a more secure environment for coding. Rust is a memory-safe programming language and has a few interesting features, like unit testing built into Cargo, which allows developers to write unit tests in the same file as the production code and run them easily while developing.
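That in-file unit testing looks like this; a minimal sketch with a made-up `add` function:

```rust
// Production code and its unit tests can live in the same file.
// `cargo test` compiles and runs the #[cfg(test)] module below,
// which is stripped entirely from normal builds.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("2 + 3 = {}", add(2, 3)); // prints "2 + 3 = 5"
}

#[cfg(test)]
mod tests {
    use super::add;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 3), 5);
    }
}
```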

Click here to watch the Rust-Adoption video.

  • Raghav, Solution Architect

LinQ Query Helper

Building a dynamic LinQ query for every entity or collection is tedious and hard to maintain. Based on my previous experience and the challenges faced in applications, I have come up with a solution to build dynamic expressions for generic entities or collections. This solution enables users:

  • To perform a dynamic “order by” by building generic expressions.
  • To generate dynamic expressions using an expression builder.
  • To join expressions with “And” and “Or” operators at any layer of the application.
  • To build and execute dynamic generic expressions on database entities as well as on other local entity collections.
  • To get proper error handling.
  • To have a generic LinQ solution for the entire application.
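The And/Or joining idea at the core of such an expression builder is language-agnostic. Here is a minimal sketch of it in Rust (for illustration only; the library itself is .NET/LinQ and builds real expression trees), using hypothetical price/stock predicates:

```rust
// A predicate over T, boxed so differently shaped closures can be combined.
type Pred<T> = Box<dyn Fn(&T) -> bool>;

/// Combine two predicates with a logical AND.
fn and<T: 'static>(a: Pred<T>, b: Pred<T>) -> Pred<T> {
    Box::new(move |x| a(x) && b(x))
}

/// Combine two predicates with a logical OR.
fn or<T: 'static>(a: Pred<T>, b: Pred<T>) -> Pred<T> {
    Box::new(move |x| a(x) || b(x))
}

/// Hypothetical product rows: (price, in_stock).
/// Count products that are expensive AND in stock.
fn count_expensive_in_stock(products: &[(f64, bool)]) -> usize {
    let expensive: Pred<(f64, bool)> = Box::new(|p| p.0 > 100.0);
    let in_stock: Pred<(f64, bool)> = Box::new(|p| p.1);
    let filter = and(expensive, in_stock);
    products.iter().filter(|p| filter(p)).count()
}

/// Count products that are cheap OR out of stock.
fn count_cheap_or_out(products: &[(f64, bool)]) -> usize {
    let cheap: Pred<(f64, bool)> = Box::new(|p| p.0 < 100.0);
    let out: Pred<(f64, bool)> = Box::new(|p| !p.1);
    let filter = or(cheap, out);
    products.iter().filter(|p| filter(p)).count()
}

fn main() {
    let products = [(150.0, true), (50.0, true), (200.0, false)];
    println!("{}", count_expensive_in_stock(&products)); // prints "1"
    println!("{}", count_cheap_or_out(&products));       // prints "2"
}
```

The design point is the same as in the library: build the filter dynamically from small pieces, then apply it once to the collection.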

This solution is packaged as a DLL and hosted on NuGet. Use the “Package Manager Console” in Visual Studio and run the command below:

Install-Package LinqQueryHelper.dll

Nuget Link: https://www.nuget.org/packages/LinqQueryHelper.dll

For dynamic queries on a single entity or a similar group of entity collections, we can also use the LINQ Dynamic Query Library (“System.Linq.Dynamic”), which was beautifully explained on ScottGu's blog. But it fails to handle dynamic queries and expressions when joins come into the picture.

This solution overcomes all these challenges and provides very flexible, simple methods to build dynamic expressions and generic query capability for all types of LinQ queries (including joins).

Your feedback is very important. Please comment if this solution works for you and recommend improvements.

 

ReactJS basic app

ReactJS is a very rich client-side library designed by Facebook (a thousand thanks to you guys). It's an amazing app development kit that follows the Flux architecture.

Why is ReactJS better than other client-side scripts?
Let's talk about some major ones: “Angular”, “Backbone” and “Ember”.
Angular is a very good client-side MVC framework, but its two-way binding leads to a lot of confusion when the application grows large; it becomes very difficult to track an object's state.
Backbone is also a good one, but it leaves designing the flow and managing an object's state wide open to developers, which makes it very difficult for other peers to manage the code and sometimes very complex to understand its structure.
Ember is a very powerful MVC framework, but its documentation and support are poor, and hence it is very difficult to find solutions.

Is it good to use MVC frameworks on the client side to build SPAs?
No, using MVC on the client side is not recommended. MVC is a very good framework on the server side, where state management is well defined, but on the client side, managing an object's state in an MVC framework is very difficult. The Flux architecture, by contrast, is designed around state management and supports one-way data binding. ReactJS follows the Flux architecture, so state management is very well controlled. So, in my opinion, ReactJS is a very good solution for client-side SPA apps.

Download a sample ReactJS app here

Unit Of Work With Multiple DBContexts

An MVC 5 and Web API 2 application with AutoMapper, EF 6, the repository pattern and dependency injection using Autofac, which uses a single unit of work to deal with multiple DbContexts, with async functions.

Features:

  • Entity Framework 6
  • MVC 5 Web App
  • Async MVC 5 Web App
  • MVC 5 WebAPI 2
  • Dependency Injection and IOC using Autofac
  • AutoMapper
  • Unit Of Work with multiple DBContexts
  • Generic Repository with Async functions
  • Service layer with Async functions
  • Code First approach
  • Power of repository extensibility
  • NUnit with Async Tests
  • Moq

This application's design pattern eliminates most of the redundant code involved in creating repositories and resolves a unit of work with multiple DbContexts.

Dos while creating repositories:

  • Create a generic repository class and a generic repository interface that expose the functions common to every entity.
  • Create individual repository classes that extend the generic repository with the additional functions of their own interfaces, wherever required.
  • Initialize the generic repository's local DbContext object through its constructor.
  • Set this DbContext's entity set into the generic repository's local entity-set object.

Don’ts while creating repositories for unit of work:

  • Do not create separate repository interfaces and classes for every repository; this is not required.
  • Do not associate or inject your DbContext object into your generic repository class. Get the DbContext reference from the unit of work class.

Good way of designing your service layer:

  • Create one service class per controller.
  • Have an individual interface for each service class, which helps in customizing functions related to that service.
  • Inherit the generic repository interface in the service interface, which forces the service class to expose all the methods of the generic interface (optional).
  • Inject the unit of work objects corresponding to each DbContext in the service constructor.
  • Access repositories through the unit of work object.
  • Commit all the transactions corresponding to a DbContext at once with its unit of work object.
  • In the controller, inject only the related service object and call its functions for further operations.

Click here to download the code

Hope this design pattern helps. Please let me know if I have missed anything; any suggestion to improve this design is welcome.

Configuring Tortoise SVN + SSH on Windows 7 or latest

The most irritating part of the IT world is spending most of our time on nothing. I mean, a software update happens, and if it doesn't sync with an existing or older tool that was widely used, it becomes suffocating for us. It happened to me when my client asked me to configure TortoiseSVN on Windows 7 and use a Pageant key to access the project through VPN. I spent almost 4 days finding the solution. It was such a mess: I Googled a lot but came up with nothing useful. Finally, after 4 days, I found that the issue was not with PuTTY's Pageant. The problem was that on Windows 7 and other recent versions of Windows, TortoiseSVN is not able to communicate with Pageant's “Plink.exe”. So if you load the key with Pageant and try to access the SVN repository over VPN, it throws irrelevant errors.

I tried lots of tricks to solve the puzzle. Believe me, the solution was so simple that I was shocked. If you are using Windows 7 or any later version of Windows, you do not need Pageant installed on your machine at all. You can configure and load your key directly through TortoiseSVN using its command-line parameters.

Here are the steps:

1. Open Tortoise SVN settings.


2. Then go to “Network” in the left menu


3. In the SSH block, fill in the SSH client details as follows:

“C:\Program Files\TortoiseSVN\bin\TortoisePlink.exe” -i path-to-your-pageant-key.ppk -l username -pw password

After completing the above 3 steps, you are ready to access your SVN repository over VPN with your Pageant key.

For any questions, please contact me @Raghav.

Powering LINQ and joins over OData

Joins over OData

  • Join the Categories and Products tables/categories on the combination of CategoryID and ProductID.
  • Get the collection/records of the Categories and Products tables/categories based on a condition.
  • Get only a selected field/column value based on the join condition.
  • Get a selected field/column value of the Products table/category based on the join condition.
  • Get only the plain string value (with no XML tags) of a selected field/column based on the join condition.
  • Get the collection/records linked with the Products table/category.
  • The same as the query above, but starting from the Categories table/category and getting the collection/records associated with the Products table.
  • Get selected fields/entities of all the collection/records of the Products table/category associated with the Categories table/category.
  • Get selected fields/entities of the first collection/record of the Products table/category associated with the Categories and Order_Details tables/categories.
  • Get a selected field/column value of the Products table/category based on the join plus a search condition in which OrderID = 10285.
  • Get a selected field/column value of the Products table/category based on the join plus a search condition in which the price is greater than 200.

OData formats:

  • Get the collection/records of the Products table/category in Atom format (the default format).
  • Get the collection/records of the Products table/category in JSON format.

For more information please click here

With all this information, I hope you now have a good idea of OData. In another corner of your mind, you might be scratching your head, thinking that all these queries are written only through the URL (HTTP context) and wondering how to use them in .NET code-behind with LINQ.

We can also make the HTTP context call from code-behind, but of course we can use our OData web service with LINQ in code-behind instead.

Here we go with the process. Add a web page to your web project. Add the web service reference you created earlier to your project. Then write the following code in your web page's code-behind file:

var serviceURI = new Uri("http://www.yourserviceurl.com/WcfDataService1.svc");
var context = new NorthwindService.NORTHWNDEntities(serviceURI);

//Get all records/collection of the Products table/category
var query = (from el in context.Products select el).ToList();

//Get the first record/collection of the Products table/category
var query = (from el in context.Products select el).FirstOrDefault();

//Get the first record of the Products table/category with ProductID = 1, joining the Category, Supplier and Order_Details tables/categories
var query = (from el in context.Products.Expand("Category,Supplier,Order_Details") where el.ProductID.Equals(1) select el).FirstOrDefault();

//Search the result collection of the last query by OrderID
var searchres = (from el in query.Order_Details where el.OrderID.Equals(10285) select el).FirstOrDefault();

Once you get the required collection with the join, you can search and perform other LINQ operations. Please feel free to contact me @Raghav for any queries.

Basic queries of OData

Some basic queries to play around with OData:

  • Get all the collection/records of the Products category.
  • Get a single entity/record of Products with the ProductID value 1.
  • Get only the ProductName field of the Products entity/record with the ProductID value 1.
  • Get the count of the collection/records of the Products category.
  • Get all the collection/records of the Products category, ordered by the Rating field/column ascending.
  • Get all the collection/records of the Products category, ordered by the Rating field/column descending.
  • Pass the value ‘car’ to the function ‘ProductsByType’, which accepts a single string parameter.
  • Get the top 2 collection/records of the Products category.
  • Skip the first 5 records and get the remaining collection/records of the Products category.

Serving OData

OData is an open data web protocol for querying and updating data. Its main advantage is reduced bandwidth: we can query or update only as much data as we want.

We all know that more than 50% of internet users access the internet through wireless devices nowadays, and expectations for speed and performance are high when using any web app or web site. So OData is the right technology to adopt, and now is the right time.

OData is very powerful and efficient for accessing data quickly, and performance is the key. It works over the HTTP context, and querying the data is easy and flexible. OData says, “I give everything, you take how much you want” – in a single hit.

So a vendor can expose any amount of data over the web service, but the customer can query and get only as much data as they need. This reduces bandwidth and avoids downloading unnecessary data.

Let us setup a web service to work with Odata:

  • Start Visual Studio 2010 and create a new ASP.NET web project.
  • Add a new ADO.NET Entity Data Model (.edmx) item and name it “Northwind”.
  • Select “Generate from database”, select SQL Server, choose the connection and add the selected tables to the .edmx file.
  • Add a new WCF Data Service file and add the following code:
using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;
using System.Web;

namespace Test
{
    // Pass the entity container class generated from your .edmx file
    // as the type parameter of DataService, and set the entity access rules here.
    public class WcfDataService1 : DataService<NORTHWNDEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
            // Examples:
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}
  • Run the web service. Add the service reference to the Web project.

Now the application is ready with the OData web service.

Run the service with .NET or host it in IIS, and your service is ready to be queried through the HTTP context.