Welcome to weblogs.com.pk

AVL Tree

In the last project I needed an AVL Tree inspired data structure. The base class library doesn't expose anything that could be extended, so I set up a Test Project to establish some building blocks. The AVL Tree is a self-balancing binary search tree, the first of its kind. The basic idea is that after adding a node into the Binary Search Tree, the parent nodes are checked recursively up to the root to see whether the left and right sub-trees have similar heights; if not, the required rotation is done to balance them. The height of a node is the number of edges on the longest path from that node down to a leaf. Here is the first test case for an AVL Tree that we need to satisfy; using Visual Studio's auto class/method generation we end up with such an AVL Tree class.

testclass

Given that an AVL Tree is a specialized Binary Search Tree, and a Binary Search Tree is a specialized Binary Tree, let's set up this class hierarchy and make these classes Generics aware.

classes-1

Now that we have signatures of the required classes, let's start adding some code into them. First of all, for BinaryTreeNode&lt;T&gt;, let's add a Parent property that will be helpful later. Let's also add a Height property that tells the number of generations that exist as children and grandchildren of that specific node. It can be easily coded using recursion.

binarytreenode
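The recursion in the screenshot can be sketched in a few lines; here is a minimal Python sketch of the idea (the actual implementation is the C# in the screenshot; the names here are assumptions):

```python
class BinaryTreeNode:
    """Python sketch of the BinaryTreeNode<T> idea; names are assumed."""
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent  # the Parent property described above
        self.left = None
        self.right = None

    @property
    def height(self):
        # Generations of descendants below this node; a leaf has height 0
        # and an empty subtree counts as -1 so the math works out.
        left = self.left.height if self.left else -1
        right = self.right.height if self.right else -1
        return 1 + max(left, right)
```

A leaf reports height 0; a node whose deepest descendant is a grandchild reports 2.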

For the BinaryTree&lt;T&gt; abstract class, we can add BreadthFirst and DepthFirst methods for traversing the binary tree. I am using C#'s Action&lt;T&gt; delegate so we can reuse them in the same class for Clear() and in inherited classes.

binarytree

For BinarySearchTree&lt;T&gt;, we need to override Contains and Remove from our abstract BinaryTree&lt;T&gt; class. Also note that we have constrained "T" to implement IComparable so that we can add items to our Binary Search Tree (BST) according to the BST algorithm. The Add(T) method is kept virtual so we can override it later in a subclass, AvlTree. This method returns the newly added node. The rest is implementation detail that one can learn from Data Structure books or Wikipedia; I am also sharing the whole project through GitHub so you can look it up there as well.

For the AvlTree, BreadthFirst() now just calls BreadthFirst from the BinaryTree&lt;T&gt; base class with a delegate that populates a List and returns it as an array. It's not a production-ready approach as it duplicates the data; we can / should implement the Iterator Pattern instead. For Add, we simply call the base class Add that inserts the data into the tree and returns the node. The parents of the newly added node then need to be checked and balanced up to the root node for the AVL Tree to work as expected; we will do that next.

bst

For the AvlTree, we need a balanceFactor(Node) that tells us whether the given node is "left heavy" or "right heavy"; it can do this easily using the "Height" property that we have already established. If balanceFactor() returns &gt;1 then the node is left heavy, and if &lt;-1 then it is right heavy, and we need to "balance" it using rotateLeft, rotateRight, rotateLeftRight or rotateRightLeft. If the balance factor is -1, 0 or 1, then the difference in height between left and right is at most one generation and we can ignore it. These are AVL tree technical specifications that one can read about on Wikipedia etc.

avl-1
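The balance factor is just the difference of the two child heights; a Python sketch of the idea (function and class names are assumptions, the real code is the C# in the screenshot):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def height(node):
    # Height of a possibly-empty subtree; an empty subtree counts as -1.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    # > 1 means left heavy, < -1 means right heavy; -1, 0 or 1 is acceptable.
    return height(node.left) - height(node.right)
```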

  • Single rotations (rotateLeft and rotateRight) are done to the given node.
  • Double rotations are done to the child node and then the given node; for rotateLeftRight, we rotateLeft the left child first and then rotateRight the given node; and for rotateRightLeft we rotateRight the right child and then rotateLeft the given node.

For a single rotation, we select the pivot node and then swap the parents and children according to the rotation type. Double rotations can be implemented by calling the single rotation methods accordingly.

avl-2
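The rotations can be sketched in Python like this (a sketch of the idea only, not the C# in the screenshot; names are assumptions, and the caller is expected to re-link the returned pivot into the old parent's child slot):

```python
class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.left = None
        self.right = None

def rotate_right(node):
    # The left child becomes the pivot and takes the node's place.
    pivot = node.left
    node.left = pivot.right
    if pivot.right is not None:
        pivot.right.parent = node
    pivot.right = node
    pivot.parent = node.parent
    node.parent = pivot
    return pivot  # caller re-links this into the old parent's left/right slot

def rotate_left(node):
    # Mirror image of rotate_right.
    pivot = node.right
    node.right = pivot.left
    if pivot.left is not None:
        pivot.left.parent = node
    pivot.left = node
    pivot.parent = node.parent
    node.parent = pivot
    return pivot

def rotate_left_right(node):
    # Double rotation: rotateLeft the left child, then rotateRight the node.
    node.left = rotate_left(node.left)
    return rotate_right(node)
```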

  • The parent of the node being rotated still has a child link to it that becomes stale after the rotation. We are not fixing this link in the rotation methods; instead it is left as the responsibility of the caller, who "knows" whether to change the left or right child of the parent and can refer to the parent node easily before calling these rotation methods

Now for the balance() we need to call balanceFactor() to check if the node is left or right heavy.

  • If it's left heavy and the balanceFactor of its left node is also left heavy, then we simply rotateRight; else we need to rotateLeftRight
  • If it's right heavy and the balanceFactor of its right node is left heavy, then we rotateRightLeft; else we simply rotateLeft

We also need to set the appropriate child of the parent after the rotation, and we need to keep balancing parent nodes recursively until we reach the root node. The root node also needs balancing if required.

avl-3
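Putting the pieces together, the upward rebalancing described above can be sketched in Python (self-contained sketch of the logic; all names are assumptions, and the stale parent link noted earlier is fixed inside the loop):

```python
class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.left = None
        self.right = None

def height(node):
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    return height(node.left) - height(node.right)

def rotate_right(node):
    pivot = node.left
    node.left = pivot.right
    if pivot.right is not None:
        pivot.right.parent = node
    pivot.right = node
    pivot.parent = node.parent
    node.parent = pivot
    return pivot

def rotate_left(node):
    pivot = node.right
    node.right = pivot.left
    if pivot.left is not None:
        pivot.left.parent = node
    pivot.left = node
    pivot.parent = node.parent
    node.parent = pivot
    return pivot

def rebalance_upward(root, node):
    """Walk from node up to the root, rotating wherever |balance| > 1.
    Returns the (possibly new) root."""
    while node is not None:
        parent = node.parent
        was_left = parent is not None and parent.left is node
        bf = balance_factor(node)
        new_sub = node
        if bf > 1:                                # left heavy
            if balance_factor(node.left) < 0:     # left-right case
                node.left = rotate_left(node.left)
            new_sub = rotate_right(node)
        elif bf < -1:                             # right heavy
            if balance_factor(node.right) > 0:    # right-left case
                node.right = rotate_right(node.right)
            new_sub = rotate_left(node)
        if new_sub is not node:
            # fix the stale child link of the old parent
            if parent is None:
                root = new_sub
            elif was_left:
                parent.left = new_sub
            else:
                parent.right = new_sub
        node = parent
    return root
```

Inserting 1, 2, 3 in ascending order into a naive BST and then rebalancing from the new node promotes 2 to the root.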

As you can see, using modern languages and their framework libraries, we can implement such classic data structures and build upon them toward more exciting data structures. The project is available at https://github.com/khurram-aziz/HelloDataStructures

Posted by khurram | 0 Comments
Filed under:

MVC5: Minimal Provider for ASP.NET Identity

ASP.NET once came with Membership and Role Providers that we used and abused in the past; then came the Simple Membership Provider with Razor and WebMatrix that exposed a simpler API but still used the same providers behind the scenes. Finally, with Visual Studio 2013 and ASP.NET 4.5.1, ASP.NET Identity was introduced as a modern replacement for these old providers. ASP.NET Identity has a modern API and exposes pluggability at different levels. It uses Entity Framework, and if you want to store your users in a database of your choice, you just need to replace the SQL Entity Framework provider that gets configured out of the box with the provider for your database. You can also go deeper and implement the required interfaces with your own custom classes that don't use Entity Framework, and in this post we will do exactly that: we will explore what minimal classes we need to get things going. We have a pet MVC project that always becomes the test bed for anything MVC related. It has already gone through MVC3 and MVC4 upgrades, and it was time to replace its Membership and Role providers with ASP.NET Identity and upgrade it to MVC5.

For the MVC5 upgrade, I would recommend creating a new separate project, as the new template comes with Bootstrap goodness, and then simply copying over your models, views and controllers. We used this opportunity to replace the old ASPX views with newer Razor-based CSHTML views, cleaning up some old mess.

If you have created a new MVC5 project, you will need to delete ApplicationUser and ApplicationDbContext from the Models namespace (if you intend not to use the Code First Entity Framework that gets configured by default out of the box). You also need to delete the ApplicationUserManager class from App_Start\IdentityConfig.cs. For our implementation, we need to implement IUser&lt;T&gt; and IUserStore&lt;T&gt; and set up UserManager and SignInManager classes. This is very well documented in Overview of Custom Storage Providers for ASP.NET Identity. The minimal required code will look something like this

Once these files are in place, we will need to use their Create() methods in App_Start\Startup.Auth.cs for the app.CreatePerOwinContext calls instead of the original User and SignIn Managers. We will also need to rename the Manager classes in Controllers\AccountController and Controllers\ManageController

Email Confirmation

If you want to have the Email Confirmation option, you should read Account Confirmation and Password Recovery with ASP.NET Identity (C#) where it's documented. For this to work, your store needs to implement IUserEmailStore&lt;T&gt; and IUserTokenProvider&lt;T, U&gt;. You will also need a helper class implementing IIdentityMessageService for Email and SMS, and add this code into UserManager's static Create() method

manager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<YourAppUser> { MessageFormat = "Your security code is {0}" });
manager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<YourAppUser> { Subject = "Security Code", BodyFormat = "Your security code is {0}" });
manager.EmailService = new YourEmailService();
manager.SmsService = new YourSmsService();
var dataProtectionProvider = options.DataProtectionProvider;
if (dataProtectionProvider != null)
{
    manager.UserTokenProvider = userStore; // the userStore instance that's implementing IUserTokenProvider
}

You will also need to change code in AccountController accordingly for the Register and Login scenarios, add the required new views, and change the existing Register and Profile views for Email / SMS

  • ASP.NET Identity was open sourced and the code is available at https://aspnetidentity.codeplex.com; being open source, you can always look at the code and fix your provider accordingly
  • Having your own UserManager and UserStore, you have a choice: either override UserManager's virtual methods and hook them directly to your additional code in the UserStore, or implement the required interfaces. For instance, when creating a user, the UserManager base class demands that the Store also implement IUserPasswordStore&lt;T&gt;; if you want to avoid this, you can simply override the Create() methods of the base UserManager and call your UserStore methods directly. Similarly, you can avoid Token Providers and override the Email/SMS methods. The point is, you have the choice to either override the required UserManager methods or implement the required interfaces. Having the code from CodePlex helps to peek into what's happening in the base classes
  • Don't confuse it with ASP.NET Identity Core for ASP.NET Core, which is available at https://github.com/aspnet/identity
Posted by khurram | 0 Comments
Filed under:

Dotnet Core :: PostgreSQL

postgresql

Dotnet Core Series

Given that Dotnet Core can run on Linux, in this series we have been exploring different aspects of having a Microservice-based, containerized Dotnet Core application running on Linux in Docker Containers. SQL Server has been the de facto database for .NET applications; there even exists SQL Server for Linux (in Public Preview form at the time of this post), and there is even an official SQL Server for Linux image for the Docker Engine that we can use and connect our existing beloved SQL tools to; but it needs 3.25GB of memory and it's an evaluation version.

db-sqlserver-docker

db-pg-pgadmin

PostgreSQL is an ACID-compliant, transactional, object-relational database available for free on Windows, Linux and Mac. It's not as popular as MySQL, but it provides several indexing functions, asynchronous commit, an optimizer, and synchronous and asynchronous replication that make it a technically more solid choice. Given that it's available for Windows, we can install it on our Windows development machines along with pgAdmin, the SQL Management Studio-like client tool.

Entity Framework Core

Entity Framework Core is a cross-platform data access technology for Dotnet Core. It's not EF7 or EF6.x compatible; it's developed from scratch and supports many database engines through Database Providers. Npgsql is an excellent .NET data provider for PostgreSQL (see their GitHub repositories) and it supports EF Core. All you need to do is install the Npgsql.EntityFrameworkCore.PostgreSQL NuGet package using the Dotnet Core CLI (dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL). It will bring along the EF Core and Npgsql libraries into your project.

We can now write our entity classes and a DbContext class. For Npgsql, in the OnConfiguring override, we will use UseNpgsql instead of UseSqlServer with the DbContextOptionsBuilder, passing in the required PostgreSQL connection string. Here's one entity class and context file I made for testing!

We can use the Context class with all the LINQ goodness; similar to SQL Server; for instance here’s the controller class

And here’s displaying the products in the View

  • If the entities are in a different namespace, either import the namespace in _ViewImports.cshtml in the Views folder, or add the namespace in a particular View with a using at the top: @using ECommMvc.Models;

Entity Framework Core Command Line Tools

The EF Core .NET Command Line Tools extend the Dotnet CLI and add ef commands to dotnet. We need to add the Microsoft.EntityFrameworkCore.Tools.Dotnet and Microsoft.EntityFrameworkCore.Design NuGet packages using dotnet add package, dotnet restore them, and then make sure the project file has a PackageReference for Design and a DotNetCliToolReference for the Tools.Dotnet package; you should then end up having the dotnet ef commands in the project
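The relevant project file entries would look roughly like this sketch (the version numbers are assumptions; use whatever dotnet add package pulled in):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.1" />
</ItemGroup>
<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />
</ItemGroup>
```

After a dotnet restore, running dotnet ef should list the available commands.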

db-eftools

Using the ef commands, we can add a Migration (dotnet ef migrations add Migration-Name), remove it, update the database (dotnet ef database update) and more. Once we have the Migrations in place, we can continue to evolve our Entities and Database accordingly.

Seeding

Using the Migrations, we can seed our database as well; we can create a Migration named Seed and add the required seeding code in the migration's CS file

When deploying into Docker Containers, we often need a "sidekick" container that "seeds" the cache or database (for details see Dockerizing PHP + MySQL Application Part 2), as when a container is started we get a clean slate. Given that the Migrations code becomes part of the MVC project, and that in .NET Core there is a Program.cs entry point where Kestrel / MVC is initialized, we can add more code there as well. We can use the Context and Migrations already in place to update the database (which will initialize and seed it)

Docker

Now with the database work in place and the Docker building techniques shown in previous posts (Redis Clients -- ASP.NET Core and Jenkins), we can have a v2 Compose file or a v3 Compose file (for Docker Swarm) and deploy our .NET Core MVC application, which uses Redis for caching and PostgreSQL as the database, into Docker Containers running on Linux node(s)

Project is available at https://github.com/khurram-aziz/HelloDotnetCore

Jenkins

jenkins-logo

Docker Swarm Series

Dotnet Core Series

Jenkins is an open source "automation server" developed in Java. It's more than a typical build system / tool; its additional features and plugins can help automate the non-human parts of the software development process. It's a server-based system that needs a servlet container such as Tomcat. There are a number of plugins available for integration with different version control systems and databases, for setting up automated tests using the framework of your choice (including MSTest and NUnit), and for doing a lot more during the build than compiling the code. If Java and Servlet Containers are not your "thing", running Jenkins in Docker provides enough of a black box around them that getting started with it couldn't be simpler. There is an official image on the Docker Hub and we just need to map its two exposed ports: one for its web interface and the other that its additional agents use to connect to the server. All that's needed to run an instance is docker run -p 8080:8080 -p 50000:50000 jenkins; we can optionally map the container's /var/jenkins_home to some local folder as well. To learn more options that can be set using environment variables, like JVM options, visit https://hub.docker.com/_/jenkins

We can use Jenkins to build Dotnet Core projects; all we need is to install the Dotnet Core SDK on the system running Jenkins. For the Container, we can extend the official image and write a Dockerfile to install the required things; but first we need to check which Linux distribution the jenkins image is based on, and for that, do this

jenkins-image-linux

Knowing that it's Debian 8 and that it runs things under the jenkins user, let's make a Dockerfile for our custom Jenkins docker image

  • Note that we used information from https://www.microsoft.com/net/core#linuxdebian on how to install the latest Dotnet Core SDK on Debian
  • Note that we added the git plugin as per the guideline at https://github.com/jenkinsci/docker where this official image is maintained; they have provided the install-plugins.sh script with which we can install plugins while making the image, so we will not have to reinstall these plugins when running Jenkins
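Such a Dockerfile would look roughly like the sketch below (the dotnet package name, repository URL and SDK version are assumptions; follow the Microsoft instructions for Debian linked above for the current ones):

```dockerfile
FROM jenkins
# switch to root to install OS-level packages
USER root
RUN apt-get update \
 && apt-get install -y curl libunwind8 gettext apt-transport-https \
 && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
 && echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-jessie-prod jessie main" \
      > /etc/apt/sources.list.d/dotnetdev.list \
 && apt-get update \
 && apt-get install -y dotnet-sdk-2.0.0
# drop back to the jenkins user the official image runs under
USER jenkins
# bake the git plugin in using the script shipped with the official image
RUN /usr/local/bin/install-plugins.sh git
```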

If we build this Dockerfile and tag the image as dotjenkins, we can run it using docker run --rm -p 8080:8080 -p 50000:50000 dotjenkins; running on a console is required so we can get the initial admin password from the info that gets emitted; it's required by the first-run setup wizard when we open http://localhost:8080

  • Visit Dockerfile if you need a heads up on how to build a Docker image from such a file
  • During the setup, you can choose Custom Plugins and then select none, and we will have a minimalist Jenkins ready to build a Dotnet Core project from a Git repository

You can set up a test project, giving https://github.com/khurram-aziz/HelloDocker (which has a Dotnet Core MVC application) as the Git path

jenkins-git

jenkins-build-shell

jenkins-build-shell-step

We can then use the Execute shell build step type and enter the familiar dotnet restore and dotnet build commands to build the Dotnet Core application with Jenkins

Once the test job is set up, we can build it; it will download the code from Git and build it as per the steps we have defined. We can see the Console output from the web interface as well!

jenkins-build-consoleoutput

If you are following the Dotnet Core Series: in the last post, Docker Registry, we also needed to build the mvcapp Docker Container after publishing the Mvc application. In that post, the developer had to have Docker for Windows installed, as building the Docker image requires the Docker Daemon, and we also needed access to the Docker Registry so we could push the Mvc application as a Docker Container from where the Swarm Nodes pick it up when the System Administrator deploys the "Stack" on the Docker Swarm. We can solve this using Jenkins; it can not only automate this manual work, but we will also neither need Docker for Windows on the developer machine nor need to give the developer access to the Docker Registry.

To build Docker Container Image; from Jenkins running in Docker Container; we first need to technically assess how it can be done.

  • We need to study the Dockerfile of Jenkins at its GitHub repo, as it creates the jenkins user with uid 1000 and runs things under this user.
  • We will need the Docker CLI tools in the container
  • We will need access to the Docker Daemon in the container so that it can build Docker images using that daemon

Let's make a separate Dockerfile first to assess this technically, without the Jenkins overhead.

If we build and tag the above Dockerfile as dockcli; we can run it as docker run -v /var/run/docker.sock:/var/run/docker.sock -it dockcli

  • Note that we exposed the /var/run/docker.sock file as a VOLUME in the Dockerfile so that we can map it when running the container, passing in the docker.sock file of the Docker Daemon where it's "launching". This way we don't need to run a Docker Daemon in the Container and can "reuse" the Docker Daemon where our image runs. There exists a "Docker in Docker" image with which we can run the Docker Daemon inside the Container, but we don't need it here
  • We created a jenkins user similar to how it's made and configured in the official Jenkins image
  • We need to add jenkins into sudoers (and for that we also need to install sudo first) so we can access docker.sock using sudo; otherwise it will not have enough permissions
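A sketch of such a throwaway Dockerfile, reflecting the three bullets above (the Docker CLI download URL and version are assumptions):

```dockerfile
FROM debian:8
RUN apt-get update && apt-get install -y curl sudo \
 && curl -fsSL https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz \
      -o /tmp/docker.tgz \
 && tar xzf /tmp/docker.tgz --strip-components=1 -C /usr/local/bin docker/docker \
 && rm /tmp/docker.tgz
# mirror the official Jenkins image: jenkins user with uid 1000, added to sudoers
RUN useradd -d /home/jenkins -u 1000 -m -s /bin/bash jenkins \
 && echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# the host's docker.sock gets mapped here at run time
VOLUME /var/run/docker.sock
USER jenkins
CMD ["/bin/bash"]
```

Inside the running container, sudo docker ps should then talk to the host's daemon.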

With these arrangements, we can access the Docker Daemon from within the Docker Container!

jenkins-sudo

Now let's add this work back into our dotjenkins Dockerfile so we can create the Docker image after the Dotnet Core build in Jenkins. Here's the final Dockerfile

Let's run our customized Jenkins image using docker run --name jenk --rm -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dotjenkins

We can now add Docker Image Creation step in our project build steps

jenkins-build-dockerstep

  • Note the usage of Jenkins variable as the tag for the image we are creating

If we do a Jenkins build, we will see the Docker step output in the Console Output, and shortly afterwards we will have the required image in the Docker Daemon where the Jenkins Container is running

jenkins-build-dockeroutput

We can use docker-compose to define all the related switches needed to run the docker image so it becomes a simple docker-compose up command. Along the same lines, we can now add an additional step to push the generated Docker image to the Docker Registry!

Posted by khurram | 0 Comments
Filed under: , , ,

Docker Registry

Docker Swarm Series

Dotnet Core Series

In the “Redis Clients :: ASP.NET Core” post, we made a minimalist ASP.NET Core MVC application that uses Redis; we made a v2 Docker Compose file that we used to deploy our application on Docker as two Containers, one running the Redis Server and the other running the ASP.NET Core application. Even though we used Redis and Distributed Caching, our application was still deployed on a single host. Given it's based on a micro-service, Docker-friendly architecture, we can write a v3 Compose file and, using the Stack Deploy command, deploy it on a multi-host Docker Swarm setup. Here's one such v3 compose file

  • If you are following the Dotnet Core Series, you might have noticed that the build and restart options are gone from the v2 compose file that we made in the previous post. This is because when deploying to Docker Swarm, those two are not supported

Before doing a stack deploy, given that we need a "custom" image, we need to ensure that this image exists with the Docker Daemon; for this we obviously need to first publish our Dotnet Core app, and then we can build the Docker image from the Dockerfile against the Docker Daemon.

registry-docker-build

  • I have used docker-compose to build the image above; I have the Docker Client and Docker Machine in the Linux Subsystem configured against the Swarm, and given that the Linux Subsystem is still "beta", due to https://github.com/Microsoft/BashOnWindows/issues/1123 we cannot build the image against a remote Docker Daemon (a Swarm Node in this case); however, docker-compose works fine, and we can use it to build the image from the Linux Subsystem

Once our custom image is made; we can do Stack Deploy using the v3 Compose file we made earlier

The mvcapp container needs to be deployed globally, but it gets deployed only on the one node where we built it:

registry-stack-deploy

We need to make that mvcapp image available to the remaining participating nodes as well; we can either build the image on each Docker Daemon in the Swarm, or we can set up a "Docker Registry" where we make our image available, configure the Swarm nodes with this Registry, update the v3 compose file and redeploy the stack, and all the swarm nodes will get the image from the "Registry". There exists an open source Docker Registry as an official Docker Container image, and running it is just a docker run away

registry-docker-run

There is an excellent TL;DR at https://docs.docker.com/registry; you basically tag the existing custom image prefixing the registry URL (in our case we will tag mvcapp as 192.168.10.14:5000/mvcapp) and then push it; the Docker Daemon will upload the image to the Registry, similar to Docker Hub! But it will not work yet

registry-https-error

The problem is that this Docker Registry is not "secured", and the Docker Daemon by default only likes secured remote registries; we now need to add our registry as a "trusted" unsecured remote registry in the docker daemon configuration. How depends on what setup you are using; for instance, I am using RancherOS to run the Docker Engines, and to add the remote registries I have to do this:

Once the registry is added, we can push our registry-tagged image, from where other nodes can download and consume it when required

registry-insecure-registry

All we need to do now is update our v3 Compose File for the Docker Stack Deploy to use the our-registry/mvcapp image instead of just mvcapp, so that all nodes can get it from the "our-registry" address

  • Notice mvcapp image is changed accordingly
  • Notice the newly added healthcheck section; it is there because if the Redis Container is not available, the MVC app will break. This can happen if the node running the Redis container goes down; the Swarm will reschedule the Redis Container somewhere else, and due to this healthcheck, the MVC containers will also get rescheduled, picking up the new Redis IP. This was done because the MVC app talks to Redis using the IP and not the host name; details are in the Redis Clients -- ASP.NET Core post.
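The changed service entry would look roughly like this sketch (registry address from above; the healthcheck command and timings are assumptions):

```yaml
mvcapp:
  image: 192.168.10.14:5000/mvcapp
  ports:
    - "80:80"
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost/"]
    interval: 30s
    timeout: 5s
    retries: 3
  deploy:
    mode: global
```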

Let's redeploy the stack, and we will have our mvcapp containers running on all the participating nodes, each node downloading the required image from the Registry automatically

registry-stack-mvcapp

Now, to try a failure recovery, let's turn off the swarm3 node; Docker Swarm should be able to recover automatically

registry-swarm3-failure

Check https://docs.docker.com/registry/recipes for other use cases of Docker Registry

If we are using Docker for Windows, we can add our insecure Registry into its Daemon; once added, we can build a docker container image tagged directly for the registry and push it from the development machine/environment, from where it can be picked up by the Swarm Nodes accordingly, giving us a seamless deployment-to-cluster experience!

registry-docker-for-windows

Posted by khurram | 0 Comments
Filed under: , ,

Redis Clients :: ASP.NET Core

Redis Series

Dotnet Core Series

In the “Redis Clients” post, we explored what it takes to use Redis and how it can be helpful in our applications. We also utilized Redis datatypes and abstractions and saw how they can be used for the page / visitor counters required in web applications. In the “Dotnet Core” post we saw that, given the open source version of Dotnet now works on Linux, we can deploy Dotnet Core applications into Docker Containers. We even made a simple ASP.NET Core application, connected it to Redis, and saw that we can use Redis as the Distributed Cache backend using the Microsoft.Extensions.Caching.Redis.Core NuGet package that uses the StackExchange.Redis client library.

The MVC framework has also been open sourced, and we can set up an ASP.NET Core MVC project using the Dotnet Core CLI: dotnet new mvc. We can have Middlewares that handle the requests and responses. You can learn more about Middlewares at https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware, and there is a StartTimeHeader middleware at https://docs.microsoft.com/en-us/aspnet/core/performance/caching/distributed that uses the Distributed Caching feature of ASP.NET Core with Redis as the backend. Along the same lines, for the Visitor Counter using Redis that we did in “Redis Clients”, we can have a RedisVisitorMiddleware that does the counting, so this cross-cutting concern is handled separately in its own class that can be glued into the MVC application in Startup.cs

If we are using a Micro Services Architecture and the final application gets deployed in Containers, the Redis Server will be running on a remote node; unfortunately we can't use the host name for the Redis Cache Server, as it will throw a PlatformNotSupported exception. We will have to resolve the host name and give its IP to the Redis configuration in Startup.cs; we can have a static RedisConnection property in Startup.cs, something like this

The hit counter code can be moved to the relevant Controller / Action, which can use the Startup.RedisConnection static property to access Redis. We can use the same property to set up the Redis Distributed Caching provider

We can use Distributed Caching for page caching. If there is any page that takes considerable time to "generate" and the content of that page is not dynamic, or changes rarely, we can use Redis to store the generated page and reuse it from there. ASP.NET Core MVC has the concept of Views and Partial Views; we can use Distributed Caching to cache them. For testing, let's set up a Partial View in ~/Views/Shared/, and for proof of concept we can use Thread.Sleep to emulate the time it takes to generate.

Tag Helpers in ASP.NET Core MVC enable server-side code to participate in creating and rendering HTML elements in Razor files. You can learn about them at https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/intro, and there is a DistributedCacheTagHelper (code @ https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.TagHelpers/DistributedCacheTagHelper.cs) that we can use for Partial View caching.

Now to deploy our application into Docker: if we are doing it on a single host with two containers, we will have the following v2 Compose file that we can use with the Docker Compose tool

The Dockerfile for our MVC application will be something like this:

  • Before building the Docker image using the above Dockerfile, we need to have the published application in the "output" folder by running dotnet restore (if required) and dotnet publish -c Release -o output
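The Dockerfile referenced above would be along these lines (a sketch; the base image tag and entry assembly name are assumptions):

```dockerfile
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
# the published output from: dotnet publish -c Release -o output
COPY output .
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "mvcapp.dll"]
```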
Posted by khurram | 0 Comments
Filed under: , ,

Installing Visual Studio 2017

You can create offline installation files for Visual Studio 2017; the steps are documented at https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio The good thing this time is that we can run vs_sku.exe --layout subsequently to update the installation files.

If you are seeing that installing or modifying Visual Studio 2017 from the offline folder is still downloading content from the internet, there can be two reasons: either the certificates are not installed as documented in the URL above, or you installed it in "online" mode the first time. In the latter case, uninstall it first and then reinstall from the offline folder after importing the certificates. Once done this way, modifying the installation later (adding more workloads etc.) will continue to use the contents from the offline folder

The tip is to delete everything in the %TEMP% folder before installation, and then do the --layout folder thing (--lang en-US); it will update the offline folder and create a log file in the %TEMP% folder; check the log file to ensure everything went smoothly. Install the JDK and Android SDK / NDK yourself if you want them in folders of your choice (in case you are using Android Studio and don't want multiple JDKs/SDKs) and unselect the JDK and Android SDK etc. during installation (workload details)

Secondly, if you are planning to uninstall 2015, uninstall the Dotnet Core Preview tools first and then uninstall Visual Studio 2015

Posted by khurram | 0 Comments
Filed under:

Redis Clients

Redis Series

For the Redis clients, imagine we have an e-commerce platform with a Python-based component that does some analysis to decide which product or campaign / deal to show on the main page; these results are posted / updated into the Redis Server, from where the ASP.NET Core application picks them up.

For the Python client, we need pip, a PyPA-recommended tool for installing Python packages; on Ubuntu this can be done using the following command

sudo apt-get update && sudo apt-get install python-dev python-pip

Once pip is available; give this command

sudo pip install redis

Now, to connect to the Redis Server, we will have code like this

  • You can see that we are simply adding the product ids and offer ids into the cache; the web interface will retrieve the data from the database and render it accordingly. If we want, the web application can cache the rendered HTML as well and reuse it to save database trips for a performance improvement

We can install Python on a Windows development machine and use Visual Studio Code; there is a nice Python extension available at https://marketplace.visualstudio.com/items?itemName=donjayamanne.python that provides linting, IntelliSense and what not

redisclient-python

For .NET Core; we can use the https://www.nuget.org/packages/StackExchange.Redis Nuget package that's .NETStandard compatible. This is the well known Redis client library from the Stack Overflow guys and its code is available at https://github.com/StackExchange/StackExchange.Redis

If our application is ASP.NET Core; we can instead use the https://www.nuget.org/packages/Microsoft.Extensions.Caching.Redis.Core package; a distributed cache implementation of Microsoft.Extensions.Caching.Distributed.IDistributedCache using Redis; it's an interface for the distributed cache mechanism baked into ASP.NET Core to improve the performance and scalability of applications. This package uses the Strong Name version of StackExchange.Redis; to add it into the ASP.NET Core application; use the dotnet add package Microsoft.Extensions.Caching.Redis.Core command

For our simple proof of concept; give the command dotnet new web in your project folder

It creates a very minimalist Hello World web application; to use the Static Files; give dotnet add package Microsoft.AspNetCore.StaticFiles command. For using Session; give dotnet add package Microsoft.AspNetCore.Session; and finally give dotnet add package Microsoft.Extensions.Caching.Redis.Core command. Restore the packages using dotnet restore and change the Startup.cs to this

redisclient-dotnet

  • As per the StackExchange.Redis recommendation; we can reuse the ConnectionMultiplexer instance; therefore it's defined as a static variable
  • It's initialized in the static constructor with ConfigurationOptions through which we define the Redis server and its password information
  • In ConfigureServices(IServiceCollection); the Redis caching extension is added; the Redis server and password information is again specified while adding it
  • The Session service is also added in ConfigureServices according to its requirements
  • In the Configure(IApplicationBuilder, IHostingEnvironment, ILoggerFactory) method that gets called by the .NET Core runtime for the HTTP request pipeline; we are attaching the StaticFiles and Session extensions as per ASP.NET Core's app.Use* conventions

We are using Redis in the code above in our ASP.NET Core application for two purposes; a unique visitor counter and a page hit counter. The page hit counter is the simple INCR Redis command. For unique visitors we are using a cookie and the Sets support of Redis; SADD for adding the visitor and SCARD to determine the length of the set. The StackExchange.Redis APIs are StringIncrement, SetAdd and SetLength respectively. Using Sets; we don't have to worry about duplicates as Redis automatically takes care of it; we can continue to add the same id into the set and it will not allow duplicates.
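The counter logic boils down to three Redis commands: INCR for page hits, SADD to record a visitor id and SCARD for the unique count. A minimal sketch of the same idea against the redis-py interface (the key names are my assumptions; the post's actual code is C# with StackExchange.Redis):

```python
def record_hit(r, visitor_id):
    """One page view: bump the hit counter, record the visitor in a set.
    Returns (total_hits, unique_visitors)."""
    hits = r.incr("page:hits")             # INCR: atomic counter
    r.sadd("page:visitors", visitor_id)    # SADD: duplicates are ignored
    return hits, r.scard("page:visitors")  # SCARD: size of the set
```

A repeat visitor bumps the hit count but not the unique count; the set takes care of that for us.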

Dotnet Core

Dotnet Core Series

This post is a quick lap around Dotnet Core; especially on Linux and Containers. Dotnet Core is an open source .NET implementation and is also available for different flavors of Linux. We know how cool .NET is and how great it is now to use C# to develop and deploy applications on the black screen OSes :) As long as you are using fairly recent Linux distributions you are able to install Dotnet Core. Installation information and downloads are available at https://www.microsoft.com/net/core; there are currently a 1.0 LTS version and a 1.1 CURRENT version available. At the time of writing; 1.0.4 and 1.1.1 are the most recent versions available at https://www.microsoft.com/net/download/linux

If you want to create, build and package code; you need the SDK; if you already have a compiled application available to run; the RUNTIME alone is sufficient. The SDK installs the Runtime as well. They have released v1 of the SDK recently; and if you installed the SDK earlier; you might have the "preview" SDK; you can check using the dotnet binary with --version

dotnet-preview

They initially opted for a JSON based project file (similar to NPM); which got created when dotnet new was used to create the Hello World Dotnet Core console application

dotnet-preview-structure

  • The lock file gets created on dotnet restore

We do dotnet restore; which restores the dependencies defined in project.json from Nuget; an online library distribution service. Then we can do dotnet build and dotnet run to build and run our application. If we want a minimalist Hello World web application in Dotnet Core; we can use the Microsoft.AspNetCore.Server.Kestrel package from Nuget; an HTTP server based on libuv; we define this package dependency in project.json and then change the Program.cs file to this

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main()
    {
        new WebHostBuilder()
                .UseKestrel()
                .UseUrls("http://127.0.0.1:3000")
                .Configure(app => app.Run(context => context.Response.WriteAsync("Hello World!")))
                .Build()
                .Run();
    }
}

Finding and adding a Nuget package reference in the JSON file was manual work; there is a Visual Studio Code extension, which we used in the Zookeeper post, to find / add Nuget package dependencies into project.json like Kestrel above if we are using Visual Studio Code; which is also an open source editor. None of this is required with the brand new non preview (now released) SDK.

The SDK version is 1.x; and there are two runtimes; 1.0 LTS and 1.1 CURRENT; the Dotnet Core 1.1 SDK is the 1.x SDK :)

dotnet-install-sdk

    dotnet-new

Installing the SDK installs the Runtimes as well

With the released SDK; when we do dotnet new to create a project; it now creates a CSPROJ file that's XML and is very clean / minimal, similar to the JSON one; given you didn't specify F# as the language

dotnet-structure

  • The dotnet binary can now create different types of projects; including web; so we don't have to do anything special for the web project
  • We also don't need any special Visual Studio Code extension to add Nuget references; we can use the dotnet binary to add Nuget packages using dotnet add package Nuget-Package-Name; this means that even if we are not using any editor; we can do this easily using the SDK only; very useful in Linux server environments where there is usually no GUI!

Now let's switch gears and try to build a simple Docker container for a Dotnet Core web application. We will use dotnet new web similar to the screenshot; this web application will be connecting to the Redis Server and for this; we need a .NET library that's also compatible with Dotnet Core; StackExchange.Redis is one such library; to add this package into our Dotnet Core web project; we will issue

dotnet add package StackExchange.Redis

  • Don't forget to restore the packages after adding them

We will not do anything further for this post; we will simply publish the Release build of our application into the "output" folder using dotnet publish -c Release -o output

And then create a Dockerfile with following content

FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY output .
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "Redis.dll"]
  • Before building the container; the application should be published into the output folder that will get included into the /app directory in the container
  • Dotnet Core uses the environment variable ASPNETCORE_URLS to set up Kestrel accordingly; here we are running our web application at http://+:80; meaning at port 80; the default HTTP port; on all the IPs of the container
  • We need to expose the container's port 80 as well

We can build this Docker image using docker build -t some-tag .

Once the image is created; we can run it using docker run and mapping its port 80; something like

docker run --rm -p 5000:80 some-tag

And we can access our Hello World Dotnet Core web application at http://localhost:5000

dotnet-docker

Posted by khurram | 0 Comments

Redis

Redis Series

redis-logo

REmote DIctionary Server; or Redis; is an open source data structure server; it's a key-value database and can be used as a NoSQL database, cache and message broker.

redis-cli

Its distinguishing feature is that we can store data structures such as strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs and geospatial indexes. It also offers functions around the data structures, for instance range queries for sorted sets and radius queries for geospatial indexes. It has replication support built in and we can have a master-slave based, tree like Redis cluster. It has a Least Recently Used based eviction / cache expiration mechanism along with transaction support. There is Lua scripting support as well. Redis typically has all the data in memory but it also persists it to disk for durability; it journals its activity so in case of any failure only a few seconds of data get lost; it can write data to file in the background using the journal and we can also snapshot the in memory data.

We can get Windows optimized Redis releases from https://github.com/MSOpenTech/redis/releases that are maintained by https://msopentech.com; a Microsoft subsidiary; they had the AppFabric product that had a Redis like caching component; it seems they don't have any plans to continue it any further given they are now an open source friendly company and instead are offering Windows optimized Redis through GitHub; and it's great!

I simply ran the installer and it did everything the "Windows way"; the binaries are in Program Files; and there is also a Redis service defined; we can configure it as desired and run it from an administrative command prompt. Similar to ZooKeeper; it comes with redis-cli that we can use to connect to the local Redis server. There is a plethora of commands that we can play with using the CLI. Some of them are shown in the screenshot.

We can use the keys command to query the keys and del to delete them. The SET command has an nx parameter; if specified; it will only set the key value if the key is not already defined. There is also an xx parameter; if specified; it will only set the key value if the key already exists. These are useful when multiple clients want to set the same key. SET also has ex and px parameters with which we define the expiration time of the key in seconds or milliseconds respectively
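To make the nx / ex combination concrete; a common pattern built on it is a simple expiring lock. A hedged redis-py sketch (the key name and function are my own illustration, not from the post):

```python
def acquire_lock(r, key, owner, ttl_seconds=10):
    # SET key value NX EX ttl: succeeds only if the key does not exist yet,
    # so when several clients race, exactly one wins; the expiry releases
    # the lock automatically if its holder dies without cleaning up.
    return bool(r.set(key, owner, nx=True, ex=ttl_seconds))
```

redis-py returns None when the NX condition fails, so the bool() gives a clean won / lost answer.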

  • GETSET is an interesting command; it sets the new value and retrieves the old value in a single go; useful for resetting counters!

redis-keys-set  redis-keys-expiration

  • We can give multiple key names while deleting

The keys and values can be a maximum of 512 MB in size; keys can be any binary data; a string, an integer or even file content; but it's recommended to use appropriately sized keys of the form type:value:something-else; for example user:khurram etc

Using MGET and MSET we can retrieve and set multiple keys; useful for reducing latencies. We can use EXPIRE existing-key seconds to set the cache expiry of an existing key; and use TTL key to know the remaining time before cache expiry.
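The batching and expiry commands compose naturally; a small redis-py sketch (key names and the helper itself are my assumptions for illustration):

```python
def cache_many(r, mapping, ttl_seconds):
    # MSET writes several keys in one round trip; EXPIRE then attaches a
    # time-to-live to each key (EXPIRE works on keys that already exist).
    r.mset(mapping)
    for key in mapping:
        r.expire(key, ttl_seconds)
    return r.mget(*mapping)  # and one round trip to read them all back
```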

For lists; there are LPUSH (left / head) and RPUSH (right / tail); using which we can push values against a single key (a list). We can use LPUSH/RPUSH key val1 val2 … to push multiple values at once. LRANGE is used to retrieve the values and takes start and end index parameters. We can give -1 as the parameter for the last index, -2 for the second last; so to retrieve the whole list we will use LRANGE list 0 -1

  • Lists can be used for Producer / Consumer scenarios; RPOP exists especially for Consumers; when the list is empty; it returns null
  • There is also LPOP but it is not used in Producer / Consumer; the Producer should use LPUSH and the Consumer RPOP
  • BRPOP and BLPOP are blocking versions of RPOP and LPOP; instead of polling; consumers can use BRPOP

LTRIM is similar to LRANGE; but it trims away the values outside the given range; we can use it when pushing data and the list will keep only the defined number of elements
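The list commands above map directly onto the producer / consumer and capped-log patterns; a hedged redis-py sketch (queue and key names are my own illustration):

```python
def produce(r, queue, item):
    r.lpush(queue, item)          # producer pushes at the head

def consume(r, queue):
    return r.rpop(queue)          # consumer pops from the tail; None if empty
                                  # (BRPOP is the blocking alternative)

def log_capped(r, key, entry, maxlen=100):
    r.lpush(key, entry)
    r.ltrim(key, 0, maxlen - 1)   # keep only the newest maxlen entries
```

LPUSH on one side plus RPOP on the other gives first-in, first-out ordering across any number of producers and consumers.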

Given that Redis is a network server; we should secure it; we should use iptables / a firewall so only clients from known locations can connect to it; there's also a security section in the conf file; on Windows; the conf file is passed as a parameter to the service binary and it's in Program Files\Redis; we can open it up and enable authentication

  • Additionally you can run the service under a specific login; granting it just the required permissions: to run as a service, to listen on the network, and the needed NTFS permissions. It's always a good idea to run services (and especially network services) under a login with just enough permissions. Take a look at http://antirez.com/news/96 on how one can compromise Redis in a few seconds

redis-conf

Redis will not let clients read / write data unless they authenticate themselves first

redis-auth

You can see that similar to ZooKeeper; Redis can be used as a foundational service in modern distributed applications. Similar to ZooKeeper; the application workers connect to the Redis server over the network and there are libraries for many languages; from C/C++ to Java/C#, Perl to Python, ActionScript to NodeJS and Go. In the next post; we will build some client applications

Docker Swarm

ZooKeeper Series

Docker Swarm Series

swarm

Docker Swarm is native clustering for Docker. Before Docker Engine 1.12 (June 2016); it was a separate thing that turned a pool of Docker hosts into a single virtual Docker host; since 1.12 it is included in and part of Docker Engine and is now called "Swarm Mode". We can use the Docker CLI to create a swarm, deploy application services to a swarm and manage its behavior.

architecture

For Swarm; we need to have multiple Docker Engines running on nodes; one or more nodes act as Managers and then we add Workers into the Swarm. The quickest way to try it is to use docker-machine and set up multiple Docker Engines across different hosts or virtual machines. I have three VMs running Docker Engine v1.13. For these VMs; I used RancherOS; the tiny Linux distro ideal for running Docker Engine. I added them into my environment using Docker-Machine. Please note; RancherOS and Rancher are separate products; RancherOS is the Linux distro and Rancher is a Swarm like container management product. Rancher also supports using Swarm as its underlying clustering engine along with Cattle (its own), Kubernetes and Mesos. But for this post we will remain committed to using Swarm with the Docker CLI and tools!

docker-machine

To create a Swarm; we choose one machine as the Manager, set the Docker environment for that machine; and run docker swarm init; it will initialize the Swarm environment on that machine, make it a manager and output the docker CLI command that we can run on the other machines to add them as workers

docker-swarm

Unlike Rancher; there is no GUI or web based interface to manage Docker Swarm, but there are third party tools available; mostly as containers that we can run on the underlying Docker Engines. Docker Swarm Visualizer is a popular one; Portainer is another!

visualizer

compose

In Docker 1.13 (January 2017); they added docker-compose support to the docker stack deploy command so that services can be deployed using a docker-compose.yml file directly. They also introduced the compose file v3 format that has new options like deploy; related to deployment and running of services in a Swarm; and labels; to specify labels for the services

Let's make a v3 compose file for our ZooKeeper; sadly for such an application; where one node needs to know about the others; and every node needs its own configuration; we have to define a service for each node. Once we have the compose file; we deploy "the stack" using docker stack deploy --compose-file yml-file NameOfStack; we defined the deployment constraints; and the manager will deploy the zoo1 service (single node) on the swarm1 node, zoo2 on swarm2 and zoo3 on swarm3 automatically
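The compose file in the screenshot follows roughly this shape; a hedged sketch: the swarm1–swarm3 hostnames match the post, while the ZOO_MY_ID / ZOO_SERVERS environment variables are the ones documented for the official zookeeper image; only one service is spelled out, the other two repeat the pattern.

```yaml
version: "3"
services:
  zoo1:
    image: zookeeper
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    deploy:
      placement:
        constraints:
          - node.hostname == swarm1
  # zoo2 and zoo3 are defined the same way, with their own ZOO_MY_ID
  # and a constraint pinning them to swarm2 / swarm3 respectively
```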

zookeeper

We can list the services using docker service ls

docker-service

Hopefully such workarounds will not be required once Swarm and Compose mature further!

Posted by khurram | 0 Comments

Higher-level Constructs with ZooKeeper

ZooKeeper Series

ZooKeeper provides a solid foundation to implement the higher order constructs required for "Clustering Applications" / Distributed Systems. In this post; we will implement a "Barrier" that Distributed Systems use to block processing of a set of nodes until a condition is met; at which time all the nodes are allowed to proceed. Barriers are implemented in ZooKeeper by designating a barrier node. The barrier is in place if the barrier node exists. For modern scalable applications; often we don't know how many nodes are participating; this is something that is decided at runtime and is expected to be changeable when required. If there is more load on the application; we should have an option to add more nodes to meet the demand. In such scenarios; it's important to know at runtime how many nodes are participating so each node waits at the barrier accordingly; an option of node enrollment is also required so we allow some time for nodes to come online / participate and then calculate how many nodes will participate in the barrier!

To keep things interesting; we will be implementing a proof of concept in Dotnet Core; and given Dotnet Core applications can run on Linux; we will use Docker to run ZooKeeper as well as our Core CLR nodes. For the sake of simplicity; we will use a single instance of ZooKeeper and run all the nodes as separate Docker containers on a single host machine. We can deploy the containers across multiple machines using Rancher, Swarm or Kubernetes etc. You can check out the Rancher—First Application post on how to deploy a Docker application across multiple hosts. We will use the Barrier example from the Visual Studio 2010 Training Kit and re-implement it accordingly.

Here’s the modified DriveToBoston() function that’s using the Barrier helper class that we will write. We will pass the ZooKeeper connection string to it in the constructor and it will have EnrollIntoBarrier, GetParticipantCount, ReachBarrier and WaitAtBarrier functionalities. Given containers take varying times to come online based on the host resources and what’s in the container; we are simulating this as “Decision Time”; this is also important given ZooKeeper takes a couple of seconds to start accepting connections; similar to any other database. “Roll Time” simulates the wait time that allows participating nodes to join; “Time To Gas Station” is from the Training Kit example and simulates the different times nodes will take to reach the barrier; where they will sync and proceed.

static void DriveToBoston(string connectionString, string name, TimeSpan timeToLeaveHome, TimeSpan timeToRoll, TimeSpan timeToGasStation)
{
    try
    {
        Console.WriteLine("[{0}] Leaving house", name);
        Thread.Sleep(timeToLeaveHome); //let zookeeper come online and decision time
        var barrier = new Barrier(connectionString);
        bool enrolled = barrier.EnrollIntoBarrierAsync(timeToRoll, name).Result;
        if (!enrolled)
        {
            Console.WriteLine("[{0}] Couldnt join the caravan!", name);
            return;
        }
        Console.WriteLine("[{0}] Going to Boston!", name);
        int participants = barrier.GetParticipantCountAsync().Result;
        Console.WriteLine("[{0}] Caravan has {1} cars!", name, participants);
        // Perform some work
        Thread.Sleep(timeToGasStation);
        object o = barrier.ReachBarrierAsync(name).Result;
        Console.WriteLine("[{0}] Arrived at Gas Station", name);
        // Need to sync here
        barrier.WaitAtBarrier(participants);
        // Perform some more work
        Console.WriteLine("[{0}] Leaving for Boston", name);
    }
    catch (Exception ex)
    {
        Console.WriteLine("[{0}] Caravan was cancelled! Going home!", name);
        Console.WriteLine(ex);
    }
}

For the Barrier helper; we will be using /dotnetcoreapp as the application root node in ZooKeeper; and /dotnetcoreapp/barrier as the barrier node. Our barrier node has two children; participants and reached. All these nodes are persistent. Each node will create a child node under /dotnetcoreapp/barrier/participants when enrolling itself; after the roll time; we will count the number of children to determine the number of participants. When processing starts; each node will report itself on reaching the barrier by creating a node under /dotnetcoreapp/barrier/reached. When the children under the reached node become equal to the number of participants; the nodes get in sync and proceed with any further processing.

We will use the “watcher” functionality that ZooKeeper provides to watch the reached node; the watch will get triggered whenever there is a change; i.e. a new child is created.

One of the most interesting things about ZooKeeper is that even though it uses asynchronous notifications, we can use it to build synchronous consistency primitives. We will use this for the roll call situation. After the roll call time out; a node will create /dotnetcoreapp/barrier/rollcomplete; each node first checks its existence; if it’s not there; it enrolls itself; and then checks again; if rollcomplete now exists; it compares the Czxid of the two nodes; the creation ZooKeeper transaction id; as ZooKeeper stamps all the nodes sequentially; if the rollcomplete id is less than the node’s enrollment node id; it means the node failed to get itself enrolled before the roll completed.
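The czxid comparison at the heart of that check is tiny but worth spelling out; a language-neutral sketch of the decision (the helper name is mine; the post's actual implementation is C#):

```python
def enrollment_succeeded(enroll_czxid, roll_complete_czxid):
    # ZooKeeper stamps every created znode with a monotonically increasing
    # creation transaction id (czxid); the participant made the roll call
    # only if its enrollment node was created before the rollcomplete node.
    return enroll_czxid < roll_complete_czxid
```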

Here’s the code of our Barrier helper class

  • The ZooKeeper exists() API returns the Stat structure that we can use to determine the number of children easily
  • All the ZooKeeper nodes created by participants for enrollment and for reporting themselves on reaching the barrier are ephemeral; they get deleted automatically when the node disconnects from the ZooKeeper server

The code of the Dotnet Core project is available at https://github.com/khurram-aziz/HelloDocker/tree/master/Zoo; you can clone the code and then run dotnet restore to restore the used packages including the ZooKeeper client library. We will run three Docker containers of this app providing different parameters to simulate the Training Kit example. To build the container image of our application; first run dotnet publish -c Release -o out to build and publish the release configuration of our app into the “out” folder; and then use this Dockerfile to build the container image

zoo-dotnet-publish
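The Dockerfile in the screenshot follows the same pattern as the one in the Dotnet Core post; a sketch, where the published folder matches the -o out above and the assembly name Zoo.dll is my assumption based on the project folder:

```dockerfile
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY out .
ENTRYPOINT ["dotnet", "Zoo.dll"]
```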

We can use docker-compose to run ZooKeeper and the instances of our Dotnet Core app for the simulation. Here’s the YML file that simulates the three nodes as per the Training Kit’s original example

  • I have specified the dockerfile for the Dotnet Core application; we can use docker-compose up --build and it will build and run the containers with a single command
  • Also note that the dennis node’s parameters are such that it will not be able to join the “caravan” given it’s taking too much time to decide and by that time; enrollment gets completed

If everything goes smoothly; you will see an output similar to this


mac_1      | [Mac] Leaving house
dennis_1   | [Dennis] Leaving house
charlie_1  | [Charlie] Leaving house

mac_1      | [Mac] Going to Boston!
mac_1      | [Mac] Caravan has 2 cars!
charlie_1  | [Charlie] Going to Boston!
charlie_1  | [Charlie] Caravan has 2 cars!

dennis_1   | [Dennis] Couldnt join the caravan!zoo_dennis_1 exited with code 0

charlie_1  | [Charlie] Arrived at Gas Station
mac_1      | [Mac] Arrived at Gas Station
charlie_1  | [Charlie] Leaving for Boston
mac_1      | [Mac] Leaving for Boston

zoo_mac_1 exited with code 0
zoo_charlie_1 exited with code 0

zoo-docker-compose

We can create other higher order constructs; in ZooKeeper these are called ZooKeeper Recipes; their pseudo code is discussed at https://zookeeper.apache.org/doc/trunk/recipes.html; some of these recipes are available in the official Java client library and given the Apache ZooKeeper .NET async client library we are using is based on the Java client library; they have also made available the ZooKeeperNetEx.Recipes nuget package that we can use. Leader Election and Queue are available in there.

Happy Containering / Clustering / Distributing your app!

Posted by khurram | 0 Comments

ZooKeeper

ZooKeeper Series

Apache ZooKeeper is an open-source server which enables highly reliable distributed coordination. It helps us by providing a distributed synchronization service that can be used for maintaining configuration information, naming, group services and other similar aspects of distributed applications. ZooKeeper itself is distributed and highly reliable; instead of reinventing the wheel; we can use this foundational service in our distributed applications. It was a subproject of Apache Hadoop but is now a top level project. In a nutshell; it’s a distributed hierarchical key-value store; in a distributed environment we typically set up multiple ZooKeeper servers to which clients; the nodes running our distributed application; connect and retrieve or set information.


Picture from cwiki.apache.org

It stores the information in “znodes” and provides a namespace that is much like a file system. Znode data is typically less than a megabyte; and we can also have ACLs at znode level. If there are multiple ZooKeeper servers; they need to know about each other; they maintain a quorum and write requests are forwarded to other servers and go through consensus before a response is generated. It also maintains update order; updates are identified by a unique zxid; the transaction id; and we can have “watches” that the ZooKeeper server triggers accordingly.

It’s a Java application that can run on Linux, Solaris or FreeBSD operating systems. The simplest way to have it running in a lab, development or production environment is no doubt using Docker! With two commands; we can have a server up and running and a connected client!

Docker

  • zookeeper is an official Docker image and we can run two instances of it; one as a server and another as a client; zkCli.sh is its CLI client that we can use
  • The image exposes ports 2181, 2888 and 3888; the ZooKeeper client, follower and election ports; and we can use standard Docker linking
  • Visit the image page to learn how we can further configure it using environment variables and volume information for where it stores the data and log

zkCli

We can use zkCli.sh / the ZooKeeper CLI to create / read znodes.

  • We can create three types of znodes using the create PATH command; simple, ephemeral (with the -e flag) and sequential (with the -s flag)
  • An ephemeral node automatically gets deleted when the session expires; we can disconnect and reconnect and use the ls command to verify this

zkCli-helpzkCli-ls

  • The ephemeral node might continue to appear for a while; the node gets deleted after the connection times out and by default that’s 30 seconds

Similarly; we can update data in an existing node using set. We can check the stat of a znode using stat to know its zxid and time values. There are two transaction and timestamp value pairs; cZxid and ctime for creation and mZxid and mtime for modification.

delete is used to delete a node that has no children; to delete a znode recursively we use rmr

We can also set ACLs on znodes; restricting writes or reads to certain IPs; there’s also plugin based authentication support and we can define ACLs accordingly. There’s quota support as well

To connect from our application; there exist language bindings and client libraries. The C, Java, Perl and Python language bindings are officially supported. https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZKClientBindings has the list of client bindings.

https://github.com/shayhatsor/zookeeper is a .NET async client also available as a Nuget package at https://www.nuget.org/packages/ZooKeeperNetEx; the good thing about it is that it’s not only .NET async friendly (Task based APIs) but also compatible with .NET Core

https://marketplace.visualstudio.com/items?itemName=ksubedi.net-core-project-manager is the .NET Core Project Manager (Nuget) extension that allows us to search, install and remove Nuget packages right from Visual Studio Code; which we know is a free, open source, runs everywhere, lightweight code editor with debugging and git support. Here’s the .NET Core client code using this Nuget

  • We need to map ZooKeeper’s 2181 port to the Docker host so we can access it at a known IP address; run ZooKeeper using docker run --rm -p 2181:2181 zookeeper
  • Notice we are specifying the connection time out when connecting to ZooKeeper and we also need a watcher; a null watcher implementation is at https://github.com/khurram-aziz/HelloDocker/blob/master/Zoo/ZooHelper.cs

dotnet-core

We can now use docker-compose to easily run more instances of ZooKeeper in our lab / development environment. Here’s a docker-compose YAML file to run a three instance ZooKeeper cluster

  • Notice that we have mapped the containers’ 2181 ports to the Docker host’s 2181, 2182 and 2183 ports; we can now use 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183 as the connection string and our client will connect to one ZooKeeper instance of the cluster automatically; or we can specify just the one or two nodes of our choice
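The YAML file referenced above is along these lines; a sketch: the service names are mine, the port mappings match the bullet above, and ZOO_MY_ID / ZOO_SERVERS are the variables documented for the official zookeeper image:

```yaml
version: "2"
services:
  zoo1:
    image: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
```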

We can stop one instance of the ZooKeeper server; write a value using the available nodes, then bring back the stopped node and check if the updated value gets replicated! We can also try writing after stopping two instances. Will it allow writes if the quorum is not complete?

Posted by khurram | 0 Comments

Johnny-Five

In the Firmata post we established that we can have a Python or Node.js application running on a computer; that can be a Single Board Computer like a Raspberry Pi running Raspbian or Windows 10 IoT; and can control and get sensor data from microcontrollers like Arduino or ESP8266

j5-firmata

There exist many IoT frameworks for Javascript / Node.js that allow us to write our programs and Johnny-Five is one such popular framework. Using such a framework we not only have access to many Javascript / Node.js libraries but also get a platform on which we can write our program quickly in a friendlier environment. Johnny-Five supports Arduino as well as many other boards through IO Plugins; a Firmata compatible interface; to communicate with non Arduino hardware. Johnny-Five can be used on richer boards like Raspberry Pi and Galileo as well as with microcontrollers through IO Plugins. It also includes DSL libraries for working with many different actuators and sensors that make writing IoT code more fun.

Johnny-Five needs Node.js v4.2.1 (at the time of this writing); on Raspberry Pi you can get a more recent version using NodeSource

curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
sudo apt install nodejs

And then use npm install johnny-five to get the bits. For ESP8266; we will create a firmata object similar to the Firmata post and then on its ready event; hand it over as io to Johnny-Five. We can then continue from there; subscribe to its ready event and write the program. The Blink code will be something like this:

There are also DSL libraries for other sensors and actuators; for instance we can use the Thermometer library for a temperature sensor; https://github.com/rwaldron/johnny-five/wiki/Thermometer has more details

  • Note that we are using an ESP8266 that is 3.3V powered and we are not using the built in controller; instead giving our own temperature calculation lambda

j5-thingspeak

  • The 3.3V line of the ESP8266 is noisy; we can get better results by using a digital sensor
  • ESP01; the widely used ESP8266 board; sadly doesn’t expose its analog pin; so we have to use a digital sensor

We can use Johnny-Five on Windows 10 IoT as well; in fact the Node.js Tools for Visual Studio UWP Extension supports Johnny-Five and Cylon by providing project templates. For details on the Node.js Tools for Visual Studio UWP Extension; check out the Blink with Windows 10 IoT post

j5-ntvs-uwp

For development in Visual Studio and deployment on Windows IoT; we need to watch out for certain gotchas. After the NPM package restore; we need to update the packages; this will apply Windows IoT specific patches to the node modules. Another important thing to watch out for is the MAX_PATH issue; when building; the node modules are zipped up and made part of the package; and in doing this; it can hit this issue; use npm dedupe to flatten the node modules; we might have to go deep and dedupe inner modules as well; depending on the errors it generates. For instance I faced issues in the node modules under firmata; I simply navigated there; deduped it and then deduped in the root again. We have to restart Visual Studio so it picks up the changed things. At the Windows 10 IoT side; we also need to enable long paths by issuing reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /t REG_DWORD /d 1 /v LongPathsEnabled /f at the Device Portal and then restart the device to pick up the change.

j5-uwp-notes

You can optionally specify the --use-logger debug option and it will store the console output in a log file that you can then review

j5-windows10io-log

Posted by khurram | 0 Comments

Firmata

Microcontrollers are great; but in today's ever changing and more demanding world, we often need an ability to upgrade the software: fixing bugs if any, adding and enhancing the functionality. There are well established mechanisms for upgrading software on computers (PCs, tablets and phones), but updating firmware on microcontrollers can become challenging. Connected smart appliances with larger memories can be updated with Over The Air (OTA) updates, but that needs resources like connectivity, enough memory/storage and developing + testing an appropriate update mechanism in the firmware, which are not available in all appliances. In addition, our IoT software might be complex or depend on other resources like cloud connectivity, databases or file access that can't be done directly "on" the microcontroller. Further, the solution might comprise many appliances with a need to coordinate across them, taking input from one appliance and doing something on another. MQTT can be used for data passing, but sometimes we need the ability to treat the appliance as a "dumb gadget" connected to "smarter software" running on a computer. This is where firmata comes in; it's a protocol for communicating with microcontrollers from software on a computer. The protocol is implemented in firmware on the microcontroller; Arduino and Spark.IO are supported officially, and there exist client libraries for different languages and platforms, from Python, Perl and Ruby to Java, .NET and Javascript/Node and many others (including mobile/tablet platforms)

The Firmata Library for Arduino comes preinstalled in the IDE and we can use it with supported boards, Arduino or ESP8266.

firmata-sketch

  • For Arduino, we use StandardFirmata; for ESP8266, we use StandardFirmataWifi

For the “Blink” example, I am showing you Python and Node.js examples; we can program in any language / platform and can find the required library

firmata-python

firmata-nodejs

  • For Node.js, I am using the firmata package
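The Node.js version of Blink with the firmata package alone looks roughly like this; a hedged reconstruction in which the serial port name and the LED pin (13, the onboard LED of an Arduino Uno) are assumptions. It needs `npm install firmata`.

```javascript
// Blink with the firmata package (sketch; port name and pin are assumptions)
const LED_PIN = 13; // assumption: onboard LED of an Arduino Uno

function blinkWithFirmata(portName) {
  const Firmata = require("firmata");
  const board = new Firmata(portName); // e.g. "COM3" or "/dev/ttyUSB0"
  board.on("ready", () => {
    board.pinMode(LED_PIN, board.MODES.OUTPUT);
    let on = false;
    setInterval(() => {
      on = !on; // toggle the LED every 500 ms over the firmata protocol
      board.digitalWrite(LED_PIN, on ? board.HIGH : board.LOW);
    }, 500);
  });
}

// runs only when a port is supplied, e.g. FIRMATA_PORT=COM3 node blink.js
if (process.env.FIRMATA_PORT) blinkWithFirmata(process.env.FIRMATA_PORT);
```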

If you are wondering: the code is not compiled and sent to the microcontroller; instead, the microcontroller, once flashed with the firmata firmware, acts as a slave, always listening to what's being sent over the wire in the firmata protocol; you can watch the RX activity clearly on the Arduino board

firmata-rx-activity

In the case of Arduino, the clear drawback is that we need serial connectivity between the board and the computer running the program. We can either use an Ethernet or Wifi shield, or an ESP8266, where this serial cable connection is avoided and the program can connect to the microcontroller through Wifi. Simply use StandardFirmataWifi, edit WifiConfig.h according to your Wifi settings, optionally uncomment SERIAL_DEBUG to view the debug logs in the Serial Monitor, and you are good to go

firmata-esp8266

I connected an Analog Temperature Sensor to Analog Pin 0 and wrote this little Node.js program that retrieves the temperature value and sends it to ThingSpeak for further analysis / reporting.

I find this firmata approach intuitive and easier, given we can change / manage the program easily on the computer instead of reflashing the microcontroller firmware; ESP8266-based IoT appliances especially work great this way. The appliance can continue to be installed where it is, say a Sonoff switch, and you can change / update the program on the computer, possibly even remotely; say on a Raspberry Pi by sshing into it

ESP8266 comes in all sizes; check this video for inspiration. We can deploy sensors connected to an ESP-01 (which has two GPIOs) and solder/glue/pack the things into normal wall-socket USB chargers (with the required voltage regulation)

ESP8266 ESP-01
Posted by khurram | 0 Comments