
Dotnet Core

Dotnet Core Series

This post is a quick lap around Dotnet Core, especially on Linux and in containers. Dotnet Core is an open source .NET implementation that is also available for different flavors of Linux. We know how cool .NET is and how great it is now to use C# to develop and deploy applications on the black screen OSes :) As long as you are using a fairly recent Linux distribution, you can install Dotnet Core. Installation information and downloads are available at https://www.microsoft.com/net/core; there are currently a 1.0 LTS version and a 1.1 CURRENT version. At the time of writing, 1.0.4 and 1.1.1 are the most recent versions available at https://www.microsoft.com/net/download/linux

If you want to create, build and package code, you need the SDK; if you already have a compiled application to run, the runtime alone is sufficient. The SDK installs the runtime as well. They recently released v1 of the SDK; if you installed the SDK earlier, you might have the "preview" SDK; you can check using the dotnet binary with --version
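For instance (the version string here is illustrative; a -preview suffix would indicate you still have the preview SDK):

$ dotnet --version
1.0.1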


They initially opted for a JSON based project file (similar to NPM), which gets created when dotnet new is used to create the Hello World Dotnet Core console application


  • The lock file gets created on dotnet restore

We do dotnet restore, which restores the dependencies defined in project.json from Nuget, an online library distribution service. Then we can do dotnet build and dotnet run to build and run our application. If we want a minimalist Hello World web application in Dotnet Core, we can use the Microsoft.AspNetCore.Server.Kestrel package from Nuget, an HTTP server based on libuv; we define this package dependency in project.json and then change the Program.cs file to this

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
public class Program
{
    public static void Main()
    {
        new WebHostBuilder()
            .UseKestrel() // use the Kestrel HTTP server
            .Configure(a => a.Run(c => c.Response.WriteAsync("Hello World!")))
            .Build()
            .Run();
    }
}
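The corresponding dependency entry in project.json would be along these lines (the version number is illustrative):

{
  "dependencies": {
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.1"
  }
}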

Finding and adding a Nuget package reference in the JSON file was manual work; there is a Visual Studio Code extension, which we used in the Zookeeper post, to find / add Nuget package dependencies into project.json (like Kestrel above) if we are using Visual Studio Code, which is also an open source editor. None of this is required anymore with the brand new non-preview (now released) SDK.

The SDK version is 1.0; there are two runtimes, 1.0 LTS and 1.1 CURRENT; the Dotnet Core 1.1 SDK is the 1.0 SDK :)



Installing the SDK installs the runtimes as well

With the released SDK, when we do dotnet new to create the project, it now creates a CSPROJ file that's XML and is very clean / minimal, similar to the JSON one; given you didn't specify F# as the language


  • The dotnet binary can now create different types of projects, including web, so we don't have to do anything special for the web project
  • We also don't need any special Visual Studio Code extension to add Nuget references; we can use the dotnet binary to add Nuget packages using dotnet add package Nuget-Package-Name; this means that even without an editor we can do this easily using the SDK only; very useful in Linux server environments where there is usually no GUI! The resulting CSPROJ is sketched below
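A web project's CSPROJ with a package reference added this way looks roughly like this (target framework and version are illustrative):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="StackExchange.Redis" Version="1.2.1" />
  </ItemGroup>
</Project>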

Now let's switch gears and try to build a simple Docker container for a Dotnet Core web application. We will use dotnet new web similar to the screenshot; this web application will be connecting to a Redis server, and for this we need a .NET library that's compatible with Dotnet Core; StackExchange.Redis is one such library. To add this package into our Dotnet Core web project, we will issue

dotnet add package StackExchange.Redis

  • Don't forget to restore the packages after adding them

We will not do anything further for this post; we will simply publish the Release build of our application into the "output" folder using dotnet publish -c Release -o output

And then create a Dockerfile with the following content

FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY output .
ENV ASPNETCORE_URLS http://*:80
EXPOSE 80
ENTRYPOINT ["dotnet", "Redis.dll"]
  • Before building the container image, the application should be published into the output folder, which gets copied into the /app directory in the container
  • Dotnet Core reads the ASPNETCORE_URLS environment variable and sets up Kestrel accordingly; here we are running our web application at http://*:80, i.e. port 80, the default HTTP port, on all the IPs of the container
  • We need to expose the container's port 80 as well

We can build this Docker image using docker build -t some-tag .

Once the image is created, we can run it using docker run, mapping its port 80; something like

docker run --rm -p 5000:80 some-tag

And we can access our Hello World Dotnet Core web application at http://localhost:5000


Posted by khurram


Redis Series

REmote DIctionary, or Redis, is an open source data structure server; it's a key-value database and can be used as a NoSQL database, cache and message broker.

Its distinguishing feature is that we can store data structures such as strings, hashes, lists, sets, sorted sets, bitmaps, HyperLogLogs and geospatial indexes. It also offers functions around these data structures, for instance range queries for sorted sets and radius queries for geospatial indexes. It has replication support built in and we can have a master-slave based, tree-like Redis cluster. It has a Least Recently Used based eviction / cache expiration mechanism along with transaction support. There is Lua scripting support as well. Redis typically keeps all the data in memory but it also persists it to disk for durability; it journals its activity, so in case of a failure only a few seconds of data get lost; it can write data to file in the background using the journal and we can also snapshot the in-memory data.

We can get Windows optimized Redis releases from https://github.com/MSOpenTech/redis/releases; they are maintained by https://msopentech.com, a Microsoft subsidiary. They had the AppFabric product that had a Redis-like caching component; it seems they don't have any plans to continue it any further, given they are now an open source friendly company, and instead offer Windows optimized Redis through GitHub; and it's great!

I simply ran the installer and it did everything the "Windows way": the binaries are in Program Files and there is also a Redis service defined; we can configure it as desired and run it from an Administrative command prompt. Similar to ZooKeeper, it comes with redis-cli that we can use to connect to the local Redis server. There are a plethora of commands that we can play with using the CLI. Some of them are shown in the screenshot.

We can use the keys command to query the keys and del to delete them. The SET command has an nx parameter; if specified, it will only set the key value if the key is not already defined. There is also an xx parameter; if specified, it will only set the key value if the key already exists. These are useful when multiple clients want to set the same key. SET also has ex and px parameters to define the expiration time of the key in seconds and milliseconds respectively

  • GETSET is an interesting command; it sets the new value and retrieves the old value in a single go; useful for resetting counters!


  • We can give multiple key names while deleting

Keys and values can be a maximum of 512MB in size; keys can be any binary data: a string, an integer or even file content; but it's recommended to use appropriately sized, descriptive keys with colon-separated segments like type:value:something-else; for example user:khurram

Using MGET and MSET we can retrieve and set multiple keys; useful for reducing latencies. We can use EXPIRE existing-key seconds to set the cache expiry of an existing key, and use TTL key to know the remaining time before expiry.
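An illustrative redis-cli session with these commands (keys and values are made up; the NX set fails because the key already exists):

127.0.0.1:6379> SET counter 100
OK
127.0.0.1:6379> SET counter 200 NX
(nil)
127.0.0.1:6379> GETSET counter 0
"100"
127.0.0.1:6379> MSET user:khurram:city Lahore user:khurram:role admin
OK
127.0.0.1:6379> MGET user:khurram:city user:khurram:role
1) "Lahore"
2) "admin"
127.0.0.1:6379> EXPIRE counter 60
(integer) 1
127.0.0.1:6379> TTL counter
(integer) 60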

For lists, there are LPUSH (Left / Head) and RPUSH (Right / Tail), using which we can push multiple values against a single key (a list). We can use LPUSH/RPUSH key val1 val2 … to push multiple values at once. LRANGE is used to retrieve the values and takes start and end index parameters. We can give -1 as the parameter for the last index, -2 for the second last; so to retrieve the whole list we use LRANGE list 0 -1

  • Lists can be used for Producer / Consumer scenarios; RPOP exists especially for Consumers, and when the list is empty it returns null
  • There is also LPOP, but it is not used in Producer / Consumer; the Producer should use LPUSH and the Consumer RPOP
  • BRPOP and BLPOP are blocking versions of RPOP and LPOP; instead of polling, consumers can use BRPOP

LTRIM takes a range similar to LRANGE, but it trims the list to that range, discarding everything outside it; we can use it when pushing data to keep only the defined number of elements
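Again an illustrative session:

127.0.0.1:6379> RPUSH tasks one two three
(integer) 3
127.0.0.1:6379> LPUSH tasks zero
(integer) 4
127.0.0.1:6379> LRANGE tasks 0 -1
1) "zero"
2) "one"
3) "two"
4) "three"
127.0.0.1:6379> LTRIM tasks 0 2
OK
127.0.0.1:6379> RPOP tasks
"two"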

Given Redis is a network server, we should secure it; we should use iptables / a firewall so only clients from known locations can connect to it. There's also a security section in the conf file; on Windows the conf file is passed as a parameter to the service binary and it's in Program Files\redis; we can open it up and enable authentication

  • Additionally you can run the service under a specific login, giving it just the required permissions: run as a service, listen on the network, and NTFS permissions. It's always a good idea to run services (and especially network services) under a login with just enough permissions. Take a look at http://antirez.com/news/96 for how one can compromise Redis in a few seconds; the relevant conf lines are sketched below
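Enabling authentication comes down to a requirepass line in the SECURITY section of the conf file (choose your own strong password):

# require clients to issue AUTH <password> before processing any other commands
requirepass Some$trongPassword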


Redis will not let clients read/write data unless they authenticate themselves first


You can see that, similar to ZooKeeper, Redis can be used as a foundational service in modern distributed applications. Similar to ZooKeeper, the application workers connect to the Redis server over the network and there are libraries for many languages; from C/C++ to Java/C#, Perl to Python, ActionScript to NodeJS and Go. In the next post, we will build some client applications

Docker Swarm

ZooKeeper Series

Docker Swarm Series

Docker Swarm is native clustering for Docker. Before Docker Engine 1.12 (June 2016) it was a separate thing that turned a pool of Docker hosts into a single virtual Docker host; since 1.12 it is included in Docker Engine and is called "Swarm Mode". We can use the Docker CLI to create a swarm, deploy application services to it and manage its behavior.


For Swarm, we need multiple Docker Engines running on nodes; one or more nodes act as Managers and then we add Workers into the Swarm. The quickest way to try it is to use docker-machine and set up multiple Docker Engines across different hosts or virtual machines. I have three VMs running Docker Engine v1.13. For these VMs I used RancherOS, the tiny Linux distro ideal for running Docker Engine, and added them into my environment using Docker-Machine. Please note: RancherOS and Rancher are separate products; RancherOS is the Linux distro and Rancher is a Swarm-like container management product. Rancher also supports Swarm as its underlying clustering engine, along with Cattle (its own), Kubernetes and Mesos. But for this post we will remain committed to Swarm using the Docker CLI and tools!


To create a Swarm, we choose one machine as the Manager, set the Docker environment for that machine and run docker swarm init; it will initialize the Swarm environment on that machine, make it a manager and output the docker CLI command that we can run on the other machines to add them as workers
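The exchange looks along these lines (the IP and the token are placeholders):

$ docker swarm init --advertise-addr 192.168.99.101
Swarm initialized: current node is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xxxx 192.168.99.101:2377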


Unlike Rancher, there is no GUI or web based interface to manage Docker Swarm, but there are third party tools available, mostly as containers that we can run on the underlying Docker Engines. Docker Swarm Visualizer is a popular one; Portainer is another!


In Docker 1.13 (January 2017) they added docker-compose file support to the docker stack deploy command, so that services can be deployed using a docker-compose.yml file directly. They also introduced the compose file v3 format that has new options like deploy, related to deployment and running of services in a Swarm, and labels, to attach labels to services

Let's make a v3 compose file for our ZooKeeper; sadly, for such an application, where one node needs to know about the others and every node needs its own configuration, we have to define a service for each node. Once we have the compose file, we deploy "the stack" using docker stack deploy --compose-file yml-file NameOfStack; we defined deployment constraints, so the manager will deploy the zoo1 service (single node) on the swarm1 node, zoo2 on swarm2 and zoo3 on swarm3 automatically
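The zoo1 service in such a v3 file would be along these lines, assuming the official zookeeper image and hosts named swarm1/swarm2/swarm3; zoo2 and zoo3 follow the same pattern with their own ids and constraints:

version: "3"
services:
  zoo1:
    image: zookeeper
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    deploy:
      placement:
        constraints:
          - node.hostname == swarm1

And then: docker stack deploy --compose-file zookeeper.yml zoo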


We can list the services using docker service ls


Hopefully such workarounds will not be required once Swarm and Compose mature further!

Posted by khurram

Higher-level Constructs with ZooKeeper

ZooKeeper Series

ZooKeeper provides a solid foundation to implement the higher order constructs required for "Clustering Applications" / Distributed Systems. In this post we will implement a "Barrier", which distributed systems use to block processing on a set of nodes until a condition is met, at which time all the nodes are allowed to proceed. Barriers are implemented in ZooKeeper by designating a barrier node; the barrier is in place if the barrier node exists. For modern scalable applications we often don't know how many nodes are participating; this is decided at runtime and is expected to change when required. If there is more load on the application, we should have the option to add more nodes to meet the demand. In such scenarios it's important to know at runtime how many nodes are participating, so each node waits at the barrier accordingly; an enrollment phase is also needed, so we allow some time for nodes to come online / participate and then calculate how many nodes will take part in the barrier!

To keep things interesting, we will implement a proof of concept in Dotnet Core; and given Dotnet Core applications can run on Linux, we will use Docker to run ZooKeeper as well as our Core CLR nodes. For the sake of simplicity we will use a single instance of ZooKeeper and run all the nodes as separate Docker containers on a single host machine. We could deploy the containers across multiple machines using Rancher, Swarm or Kubernetes etc; you can check out the Rancher—First Application post on how to deploy a Docker application across multiple hosts. We will use the Barrier example from the Visual Studio 2010 Training Kit and re-implement it accordingly.

Here's the modified DriveToBoston() function that's using the Barrier helper class we will write. We pass the ZooKeeper connection string to it in the constructor and it has EnrollIntoBarrier, GetParticipantCount, ReachBarrier and WaitAtBarrier functionality. Given containers take varying times to come online, depending on the host resources and what's in the container, we simulate that as "Decision Time"; this is also important given ZooKeeper takes a couple of seconds to start accepting connections, similar to any other database. "Roll Time" simulates the wait time allowed for participating nodes to join; "Time To Gas Station" is from the Training Kit example and simulates the different times nodes take to reach the barrier, where they sync and proceed.

static void DriveToBoston(string connectionString, string name, TimeSpan timeToLeaveHome, TimeSpan timeToRoll, TimeSpan timeToGasStation)
{
    try
    {
        Console.WriteLine("[{0}] Leaving house", name);
        Thread.Sleep(timeToLeaveHome); //let zookeeper come online and decision time
        var barrier = new Barrier(connectionString);
        bool enrolled = barrier.EnrollIntoBarrierAsync(timeToRoll, name).Result;
        if (!enrolled)
        {
            Console.Write("[{0}] Couldnt join the caravan!", name);
            return;
        }
        Console.WriteLine("[{0}] Going to Boston!", name);
        int participants = barrier.GetParticipantCountAsync().Result;
        Console.WriteLine("[{0}] Caravan has {1} cars!", name, participants);
        Thread.Sleep(timeToGasStation); // Perform some work
        object o = barrier.ReachBarrierAsync(name).Result;
        Console.WriteLine("[{0}] Arrived at Gas Station", name);
        barrier.WaitAtBarrierAsync().Wait(); // Need to sync here
        // Perform some more work
        Console.WriteLine("[{0}] Leaving for Boston", name);
    }
    catch (Exception)
    {
        Console.WriteLine("[{0}] Caravan was cancelled! Going home!", name);
    }
}

For the Barrier helper we use /dotnetcoreapp as the application root node in ZooKeeper and /dotnetcoreapp/barrier as the barrier node. Our barrier node has two children, participants and reached; all these nodes are persistent. Each node creates a child under /dotnetcoreapp/barrier/participants when enrolling itself; after the roll time we count the children to determine the number of participants. When processing starts, each node reports that it has reached the barrier by creating a node under /dotnetcoreapp/barrier/reached. When the number of children under the reached node becomes equal to the number of participants, each node gets the signal, syncs and proceeds with further processing.

We will use the "watcher" functionality that ZooKeeper provides to watch the reached node; the watch gets triggered whenever there is a change, i.e. a new child is created.

One of the most interesting things about ZooKeeper is that even though it uses asynchronous notifications, we can use it to build synchronous consistency primitives. We will use this for the roll call. After the roll call time out, a node creates /dotnetcoreapp/barrier/rollcomplete; each node first checks its existence; if it's not there, it enrolls itself and then checks again; if rollcomplete now exists, it compares the Czxid (the create ZooKeeper transaction id) of the two nodes; as ZooKeeper stamps all nodes sequentially, if the rollcomplete id is less than the node's enrollment node id, the node failed to get itself enrolled before the roll completed.

Here’s the code of our Barrier helper class

  • The ZooKeeper exists() api returns the Stat structure that we can use to determine the number of children easily
  • All the ZooKeeper nodes created by the participating nodes, for enrolling and for reporting that they reached the barrier, are ephemeral; they get deleted automatically when a node disconnects from the ZooKeeper server
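The full helper is in the GitHub repo linked below; a minimal sketch of its core, assuming the ZooKeeperNetEx package's Java-style API (the no-op NullWatcher is like the one in the repo):

using System;
using System.Threading.Tasks;
using org.apache.zookeeper;

class NullWatcher : Watcher
{
    public override Task process(WatchedEvent @event) { return Task.CompletedTask; }
}

public class Barrier
{
    const string Participants = "/dotnetcoreapp/barrier/participants";
    readonly ZooKeeper zk;

    public Barrier(string connectionString)
    {
        // 10000 is the session timeout in milliseconds
        zk = new ZooKeeper(connectionString, 10000, new NullWatcher());
    }

    public async Task<bool> EnrollIntoBarrierAsync(TimeSpan rollTime, string name)
    {
        // enroll by creating an ephemeral child; it goes away if we disconnect
        await zk.createAsync(Participants + "/" + name, new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        await Task.Delay(rollTime); // allow other nodes to join the roll
        // ... then compare our node's Czxid with that of ../rollcomplete to learn
        // whether we enrolled before the roll completed (see the repo)
        return true;
    }

    public async Task<int> GetParticipantCountAsync()
    {
        // exists() returns the Stat structure; no need to fetch the children
        var stat = await zk.existsAsync(Participants, false);
        return stat.getNumChildren();
    }
}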

The code of the Dotnet Core project is available at https://github.com/khurram-aziz/HelloDocker/tree/master/Zoo; you can clone the code and then run dotnet restore to restore the used packages, including the ZooKeeper client library. We will run three Docker containers of this app, providing different parameters to simulate the Training Kit example. To build the container image of our application, first run dotnet publish -c Release -o out to build + publish the release configuration of our app into the "out" folder, and then use this Dockerfile to build the container image
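The Dockerfile is along the same lines as the one from the Dotnet Core post (the dll name is whatever your project produces; Zoo.dll here is an assumption):

FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY out .
ENTRYPOINT ["dotnet", "Zoo.dll"]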


We can use docker-compose to run ZooKeeper and the instances of our Dotnet Core app for the simulation. Here's the YML file that simulates the three nodes as per the Training Kit's original example

  • I have specified the dockerfile for the Dotnet Core application; we can use docker-compose up --build and it will build and run the containers with a single command; a sketch of such a file follows
  • Also note that the dennis node's parameters are such that it will not be able to join the "caravan", given it takes too much time to decide and by that time enrollment gets completed
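The exact timing parameters are in the repo; the shape of the file is roughly this (assuming the app takes the ZooKeeper address and car name as arguments, appended to the Dockerfile's entrypoint):

version: '2'
services:
  zookeeper:
    image: zookeeper
  mac:
    build: .
    command: zookeeper:2181 Mac      # + Mac's timing parameters
  charlie:
    build: .
    command: zookeeper:2181 Charlie  # + Charlie's timing parameters
  dennis:
    build: .
    command: zookeeper:2181 Dennis   # decides too slowly to get enrolled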

If everything goes smoothly; you will see an output similar to this

mac_1      | [Mac] Leaving house
dennis_1   | [Dennis] Leaving house
charlie_1  | [Charlie] Leaving house

mac_1      | [Mac] Going to Boston!
mac_1      | [Mac] Caravan has 2 cars!
charlie_1  | [Charlie] Going to Boston!
charlie_1  | [Charlie] Caravan has 2 cars!

dennis_1   | [Dennis] Couldnt join the caravan!zoo_dennis_1 exited with code 0

charlie_1  | [Charlie] Arrived at Gas Station
mac_1      | [Mac] Arrived at Gas Station
charlie_1  | [Charlie] Leaving for Boston
mac_1      | [Mac] Leaving for Boston

zoo_mac_1 exited with code 0
zoo_charlie_1 exited with code 0


We can create other higher order constructs; in ZooKeeper these are called ZooKeeper Recipes, and their pseudo code is discussed at https://zookeeper.apache.org/doc/trunk/recipes.html. Some of these recipes are available in the official Java client library, and given the Apache ZooKeeper .NET async client library we are using is based on the Java client library, they have also made available the ZooKeeperNetEx.Recipes nuget package that we can use; Leader Election and Queue are available in there.

Happy Containering / Clustering / Distributing your app!

Posted by khurram


ZooKeeper Series

Apache ZooKeeper is an open-source server which enables highly reliable distributed coordination. It helps us by providing a distributed synchronization service that can be used for maintaining configuration information, naming, group services and other similar aspects of distributed applications. It is itself distributed and highly reliable, and instead of reinventing the wheel we can use this foundational service in our distributed applications. It was a subproject of Apache Hadoop but is now a top level project. In a nutshell, it's a distributed hierarchical key-value store; in a distributed environment we typically set up multiple ZooKeeper servers to which clients (nodes running our distributed application) connect to retrieve or set information.

Picture from cwiki.apache.org

It stores the information in "znodes" and provides a namespace that is much like a file system. Znode data is typically less than a megabyte, and we can also have ACLs at the znode level. If there are multiple ZooKeeper servers, they need to know about each other; they maintain a quorum, and write requests are forwarded to the other servers and go through consensus before a response is generated. It also maintains update order; updates are identified by a unique zxid, the transaction id, and we can have "watches" that the ZooKeeper server triggers accordingly.

It's a Java application that can run on Linux, Solaris or FreeBSD. The simplest way to have it running in a lab, development or production environment is, no doubt, Docker! With two commands we can have a server up and running and a connected client!


  • zookeeper is an official Docker image and we can run two instances of it, one as a server and another as a client; zkCli.sh is its CLI client that we can use; both commands are sketched below
  • The image exposes 2181, 2888 and 3888; the ZooKeeper client, follower and election ports; and we can use standard Docker linking
  • Visit the image page to learn how we can further configure it using environment variables, and for volume information on where it stores its data and log
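The two commands, per the official image's documentation:

docker run --name some-zookeeper -d zookeeper
docker run -it --rm --link some-zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper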

We can use zkCli.sh / the ZooKeeper CLI to create / read znodes.

  • We can create three types of znodes using the create PATH command; simple, ephemeral (with the -e flag) and sequential (with the -s flag)
  • An ephemeral node automatically gets deleted when the session expires; we can disconnect, reconnect and use the ls command to verify this


  • The ephemeral node might continue to appear for a while; the node gets deleted after the connection times out, and by default that's 30 seconds

Similarly, we can update the data of an existing node using set. We can check the stat of a znode using stat to know its zxid and time values. There are two pairs of transaction and timestamp values; cZxid and ctime for creation, and mZxid and mtime for modification.

delete is used to delete a node that has no children; to delete a znode recursively we use rmr
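An illustrative zkCli.sh session (stat output abbreviated):

[zk: localhost:2181(CONNECTED) 0] create /app1 "hello"
Created /app1
[zk: localhost:2181(CONNECTED) 1] create -e /app1/instance1 "temp"
Created /app1/instance1
[zk: localhost:2181(CONNECTED) 2] ls /app1
[instance1]
[zk: localhost:2181(CONNECTED) 3] set /app1 "world"
[zk: localhost:2181(CONNECTED) 4] stat /app1
cZxid = 0x4
mZxid = 0x6
...
numChildren = 1
[zk: localhost:2181(CONNECTED) 5] rmr /app1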

We can also set ACLs on znodes, restricting write or read to certain IPs; there's also plugin based authentication support and we can define ACLs accordingly. There's quota support as well

To connect from our application, there exist language bindings and client libraries. C, Java, Perl and Python language bindings are officially supported. https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZKClientBindings has the list of client bindings.

https://github.com/shayhatsor/zookeeper is a .NET async client, also available as a Nuget at https://www.nuget.org/packages/ZooKeeperNetEx; the good thing about it is that it's not only .NET async friendly (Task based APIs) but also compatible with .NET Core

https://marketplace.visualstudio.com/items?itemName=ksubedi.net-core-project-manager is the .NET Core Project Manager (Nuget) extension that allows us to search, install and remove Nuget packages right from Visual Studio Code; which, as we know, is a free, open source, runs-everywhere, lightweight code editor with debugging and git support. Here's the .NET Core client code using this Nuget

  • We need to map ZooKeeper's 2181 port to the Docker host so we can access it at a known IP address; run ZooKeeper using docker run --rm -p 2181:2181 zookeeper
  • Notice we are specifying the connection time out when connecting to ZooKeeper, and we also need a watcher; a null watcher implementation is at https://github.com/khurram-aziz/HelloDocker/blob/master/Zoo/ZooHelper.cs
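A minimal sketch of such client code, assuming the ZooKeeperNetEx package (the no-op NullWatcher mirrors the one in ZooHelper.cs):

using System;
using System.Text;
using System.Threading.Tasks;
using org.apache.zookeeper;

class NullWatcher : Watcher
{
    public override Task process(WatchedEvent @event) { return Task.CompletedTask; }
}

class Program
{
    static void Main() { MainAsync().GetAwaiter().GetResult(); }

    static async Task MainAsync()
    {
        // 10000 is the connection / session timeout in milliseconds
        var zk = new ZooKeeper("localhost:2181", 10000, new NullWatcher());
        await zk.createAsync("/app1", Encoding.UTF8.GetBytes("hello"),
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        var result = await zk.getDataAsync("/app1", false);
        Console.WriteLine(Encoding.UTF8.GetString(result.Data));
        await zk.closeAsync();
    }
}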


We can now use docker-compose to easily run more instances of ZooKeeper in our lab/development environment. Here's a docker-compose YAML file to run a three instance ZooKeeper cluster

  • Notice that we have mapped the containers' 2181 ports to the Docker host's 2181, 2182 and 2183 ports; we can now use localhost:2181,localhost:2182,localhost:2183 as the connection string and our client will connect to one ZooKeeper instance out of the cluster automatically; or we can specify just one or two nodes of our choice; the file is sketched below
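Along the lines of the official image's documentation, such a file looks like this:

version: '2'
services:
  zoo1:
    image: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888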

We can stop one instance of ZooKeeper, write a value using the available nodes, then bring the node back and check that the updated value gets replicated! We can also try writing after stopping two instances; will it allow the write if the quorum is not complete?

Posted by khurram


In the Firmata post we established that we can have a Python or Node.js application running on a computer, which can be a Single Board Computer like a Raspberry Pi running Raspbian or Windows 10 IoT, that controls and gets sensor data from microcontrollers like Arduino or ESP8266


There exist many IoT frameworks for Javascript / Node.js that allow us to write our programs, and Johnny-Five is one such popular framework. Using such a framework we not only get access to many Javascript / Node.js libraries but also a platform on which we can write our program quickly in a friendlier environment. Johnny-Five supports Arduino as well as many other boards through IO Plugins, Firmata compatible interfaces for communicating with non-Arduino hardware. Johnny-Five can be used on richer boards like Raspberry Pi and Galileo as well as with microcontrollers through IO Plugins. It also includes DSL libraries for working with many different actuators and sensors that make writing IoT code more fun.

Johnny-Five needs Node.js v4.2.1 or newer (at the time of this writing); on Raspberry Pi you can get a more recent version using NodeSource

curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
sudo apt install nodejs

And then use npm install johnny-five to get the bits. For ESP8266, we create the firmata object similar to the Firmata post and then, on its ready event, hand it over as io to Johnny-Five. We can then continue from there, subscribe to the board's ready event and write the program. The Blink code will be something like this:
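A sketch, assuming StandardFirmataWifi is listening on its default port 3030 and the LED is on pin 2; adjust the IP, port and pin for your setup:

var net = require("net");
var firmata = require("firmata");
var five = require("johnny-five");

var socket = net.connect(3030, "192.168.1.50"); // the ESP8266 running StandardFirmataWifi
var io = new firmata.Board(socket);
io.once("ready", function () {
  // hand the firmata object over as io to Johnny-Five
  var board = new five.Board({ io: io, repl: false });
  board.on("ready", function () {
    var led = new five.Led(2);
    led.blink(500); // toggle every 500ms
  });
});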

There are also DSL libraries for other sensors and actuators; for instance we can use the Thermometer library for a temperature sensor; https://github.com/rwaldron/johnny-five/wiki/Thermometer has more details

  • Note that we are using an ESP8266, which is 3.3V powered, so we are not using a built-in controller profile and instead provide our own temperature calculation lambda


  • The 3.3V rail of the ESP8266 is noisy; we can get better results using a digital sensor
  • ESP01, the widely used ESP8266 board, sadly doesn't expose the analog pin, so we have to use a digital sensor with it

We can use Johnny-Five on Windows 10 IoT as well; in fact the Node.js Tools for Visual Studio UWP Extension supports Johnny-Five and Cylon by providing project templates. For details on the Node.js Tools for Visual Studio UWP Extension, check out the Blink with Windows 10 IoT post


For development in Visual Studio and deployment on Windows IoT, we need to watch out for certain gotchas. After the NPM package restore, we need to update the packages; this applies Windows IoT specific patches to the node modules. Another thing to watch out for is the MAX_PATH issue; when building, the node modules are zipped up and made part of the package, and in doing so it can hit this issue; use npm dedupe to flatten the node modules; we might have to go deep and dedupe inner modules as well, depending on the errors it generates. For instance I faced issues in node modules under firmata; I simply navigated there, deduped it and then deduped in the root again. We have to restart Visual Studio so it picks up the changes. On the Windows 10 IoT side, we also need to enable long paths by issuing reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /t REG_DWORD /d 1 /v LongPathsEnabled /f at the Device Portal and then restarting the device to pick up the change.


You can optionally specify the --use-logger debug option and it will store the console output in a log file that you can review later



Microcontrollers are great, but in today's ever changing and more demanding world we often need the ability to upgrade the software: fixing bugs, adding and enhancing functionality. There are well established mechanisms for upgrading software on computers (PCs, tablets and phones), but updating firmware on microcontrollers can become challenging. Connected smart appliances with larger memories can be updated with Over The Air (OTA) updates, but that needs resources like connectivity, enough memory/storage and developing + testing an appropriate update mechanism in the firmware; these are not available in all appliances. In addition, our IoT software might be complex or depend on other resources like cloud connectivity, a database or file access that can't be done directly "on" the microcontroller. Further, the solution might comprise many appliances with a need to coordinate across them, taking input from one appliance and doing something on another. MQTT can be used for data passing, but sometimes we need the ability to treat the appliance as a "dumb gadget" connected to "smarter software" running on a computer. This is where Firmata comes in; it's a protocol for communicating with microcontrollers from software on a computer. The protocol is implemented in firmware on the microcontroller; Arduino and Spark.IO are supported officially, and there exist client libraries for different languages and platforms, from Python, Perl and Ruby to Java, .NET and Javascript/Node and many others (including mobile/tablet platforms)

The Firmata library for Arduino comes preinstalled with the IDE and we can use it with supported boards; Arduino or ESP8266.


  • For Arduino; we use StandardFirmata and for ESP8266 we use StandardFirmataWifi

For the "Blink" example I am showing Python and Node.js examples; we can program in any language / platform that has the required library



  • For Node.js, I am using the firmata package; a sketch follows
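A minimal sketch, assuming an Arduino with StandardFirmata attached on COM3 and the LED on pin 13:

var firmata = require("firmata");
var board = new firmata.Board("COM3"); // the serial port the Arduino is attached to
board.on("ready", function () {
  var on = false;
  board.pinMode(13, board.MODES.OUTPUT);
  setInterval(function () {
    on = !on;
    board.digitalWrite(13, on ? board.HIGH : board.LOW);
  }, 500);
});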

If you are wondering; the code is not compiled and sent to the microcontroller; instead the microcontroller, with the Firmata firmware, acts as a slave, always listening for what's being sent over the wire in the Firmata protocol; you can watch the RX activity clearly on the Arduino board


In the case of Arduino, the clear drawback is that we need serial connectivity between the board and the computer running the program. We can use an Ethernet or Wifi shield, or an ESP8266, so the serial cable connection is avoided and the program can connect to the microcontroller over Wifi. Simply use StandardFirmataWifi; edit WifiConfig.h according to your Wifi settings, optionally uncomment SERIAL_DEBUG to view the debug logs in the Serial Monitor, and you are good to go


I connected an analog temperature sensor to analog pin 0 and wrote this little Node.js program that retrieves the temperature value and sends it to ThingSpeak for further analysis / reporting.
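A sketch of such a program, assuming StandardFirmataWifi is reachable at the IP/port below and a ThingSpeak write API key; the reading-to-celsius conversion depends on your sensor and the ESP8266's ADC, so the factor here is purely illustrative:

var net = require("net");
var https = require("https");
var firmata = require("firmata");

var lastReading = 0;
var board = new firmata.Board(net.connect(3030, "192.168.1.50"));
board.on("ready", function () {
  board.pinMode(0, board.MODES.ANALOG);
  board.analogRead(0, function (value) {
    lastReading = value; // raw 0..1023 ADC value
  });
  setInterval(function () {
    var celsius = lastReading * 0.322; // illustrative conversion for the sensor used
    https.get("https://api.thingspeak.com/update?api_key=YOUR_KEY&field1=" + celsius);
  }, 20000); // ThingSpeak accepts an update roughly every 15 seconds
});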

I find this Firmata approach intuitive and easy, given we can change / manage the program on the computer instead of reflashing the microcontroller firmware; ESP8266 based IoT appliances especially work great this way. The appliance can stay installed where it is, say a Sonoff switch, and you can change / update the program on the computer, possibly even remotely, say on a Raspberry Pi by SSHing into it

ESP8266 comes in all sizes; check this video for inspiration; we can deploy sensors connected to an ESP01 (which has two GPIOs) and solder/glue/pack the things onto normal wall socket USB chargers (with the required voltage regulation)

ESP8266 ESP-01
Posted by khurram

MQ Telemetry Transport

As soon as we have more than one IoT "thing" to manage, we need some kind of management; it quickly becomes cumbersome to use and manage the "smart devices" otherwise. We can use shields like Ethernet, Wifi or GSM with Arduino, or use ESP8266 with Arduino or on its own for connectivity, but we need some kind of server to which all these devices connect and from where we can control them. Secondly, even though these MCUs have decent processing power, for a user friendly solution we often need more, and integration with the cloud or some proper PC / Single Board Computer is required. Say we deploy ESP8266 based Sonoff smart switches for outdoor lights and want a "recipe" that turns the lights on when the sun goes down and off when the sun rises. For this we not only need knowledge of sunsets and sunrises but we also need to coordinate the Sonoff "smart devices". This is where protocols like MQTT come in. As per Wikipedia

MQTT (MQ Telemetry Transport) is an ISO standard publish-subscribe-based "lightweight" messaging protocol for use on top of the TCP/IP protocol. It is designed for connections with remote locations where a "small code footprint" is required or the network bandwidth is limited. The publish-subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to interested clients based on the topic of a message.

There exists the PubSubClient library that we can use in the Arduino IDE; the library works perfectly fine with the Arduino Core for ESP8266. For context, you might want to check out my previous blog posts


We need an MQTT server, called an MQTT Broker, somewhere with a known and static address. This can be in the cloud, either commercial or free, or we can deploy our own broker within the network. Mosquitto is a popular open source MQTT broker that we can install either on Linux or on Windows. Installing on Linux is straightforward; we can even install and configure it on Raspbian (Raspberry Pi), but setting it up on Windows is a little tricky. You need to install a few other packages and copy over their DLLs to Mosquitto's installation folder. It's all documented in its setup wizard, but if you need step by step instructions, visit https://sivatechworld.wordpress.com/2015/06/11/step-by-step-installing-and-configuring-mosquitto-with-windows-7/

It's always a good idea to use authentication with network servers; use the mosquitto_passwd utility that comes with it to create a password file and add the required users.


  • Given I am creating the password file in Program Files, I needed an Administrative Command Prompt; the command is sketched below
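Creating the file and adding a user (it prompts for the password; the path is an assumption based on the default install location):

C:\Program Files (x86)\mosquitto>mosquitto_passwd -c passwords.txt khurram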

Once the password file is in place, edit its conf file, disabling anonymous users and setting the path of the password file


  • Given the conf file is in Program Files, you will need to open the file in an editor that's running as Administrator; the two relevant lines are below
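The two directives in the conf file (the path is wherever you created the password file):

allow_anonymous false
password_file C:\Program Files (x86)\mosquitto\passwords.txt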

We can use MQTTLens, a Chrome application, to test Mosquitto, our MQTT broker

Once the broker is properly in place, we need to write firmware for our appliance that connects to the MQTT broker, publishes its status / logs there and gets commands from it by subscribing to related topics. The PubSubClient library comes with many examples, including mqtt_auth, that we can use to write the firmware for Arduino or ESP8266 MCUs


Here's the Arduino IDE code for one such firmware that I wrote for my NodeMCU development board; it takes commands from MQTT and turns the LED on or off accordingly.
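A sketch along those lines, built from PubSubClient's examples; the SSID, credentials, broker address and topic are placeholders:

#include <ESP8266WiFi.h>
#include <PubSubClient.h>

const char* ssid = "your-ssid";
const char* password = "your-password";
const char* broker = "192.168.1.10";

WiFiClient wifi;
PubSubClient client(wifi);

void callback(char* topic, byte* payload, unsigned int length) {
  // "1" turns the LED on; NodeMCU's onboard LED is active low
  digitalWrite(LED_BUILTIN, (length > 0 && payload[0] == '1') ? LOW : HIGH);
}

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  client.setServer(broker, 1883);
  client.setCallback(callback);
}

void loop() {
  if (!client.connected() &&
      client.connect("nodemcu", "khurram", "mqtt-password")) {
    client.subscribe("home/led");
  }
  client.loop();
}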

Once we have the firmware developed and tested, we can easily use it in real world IoT appliances like Sonoff. Now that our smart appliance is connected to MQTT, we can control it from elsewhere on demand. We can have a program / scheduled job / trigger, configured in a cloud or running on a PC or Raspberry Pi, that at sunset and sunrise sends appropriate messages to the MQTT broker, and the smart appliance will oblige accordingly. Eclipse's Paho project provides open source client implementations of MQTT; https://eclipse.org/paho/clients/python/ is one such Python client library that's just "pip install paho-mqtt" away. Here's one such Python script that flashes the LED connected to the NodeMCU by sending the required payloads to the topic the NodeMCU is subscribed to
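A sketch of such a script, assuming the broker, credentials and topic from the firmware above:

import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("khurram", "mqtt-password")
client.connect("192.168.1.10", 1883)
client.loop_start()           # handle network traffic in the background
for i in range(10):
    client.publish("home/led", "1")   # LED on
    time.sleep(1)
    client.publish("home/led", "0")   # LED off
    time.sleep(1)
client.loop_stop()
client.disconnect()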


On receiving the messages, the NodeMCU flashes the LED accordingly; here's what my firmware writes to the serial port


With this understanding of MQTT and how it can be used to orchestrate IoT appliances, the architecture of our sunset/sunrise automatic outdoor lights solution would be something like this


We can either use some cloud API for sunset/sunrise times or store the times in a local database; given Raspbian on Raspberry Pi gives us a very rich Linux experience, we can deploy MySQL and Mosquitto on it along with our custom program in Python (or any other language / platform of our choice)

Sonoff Dissection

Blink Series

ESP8266 Series

Sonoff is a Wifi smart switch produced by ITEAD that you can get from China cheap. The switch can be used for smart home / enterprise solutions. You can connect it to a 90V-250V / 10A AC line and control it over Wifi


If we cut it open, we find it's a 10A relay connected to an ESP8266 (1MB model) based switch with the required power regulation circuitry; the good thing about this switch is that they have exposed the TTL and GPIO pins, making it very DIY/Maker/Hacker friendly. We can solder male headers to these pins, either ourselves or with some help from a road side radio / repair shop



We need the square pin and the three pins above it for powering the ESP8266, the Microcontroller Unit (MCU), and for the serial TTL connection.


The square pin, closest to the reset button, is power; the next two are RX and TX and the fourth is ground. Once soldered, we can connect jumper wires to these male headers



Above picture from: http://randomnerdtutorials.com/reprogram-sonoff-smart-switch-with-web-server/

Next we need a USB-TTL adapter; get one that has a 3.3V / 5V setting, as we need to power the board with 3.3V. I am using a cheap CH340 based USB-TTL adapter


Keep the reset button pressed when powering it up; this takes the onboard ESP8266 into programming mode; the USB port will get connected and you will be able to either use the ESP8266 Flasher and flash NodeMCU, like we did in the Blink post, or use the Arduino Core for ESP8266 and make a custom firmware similar to the Arduino core for ESP8266 WiFi chip post. The onboard LED is connected to the ESP8266's pin 13 and the relay to pin 12. You can open both these pins as OUTPUT; writing HIGH to pin 12, where the relay is connected, turns the switch on, allowing the AC current to flow, and writing LOW disconnects it.
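In Arduino Core terms, an illustrative toggle looks like this:

void setup() {
  pinMode(13, OUTPUT); // onboard LED, active low
  pinMode(12, OUTPUT); // relay; HIGH lets the AC flow
}

void loop() {
  digitalWrite(12, HIGH); // appliance on
  digitalWrite(13, LOW);
  delay(5000);
  digitalWrite(12, LOW);  // appliance off
  digitalWrite(13, HIGH);
  delay(5000);
}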

Using this information, I wrote one such firmware that's being demoed in this video; a PI3 along with its power adapter is connected as the appliance, and we can turn it on or off "remotely" using the web interface, similar to the Arduino core for ESP8266 WiFi chip post where we were turning the LED on or off

Posted by khurram

Arduino core for ESP8266 WiFi chip

Blink Series

ESP8266 Series

Starting with 1.6.4, the Arduino IDE allows using libraries and installing third-party platform packages using the Boards Manager. https://github.com/esp8266/Arduino is a package that we can install to enable support for the ESP8266 chip in the Arduino environment; we can write sketches using the familiar Arduino functions and libraries and run them directly on the ESP8266. The package comes with libraries to communicate over Wifi using TCP and UDP, set up HTTP, mDNS, SSDP and DNS servers, do Over The Air (OTA) updates, use a file system in flash memory, and work with SD cards, servos, SPI and I2C peripherals. The GitHub page has the installation instructions; it comes with examples and we can have the Blink firmware running on a NodeMCU in no time.

  • https://github.com/arduino/Arduino/wiki/Unofficial-list-of-3rd-party-boards-support-urls has the list of boards that the Arduino IDE supports through such third party board extensions
  • NodeMCU and Arduino development environments for ESP8266 are popular among makers and developers; there exist a few other options as well; Espressif, the vendor who designed this chip, offers two SDKs, one of which is FreeRTOS based; there's MicroPython, ESP8266 Basic and Zbasic for ESP8266 as well
  • Multicast DNS (mDNS) and Simple Service Discovery Protocol (SSDP) are protocols to discover the ESP8266 on the network; in this post we will be using mDNS
  • Serial Peripheral Interface (SPI) bus and Inter-Integrated Circuit (I2C) are low level communication protocols used when two circuits / MCUs / ICs are connected
  • Visit https://github.com/esp8266/Arduino/tree/master/doc/ota_updates for how we can do OTA updates; it's an advanced level topic but very useful when building an IoT solution for devices that get deployed in far away places

Here’s the pin layout of NodeMCU; source: http://www.slideshare.net/roadster43/esp8266-nodemcu


Now if we connect the LED to GPIO0 and we want a firmware where the ESP connects to a specified Wifi and serves a web interface through which we can turn the LED on or off, the Arduino sketch for such a firmware, using libraries from Arduino core for ESP8266 WiFi chip, would be something like this
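A sketch along those lines (SSID and password are placeholders; the LED on GPIO0 is driven active low here):

#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>

const char* ssid = "your-ssid";
const char* password = "your-password";
ESP8266WebServer server(80);

void setup() {
  Serial.begin(115200); // "debug" logs; view them in the Serial Monitor
  pinMode(0, OUTPUT);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(0, !digitalRead(0)); // flash the LED while connecting
    delay(250);
    Serial.print(".");
  }
  Serial.println(WiFi.localIP());
  MDNS.begin("nodemcu"); // publish ourselves as nodemcu.local
  server.on("/on",  []() { digitalWrite(0, LOW);  server.send(200, "text/html", "LED is on; <a href='/off'>turn off</a>"); });
  server.on("/off", []() { digitalWrite(0, HIGH); server.send(200, "text/html", "LED is off; <a href='/on'>turn on</a>"); });
  server.begin();
}

void loop() {
  server.handleClient();
}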

  • I am using the Serial port for "debug" logs; once the firmware is uploaded, we can open the Serial Monitor from within the Arduino IDE to view what's being written to the serial port by the device
  • I have set up mDNS; on Windows, using the Bonjour Service, we can resolve .local names and access our NodeMCU device using the friendlier DNS name the device publishes itself with
  • I also used the LED during setup, flashing it while connecting to Wifi; it's always useful to give the user visual clues about what's happening
  • The web interface is minimal, just turning the LED on or off, but we get the idea: we can achieve with the ESP8266 Arduino Core what we could do using Lua in the NodeMCU firmware; and on top of this, we can use the many Arduino libraries that are out there in our ESP8266 firmware


Posted by khurram

Blink with Android Things

Blink Series

Android Things is Google's OS offering for the IoT scene. It leverages Android's development tools and APIs and adds new APIs to provide low level I/O, plus libraries for common components like temperature sensors and display controllers etc. You can get the Developer Preview from https://developer.android.com/things/index.html; Raspberry PI 3 along with a few other boards is supported. For the PI3 there is an image that we burn to an SD card, very similar to Raspbian, and boot the PI with the Ethernet connected. We can identify the IP of the device using a DHCP server or a Wifi router with Ethernet ports. Alternately, it has https://en.wikipedia.org/wiki/Multicast_DNS support, which has become very popular in IoT devices, and the Android Things board will publish itself as android.local on the subnet. Unfortunately Windows 10 doesn't "yet" support it "completely", but we can install Apple's Bonjour Service (it comes with iTunes; or install Bonjour Print Services) and discover the IP of the device.


For development we need Android Studio 2.2+, SDK Tools 24 or higher in the Android SDK Manager, and Android 7.0 (API 24) or higher. Create a project in Android Studio targeting Android 7.0 or above, and then add the com.google.android.things:androidthings:0.1-devpreview dependency in the module level build.gradle file
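The dependency entry, with the provided scope the dev preview docs use since the library already lives on the device:

dependencies {
    ...
    provided 'com.google.android.things:androidthings:0.1-devpreview'
}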


We also need to specify the IOT_LAUNCHER category in an intent-filter in the manifest file to declare our activity as the main entry point after the device boots
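The intent-filter looks like this:

<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.IOT_LAUNCHER"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>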


For our Blink, we will use a Handler and Runnable for the scheduling and blink logic, along with PeripheralManagerService to access the GPIO pins. The code for our activity would be something like this

package pk.com.weblogs.khurram.hellothings;

import android.os.Handler;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;

import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;

import java.io.IOException;

public class MainActivity extends AppCompatActivity {
    String TAG = "HelloThings";
    Handler handler = new Handler();
    Runnable blink = null;
    Gpio led = null;

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (null != blink)
            handler.removeCallbacks(blink); // stop the scheduled blinking
        if (null != led) {
            try {
                led.close();
            } catch (IOException e) {
                Log.e(TAG, "Failed to close the LED", e);
            }
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        PeripheralManagerService service = new PeripheralManagerService();
        //Log.d(TAG, "Available GPIO: " + service.getGpioList());
        try {
            this.led = service.openGpio("BCM17");
            if (null == led) return;
            led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
            this.blink = new Runnable() {
                @Override
                public void run() {
                    try {
                        led.setValue(!led.getValue()); // toggle the LED
                        handler.postDelayed(this, 1000); // reschedule after a second
                    } catch (IOException e) {
                        Log.e(TAG, "Failed to set value on GPIO", e);
                    }
                }
            };
            handler.post(blink);
        } catch (IOException e) {
            Log.e(TAG, "Failed to open BCM17", e);
        }
    }
}

We can run/debug our application from Android Studio, given the Android Debug Bridge is connected to the device (adb connect android.local)


Similar to Windows 10 IoT, we can have a graphical interface; when the application runs, it gets displayed over HDMI on the Raspberry PI and we can connect a USB mouse/keyboard and interact with the application.

Android Things Resources

Android Things Review

It's still a developer preview and it's not fair to give any final verdict; however it feels and smells a lot like Windows 10 IoT, with Windows 10 IoT a bit more mature and attractive due to its Node.js support etc. Raspberry PI is capable of doing a lot, as Raspbian / PIXEL has shown, and I think Raspberry PI like Single Board Computers deserve better than Windows 10 IoT and Android Things; maybe a Windows 10 Core with a proper Shell / Store / Apps as the free version and Windows 2016 Server Core as the commercial version, and an Android Lite / Core as a complete Android TV like experience minus the phone features. Cortana and Google Now make a lot of sense here as well. The problem with both these platforms, Windows 10 IoT and Android Things, is that if you want to design a home automation or similar solution and want to keep things cloud free, you will need a separate Raspberry PI running Raspbian, or a PC, for things like MQTT or a database / web interface etc. However, both these OSes have solid user interface frameworks and can be used as GUI "consoles" for our IoT solutions

Posted by khurram

Blink with Windows 10 IoT

Blink Series

They say they designed this edition of Windows for the Internet of Things, and it's part of their "universal device platform" vision. With the Anniversary Edition it's now called Windows 10 IoT Core and is available from https://developer.microsoft.com/en-us/windows/iot; Raspberry Pi 2 and 3 along with a couple of other boards are supported. Last time I tried it on a Pi 2 and couldn't find a compatible Wifi USB dongle, but thankfully the Pi 3 has built in Wifi and it works seamlessly now; they have also improved device compatibility a little.

From the developer portal, you select the supported board and the Windows version, either the Anniversary one or the Insider Preview one; for Anniversary it downloads the Windows 10 IoT Core Dashboard, a ClickOnce desktop application, with which you can download the OS image and prepare the SD card. Connect the board to Ethernet and the Dashboard will find it (given you are on the same subnet); once you have the IP, you can open its Device Portal and from there change the Administrator password and set up Wifi. Alternately, you can connect a monitor/screen to the PI's HDMI, connect mouse/keyboard and set up the Wifi from the console!

  • The dashboard offers to set the Administrator password, but in my case it didn't work and the OS had the default password, which is p@ssw0rd (with a zero)

For development, given it's part of the "universal device platform", you need Visual Studio 2015 and the Windows SDK for the Anniversary Edition. Install the Windows 10 IoT Core Project Templates from https://www.visualstudiogallery.msdn.microsoft.com/55b357e1-a533-43ad-82a5-a88ac4b01dec (or https://marketplace.visualstudio.com/items?itemName=MicrosoftIoT.WindowsIoTCoreProjectTemplates); they install the C#, Visual Basic and C++ project templates for a Background Application (IoT). If you create a C# project using this, it creates a class implementing the required interface with a single Run method taking an IBackgroundTaskInstance parameter. For the "Blink" we will need a ThreadPoolTimer that blinks our LED using the GpioPin class we get from GpioController. These GPIO related classes are in the Windows.Devices.Gpio namespace from the Microsoft.NETCore.UniversalWindowsPlatform package that's already set up when we create the project. Here's the code that we need in Run()

public void Run(IBackgroundTaskInstance taskInstance)
{
    // TODO: Insert code to perform background work
    // If you start any asynchronous methods here, prevent the task
    // from closing prematurely by using BackgroundTaskDeferral as
    // described in

    var deferral = taskInstance.GetDeferral();

    var gpio = GpioController.GetDefault();
    if (null == gpio) return;

    var pin = gpio.OpenPin(17);
    if (null == pin) return;

    pin.SetDriveMode(GpioPinDriveMode.Output);

    var toWrite = GpioPinValue.High;
    var timer = ThreadPoolTimer.CreatePeriodicTimer(delegate
    {
        pin.Write(toWrite); // toggle the LED every second
        if (toWrite == GpioPinValue.High)
            toWrite = GpioPinValue.Low;
        else
            toWrite = GpioPinValue.High;
    }, TimeSpan.FromMilliseconds(1000));
}

  • The LED's positive/anode side is connected to BCM GPIO 17 (GPIO0) / header pin 11 and its negative/cathode side to header pin 6 (0V), similar to http://weblogs.com.pk/khurram/archive/2017/01/12/blink.aspx (the Raspberry Pi pin layout is given in that previous post)
  • Given this background task gets triggered "once" and there is no "loop" concept like in Arduino, we set up a timer for the LED blinking; and since we want our task to continue running while the timer is "ticking", we get a "Deferral" from taskInstance; if we do that and don't call its "Complete", our task continues to run.

We can use Visual Studio to deploy and run the application (select ARM and Remote Machine) or we can create an App Package (Project > Store) and upload our application and its certificate using the Device Portal. Using the Device Portal, we can also set it as the "Startup" application so it runs automatically on boot.


Given it's Windows, we can make our Blink application a typical user interface / foreground application as well, and can have a routine XAML based visual interface in the application to blink the LED. https://developer.microsoft.com/en-us/windows/iot/samples/helloworld has information on such an application, along with how you can use PowerShell to make your foreground application the Default App (replacing the default console shell / launcher) etc

Another interesting development option is Node.js (the Chakra build) on Windows 10. Node.js uses Chrome's Javascript engine; Microsoft open sourced Edge's Javascript engine, Chakra, and there exists https://github.com/nodejs/node-chakracore that lets Node.js use Chakra. The https://www.npmjs.com/package/uwp NPM package allows us to access Universal Windows Platform (UWP) APIs from Node.js (Chakra build) on Windows 10, including Windows 10 IoT Core. We can install Node.js Tools for Visual Studio (NTVS) that enables a powerful Node.js development environment within Visual Studio, and there exists the NTVS UWP Extension that enables deploying Node.js + our app as a UWP application to Windows 10; including Desktop, Mobile and IoT.


Installing Node.js Tools for UWP Apps takes care of everything; it installs Chakra Node.js, NTVS and the NTVS UWP Extension

  • I didn't add Chakra Node.js into PATH, as I already had a "normal" Node.js in the PATH and didn't want to disturb my other projects; this doesn't affect NTVS UWP projects and they work fine


The NTVS UWP Extension also installs a nice collection of project templates. For my Blink I used the Basic Node.js Web Server (Universal Windows) project template; it had the uwp npm module already set up. I simply wrote these lines in server.js

var http = require('http');
var uwp = require("uwp");
uwp.projectNamespace("Windows"); // makes the Windows.* UWP namespaces available
var gpioController = Windows.Devices.Gpio.GpioController.getDefault();
var pin = gpioController.openPin(17);
pin.setDriveMode(Windows.Devices.Gpio.GpioPinDriveMode.output);
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    if (pin.read() == Windows.Devices.Gpio.GpioPinValue.high) {
        pin.write(Windows.Devices.Gpio.GpioPinValue.low);
        res.end('LED turned off');
    } else {
        pin.write(Windows.Devices.Gpio.GpioPinValue.high);
        res.end('LED turned on');
    }
}).listen(1337);
// uwp.close() should be called when shutting down


  • Notice the use of uwp; it needs to be closed with close() at the end
  • Notice how we got references to gpioController and pin, and how their naming is Javascript friendly camelCased
  • We kept things simple; on each reload the LED switches from on to off and vice versa; we could have buttons like the ESP8266 web interface we built in the previous post


Thanks to the NTVS UWP Extension, the project template offers UWP-style Visual Studio integration for debugging / running and deploying the application to a remote machine (Windows 10 IoT) and packaging the application etc.

Windows 10 IoT Resources

NTVS UWP Extension Resources

Windows 10 IoT Review

Windows 10 IoT Core is okay; it could become better if they provided a Windows 10 like "Start" screen / launcher with notifications, a proper Command Prompt + PowerShell, File Explorer, Settings, an application installer and Task Scheduler. PowerShell remoting is too cryptic and SSH is the industry standard; Windows badly needs an SSH server for headless deployments. They should also revitalize ClickOnce; it's a great enterprise grade auto-update platform and could be used to update "Metro or IoT apps" in personal / enterprise scenarios. Say I develop an IoT solution for a farmer who is not that tech savvy; if he asks me to make some changes or add features, it's too complicated to update the apps on the devices remotely. We can use the Windows Store, but it's too demanding and doesn't fit everywhere

There is no "server side" stuff; Raspbian is a solid OS on the Raspberry Pi and we can deploy MySQL, Apache, an MQTT broker and what not. Microsoft should bring their server offerings to such single board computers; make at least IIS and ASP.NET Core available on it; SQL Express, MSMQ (with MQTT support) and some file syncing tools / solution would be appreciated. Currently it feels more like a "terminal" for their Azure offerings, and not everyone wants to connect to the "cloud". For designing such solutions we need a separate Raspberry PI running Raspbian, or a PC / server, for things like MQTT, database and web interface etc.

  • I had Cortana in the list above, but in the latest Insider builds it's there
  • My son would like to connect the XBox controller to the PI's USB and play games from the Windows Store on a PI connected to the big screen; XBox is a great gaming console, but the PI is a nice media center on the Linux platform; "Windows" has a solid foundation, and with casual games, TV shows and movies from the Windows Store this could become a "thing"
Posted by khurram


Blink Series

ESP8266 Series

Blink in electronics is the equivalent of programming's Hello World. Raspberry Pi like single board computers and Arduino like open source microcontroller boards have made electronics accessible to even kids, and we as professionals can use these platforms to develop software and hardware solutions. As Alan Kay said, "People who are really serious about software should make their own hardware"; with such platforms in the market, it's not that difficult anymore!

Arduino comes in different shapes and sizes; the UNO is one of the more popular boards and recommended for getting started. We connect it to the computer over USB and using the Arduino IDE write a “sketch”; an Arduino program in a C like environment that we can then upload and run on the microcontroller seamlessly. The board gets power from USB when connected; we can disconnect the USB and power it externally as well. It has all the necessary USB to TTL adapter and power regulation circuitry along with General Purpose Input Output (GPIO) pins where we can connect additional peripherals / hardware / electronics. The Arduino IDE comes with example sketches; and the Blink example is among them.


The setup() method gets called when the board boots up and loop() keeps getting called afterwards. LED_BUILTIN is a constant; on the UNO it’s pin 13 where the on board LED is connected. On one side of the UNO are the Digital Pins; on the other side are the Power pins with different voltages and the Analog pins. The pin numbers are labeled and we can connect our LED to any of them and use that number instead of LED_BUILTIN.
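For reference; here is the stock Blink sketch from the Arduino IDE examples:

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the on board LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(1000);                      // wait for a second
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(1000);
}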

  • The longer leg of the LED is the Anode / +ve that needs to be connected to one of the Digital Pins; the shorter leg is the Cathode / –ve that needs to be connected to a Ground (GND) pin
  • If we look through the LED; the Anode side is the smaller one and the Cathode side is flattened.


  • We need jumper wires; they come in three flavors: female to female, male to male and male to female. We will need a male to female wire for connecting the LED to the Arduino board directly without using a breadboard; for the breadboard we will need male to male!

Arduino is great; the IDE supports libraries and there exist many “Shields” / daughter boards for Arduino that provide additional functionality; from storing data to an SD card to Wifi or Ethernet connectivity etc; there is a great ecosystem around it. There are some limitations; the UNO’s ATmega328P Micro Controller is not that speedy and has limited memory; you cannot run complicated “software” on it and adding internet connectivity increases the cost. Thankfully there are many other options like ESP8266 based boards. The ESP8266 provides a Wifi / TCP-IP stack out of the box; its ESP12 onwards variants are FCC approved and have 1MB to 4MB of flash memory which is quite respectable. NodeMCU is getting quite popular as it comes with a firmware that has Lua scripting support; it’s ESP12 based and AMICA’s NodeMCU R2 board is breadboard friendly and comes with USB to TTL and USB power regulation circuitry similar to the Arduino.


  • Watch out; there exist many NodeMCU / ESP8266 boards; http://frightanic.com/iot/comparison-of-esp8266-nodemcu-development-boards/ has a nice comparison; the AMICA NodeMCU R2 is the well respected one and a highly recommended board.
  • Having a well known board is recommended so that when it is connected to your computer its USB to TTL adapter drivers get installed; the AMICA R2 comes with a CP2102 adapter that gets detected and its drivers get installed seamlessly.

To use the board; we first flash the NodeMCU firmware using https://github.com/nodemcu/nodemcu-flasher; once flashed; we can use ESPlorer; an IDE for the ESP8266 similar to the Arduino IDE that supports writing Lua scripts. As per the NodeMCU convention; we create an init.lua file and upload it; this script gets executed whenever the ESP8266 running the NodeMCU firmware is reset. Here is one such script; a webserver giving us the option to turn the GPIO0 and GPIO2 ports of the Micro Controller On/Off. It also connects the chip to the wifi network with the provided credentials

-- connect the chip to the wifi network with the provided credentials
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "PASSWORD") -- placeholders; use your network's credentials

led1 = 3 -- NodeMCU pin 3 is GPIO0
led2 = 4 -- NodeMCU pin 4 is GPIO2
gpio.mode(led1, gpio.OUTPUT)
gpio.mode(led2, gpio.OUTPUT)

srv = net.createServer(net.TCP)
srv:listen(80, function(conn)
    conn:on("receive", function(client, request)
        local buf = "";
        -- parse the request line and any query string variables
        local _, _, method, path, vars = string.find(request, "([A-Z]+) (.+)?(.+) HTTP");
        if(method == nil)then
            _, _, method, path = string.find(request, "([A-Z]+) (.+) HTTP");
        end
        local _GET = {}
        if (vars ~= nil)then
            for k, v in string.gmatch(vars, "(%w+)=(%w+)&*") do
                _GET[k] = v
            end
        end
        buf = buf.."<h1>ESP8266 Web Server</h1>";
        buf = buf.."<p>GPIO0 <a href=\"?pin=ON1\"><button>ON</button></a>&nbsp;<a href=\"?pin=OFF1\"><button>OFF</button></a></p>";
        buf = buf.."<p>GPIO2 <a href=\"?pin=ON2\"><button>ON</button></a>&nbsp;<a href=\"?pin=OFF2\"><button>OFF</button></a></p>";
        if(_GET.pin == "ON1")then
              gpio.write(led1, gpio.HIGH);
        elseif(_GET.pin == "OFF1")then
              gpio.write(led1, gpio.LOW);
        elseif(_GET.pin == "ON2")then
              gpio.write(led2, gpio.HIGH);
        elseif(_GET.pin == "OFF2")then
              gpio.write(led2, gpio.LOW);
        end
        client:send(buf);
        client:close();
        collectgarbage();
    end)
end)

  • The NodeMCU firmware comes with Lua libraries for wifi and networking that we are using above to connect to the Wifi network, set up a basic web server and write data to the GPIOs

Once it’s up; we can access our Lua app and turn the GPIO ports where we connected our LEDs On/Off

  • We can call Lua functions from the IDE; using wifi.sta.getip() we can get the IP address the device has got on the Wifi network
  • Arduino started developing non AVR boards and modified their IDE to support different tool chains for these boards using “cores”; there exists an ESP8266 Arduino Core using which we can use the Arduino IDE, its C/C++ based language and many Arduino libraries out in the world to compile an “Arduino Sketch” for the ESP8266 as well; the ESP8266 Core comes with the required libraries for the Wifi / Networking capabilities of the MCU (Micro Controller Unit)

The NodeMCU / ESP8266 can do a lot in the modern IoT world; there are many devices out there powered by this Micro Controller and we can achieve a lot with it; from cloud connectivity to auto update-able apps + firmware. But if we want computer like features in our electronics setup; running many programs, changing and debugging them remotely, rich on board computing, additional software on the board (database / programming environment / custom networking capability) or hardware that needs drivers or some custom application (a USB camera or similar); the Raspberry Pi is the way to go. One powerful feature of the Raspberry Pi is its row of GPIO pins that provide a physical interface between the Pi (Linux and other operating systems) and the Electronics world


The RPi.GPIO Python module offers easy access to the general purpose IO pins on the Raspberry Pi. To install it; use apt-get to install the python-dev and python-rpi.gpio packages

sudo apt-get install python-dev python-rpi.gpio

Once installed; we can write a simple blink.py script similar to the Arduino sketch above; we set the numbering scheme using GPIO.setmode; there is GPIO.BCM for GPIO Numbering mode and GPIO.BOARD for Physical Numbering.

The GPIO pins are numbered in two ways; GPIO Numbering; the way the computer sees them, which jumps about all over the place so we need a printed reference; and Physical Numbering; in which we refer to a pin by simply counting across and down.

There also exists WiringPi; a GPIO C library that lets us access the electronics connected to the Raspberry Pi from a C/C++ environment in a very Arduino like fashion. http://wiringpi.com/download-and-install/ has all the information you need to download and compile it.

Image from http://wiringpi.com/pins; where WiringPi numbering is documented

GPIO0 is physically pin 11; BCM wise it’s 17 and in WiringPi it’s referred to as pin 0. With this information; our blink.py (Python) and blink.c (C using WiringPi) would be something like this:
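A minimal blink.py sketch using the RPi.GPIO module installed above (the one second delay is illustrative):

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)    # GPIO Numbering; GPIO.BOARD would use Physical Numbering
GPIO.setup(17, GPIO.OUT)  # GPIO0 is 17 in BCM numbering

while True:
    GPIO.output(17, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(17, GPIO.LOW)
    time.sleep(1)

And a minimal blink.c using WiringPi; where the same pin is addressed as pin 0:

#include <wiringPi.h>

int main(void)
{
    wiringPiSetup();      /* WiringPi numbering; GPIO0 (BCM 17 / physical 11) is pin 0 */
    pinMode(0, OUTPUT);

    for (;;)
    {
        digitalWrite(0, HIGH);
        delay(1000);      /* delay is in milliseconds */
        digitalWrite(0, LOW);
        delay(1000);
    }
    return 0;
}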


  • To compile the C file; we use the -Wall and -lwiringPi switches with gcc; e.g. gcc -Wall -o blink blink.c -lwiringPi

The Raspberry Pi might feel costlier; but there exists the Pi Zero; it’s almost NodeMCU sized, gives us the features of its bigger brother and is, like the NodeMCU, quite affordable. We will need an OTG Ethernet or Wifi dongle if our project needs internet connectivity; and though the total cost might be more than a NodeMCU; having a PC like experience to control the electronics and the freedom that comes with its Linux based operating system is simply unmatched; it gives much better and wider options in terms of manageability when things are in production.


Posted by khurram | 0 Comments

Dockerizing PHP + MySQL Application Part 2

In the previous post we used the mysql:5.6 official Docker Image for the Database Container and created a custom Dockerfile for the PHP Container. We had to expose the MySQL container’s ports so that we could connect to the database using the MySQL CLI to create the required WordPress database and execute the MySQL dump file. In production; on the Server or the System Administrator’s machine; these CLI tools might not be available and more importantly we don’t want to expose the MySQL ports given that the Web Container can be "linked" to it within Docker. We can sort this out by creating a “DB Helper” container that has the MySQL CLI; its job is to wait till the MySQL “server” container spins up, then connect to it, create the database and run the dump SQL script.

For this we will create a shell script; createdb.sh; having the following code

# keep retrying until the MySQL server container accepts connections
while ! mysql -h db -u root --password=passwd -e 'quit'
    do sleep 10; done

# create the database and import the dump (the Dockerfile below ADDs it to /tmp)
mysql -h db -u root --password=passwd << EOF
create database wordpress;
use wordpress;
source /tmp/wordpress.sql;
EOF

The Dockerfile for our “DB Helper” container will be

FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-client
#RUN apt-get install -y nano

ADD wordpress.sql /tmp/wordpress.sql
ADD createdb.sh /tmp/createdb.sh

RUN chmod +x /tmp/createdb.sh
RUN sed -i -e 's/\r$//' /tmp/createdb.sh

# the script lives in /tmp; use its full path
CMD ["/bin/bash", "/tmp/createdb.sh"]

Given everything is “automated”; we can now create a docker-compose file with which we can define the whole environment. Here are its contents

version: '2'

services:
    db:
        image: mysql:5.6
        restart: unless-stopped
        environment:
            MYSQL_ROOT_PASSWORD: passwd
    dbhelper:
        build:
            context: .
            dockerfile: Dockerfile.dbhelper
        image: wordpress/dbhelper
        links:
            - db
    web:
        build:
            context: .
            dockerfile: Dockerfile.web
        image: wordpress/web
        restart: unless-stopped
        links:
            - db
        ports:
            - "57718:80"

Once it’s in place; we can simply issue docker-compose up --build to spin up all the containers for WordPress; the DB Helper container will create the required database and import the dump; and our application will shortly be available on the specified port; 57718 in this example

  • Use docker-compose up --build -d to launch the containers in the background in Production
  • Alternatively one can use docker-compose build to build the required images, docker-compose up to launch the containers and docker-compose down to stop them
  • docker-compose doesn’t get installed along with the Docker Daemon; please refer to its installation instructions on how to install it on the Server
  • We can execute more SQL commands from the DB Helper container to change the host / site URLs if the dump was created in a development / staging environment and production needs different URLs; see the sketch below
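For example; WordPress keeps the site and home URLs in its wp_options table (assuming the default wp_ table prefix; the host and port below are illustrative); such a statement can be appended to the heredoc in createdb.sh:

update wordpress.wp_options set option_value = 'http://production-host:57718' where option_name in ('siteurl', 'home');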
Posted by khurram | 0 Comments

Dockerizing PHP + MySQL Application

Docker allows us to package an application with all its dependencies; this makes it an ideal platform to deploy and migrate existing PHP / MySQL applications. Not only can we consolidate multiple PHP applications on a server; where one application uses the latest runtimes and another might need specific versions of PHP and MySQL; we can also test an application in an isolated environment with newer runtimes or try updating the application framework to newer versions. For this post; I will be migrating a WordPress application to Docker.

I am running WordPress using IIS Express and MySQL on my Windows development machine; but things apply equally to Linux. For the migration; we need WordPress’ PHP files and a SQL script to regenerate its associated MySQL database. Using the information from the wp-config.php file and mysqldump we can create the SQL script; and using Windows 10 Anniversary Update’s Bash on Ubuntu on Windows (quite a mouthful) we can easily create a TGZ (tar + gzip) file from the WordPress www root. We are creating the TGZ file because Docker supports it natively; when building the web container it will automatically expand it to the specified folder; and it’s easier to manage a single file instead of hundreds of web application files
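A hedged sketch of those two steps (database name, credentials and paths are illustrative):

mysqldump -u root -p wordpress > wordpress.sql
cd /path/to/wwwroot
tar -czf ../project/wordpress.tgz .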

  • Note that we created the TGZ file from within the www root folder; this is required so that there is no parent directory in the archive; I also placed it in the separate project folder where all the required files for Docker will be kept

Let’s set up the required containers; I am going to set up two containers; one for MySQL / the Database and the other for PHP / the Web. I will be using the official images so that I can rebuild my images whenever they are updated (security fixes; newer versions etc). Let’s start a standard mysql:5.6 container; we need to name it so we can link it later with the web container. I am also exposing its MySQL port so I can connect to it from the MySQL CLI to create the database and import the data using the dump SQL we created earlier.
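A sketch of such a run (the container name, password and host port are illustrative; MYSQL_ROOT_PASSWORD is the variable the official image documents):

docker run --name db -e MYSQL_ROOT_PASSWORD=passwd -p 3306:3306 -d mysql:5.6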

  • I am using Visual Studio Code; and have set the End of Line from \r\n to \n in its Workspace Settings to make the files I am creating / editing Linux friendly; later we will be creating scripts
  • I am using Docker for Windows
  • Stop the local MySQL Windows Service; if you have one; before mapping the container’s MySQL port
  • When running the MySQL container in production; we don’t have to expose its port; we can link the web container to it and it will be able to access MySQL just fine

Let’s create a Dockerfile for the web container; we will base our container on the standard php:5.6-apache image and add the required Linux components and PHP extensions using the script / mechanism the php:5.6 image recommends. We can ADD the TGZ file and Docker Build will extract it into the specified folder. I have also kept a copy of wp-config.php; with changes in it for the new MySQL settings; and created a test.php file to check the connection; both files are copied over the extracted TGZ

  • I simply referred to the WordPress official Dockerfile to learn which Linux components and PHP extensions it needs

Dockerfile for Web Container

FROM php:5.6-apache
RUN apt-get update

# our customized PHP configuration
COPY php.ini /usr/local/etc/php/

# libraries required by the gd extension
RUN set -ex; apt-get install -y libjpeg-dev libpng12-dev

RUN docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr
RUN docker-php-ext-install gd mysqli opcache

# recommended opcache settings
RUN { echo 'opcache.memory_consumption=128'; \
    echo 'opcache.interned_strings_buffer=8'; \
    echo 'opcache.max_accelerated_files=4000'; \
    echo 'opcache.revalidate_freq=2'; \
    echo 'opcache.fast_shutdown=1'; \
    echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

# Apache modules WordPress needs for permalinks and caching headers
RUN a2enmod rewrite expires

# ADD extracts the archive into the web root; the COPYs then overwrite
# wp-config.php with the new MySQL settings and add the connection test page
ADD wordpress.tgz /var/www/html
COPY wp-config.php /var/www/html/wp-config.php
COPY test.php /var/www/html/test.php
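With the Dockerfile in place; the image can be built with something like this (the tag is illustrative; it matches the image name used with docker-compose in the second part):

docker build -t wordpress/web .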

Once our web image is created; we can simply run it; linking it to the MySQL container in which we created the required database earlier; a sketch of the command follows the note below

  • We have to use the same port we were using earlier; as WordPress stores the Site URL in its database and redirects to it automatically if any other URL is requested
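A hedged sketch of that run; assuming the MySQL container was named db and the site previously ran on port 57718 (both illustrative):

docker run --name web --link db:db -p 57718:80 -d wordpress/web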

Stay tuned for the second part; in which we will use docker-compose and also try to automate certain manual steps we had to perform above

Posted by khurram | 1 Comments