
Blink with Android Things

android-things

Blink Series

Android Things is Google's OS offering for the IoT scene. It leverages Android's development tools and APIs, adds new APIs for low-level I/O, and offers libraries for common components like temperature sensors and display controllers. You can get the Developer Preview from https://developer.android.com/things/index.html; Raspberry Pi 3 along with a few other boards is supported. For the Pi 3 there is an image that we burn to an SD card, very similar to Raspbian, and boot the Pi with Ethernet connected. We can identify the IP of the device either from the DHCP server or a WiFi router with Ethernet ports. Alternatively, it has https://en.wikipedia.org/wiki/Multicast_DNS support, which has become very popular in IoT devices, and the Android Things board will publish itself as android.local on the subnet. Unfortunately Windows 10 "yet" doesn't support it "completely", but we can install Apple's Bonjour service (it comes with iTunes, or install Bonjour Print Services) and discover the IP of the device.

bonjour-service

For development we will need Android Studio 2.2+ and SDK Tools 24 or higher in the Android SDK Manager. We will also need Android 7.0 (API 24) or higher. Create a project in Android Studio targeting Android 7.0 or above, and then add the com.google.android.things:androidthings:0.1-devpreview dependency in the module-level build.gradle file

hellothings-gradle
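A minimal sketch of that module-level build.gradle addition; only the dependency coordinate comes from the text above, and the provided scope is what the preview documentation suggested since the support library already lives on the device:

dependencies {
    // Android Things support library, dev preview; already present on the device image
    provided 'com.google.android.things:androidthings:0.1-devpreview'
}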

We also need to specify the IOT_LAUNCHER category on the intent-filter in the manifest file to declare our activity as the main entry point after the device boots

hellothings-manifest
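The manifest additions look roughly like this; the uses-library entry and the IOT_LAUNCHER / DEFAULT categories follow the Android Things preview documentation, and the activity name matches the MainActivity from the code below:

<!-- inside <application>: -->
<uses-library android:name="com.google.android.things"/>

<activity android:name=".MainActivity">
    <!-- launched automatically after boot on an Android Things device -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.IOT_LAUNCHER"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>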

For our Blink; we will use Handler and Runnable for scheduling and blink logic along with PeripheralManagerService to access the GPIO pins. The code for our activity would be something like this

package pk.com.weblogs.khurram.hellothings;

import android.os.Handler;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;

import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;

import java.io.IOException;

public class MainActivity extends AppCompatActivity {
    String TAG = "HelloThings";
    Handler handler = new Handler();
    Runnable blink = null;
    Gpio led = null;

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (null != blink)
            this.handler.removeCallbacks(this.blink);
        if (null != led)
            try {
                led.close();
            } catch (IOException e) {
                Log.e(TAG, "Failed to close the LED", e);
            }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        PeripheralManagerService service = new PeripheralManagerService();
        //Log.d(TAG, "Available GPIO: " + service.getGpioList());
        //BCM17
        try {
            this.led = service.openGpio("BCM17");
            if (null == led) return;
            this.led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
            this.blink = new Runnable() {
                @Override
                public void run() {
                    try {
                        led.setValue(!led.getValue());
                        handler.postDelayed(this, 1000);
                    } catch (IOException e) {
                        Log.e(TAG, "Failed to set value on GPIO", e);
                    }
                }
            };
            handler.post(this.blink);
        } catch (IOException e) {
            Log.e(TAG, "Failed to open BCM17", e);
        }
    }
}

We can run/debug our application from Android Studio; given the Android Debug Bridge is connected to the device

hellothings-adb
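Assuming the board is reachable on the network, pointing adb at it is just a connect; the android.local name works where mDNS resolution is available, otherwise use the IP discovered earlier (the address below is illustrative):

adb connect android.local
adb connect 192.168.1.50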

Similar to Windows 10 IoT, we can have a graphical interface; when the application runs it gets displayed over HDMI on the Raspberry Pi, and we can connect a USB mouse/keyboard and interact with the application.

Android Things Resources

Android Things Review

It's still a developer preview and it's not fair to give any final verdict; however it feels and smells a lot like Windows 10 IoT, with Windows 10 IoT being a bit more mature and attractive due to its Node.js support etc. Given that the Raspberry Pi is capable of doing a lot (Raspbian / PIXEL has shown this), I think Raspberry Pi like single board computers deserve better than Windows 10 IoT and Android Things; maybe a Windows 10 Core with a proper Shell / Store / Apps as the free version and Windows 2016 Server Core as the commercial version, and an Android Lite / Core giving a complete Android TV like experience minus the phone features. Cortana and Google Now make a lot of sense here as well. The problem with both the Windows 10 IoT and Android Things platforms is that if you want to design a home automation or similar solution and keep things cloud free, you will need a separate Raspberry Pi running Raspbian, or a PC, for things like MQTT or a database / web interface etc. However, both these OSes have solid user interface frameworks and can be used as GUI "consoles" for our IoT solutions

Posted by khurram | 0 Comments

Blink with Windows 10 IoT

Blink Series

They say they designed this edition of Windows for the Internet of Things, and it's part of their "universal device platform" vision. With the Anniversary Edition it's now called Windows 10 IoT Core, and it is available from https://developer.microsoft.com/en-us/windows/iot; Raspberry Pi 2 and 3 along with a couple of other boards are supported. Last time I tried it on a Pi 2 and couldn't find a compatible WiFi USB dongle, but thankfully the Pi 3 has built-in WiFi and it works seamlessly now; they have also improved device compatibility a little.

From the developer portal you select the supported board and the Windows version, either the Anniversary one or the Insider Preview one; for Anniversary it downloads the Windows 10 IoT Core Dashboard, a ClickOnce desktop application, using which you can download the OS image and prepare the SD card. Connect the board to Ethernet and the Dashboard will find it (given you are on the same subnet); once you have the IP you can open its Device Portal and from there change the Administrator password and set up WiFi. Alternatively, you can connect a monitor/screen to the Pi's HDMI, connect a mouse/keyboard, and set up WiFi from the console!

  • The Dashboard offers to set the Administrator password, but in my case it didn't work and the OS had the default password, which is p@ssw0rd (zero, r, d)

For development, given it's part of the "universal device platform", you need Visual Studio 2015 and the Windows SDK for the Anniversary Edition. Install the Windows 10 IoT Core Project Templates from https://www.visualstudiogallery.msdn.microsoft.com/55b357e1-a533-43ad-82a5-a88ac4b01dec (or https://marketplace.visualstudio.com/items?itemName=MicrosoftIoT.WindowsIoTCoreProjectTemplates), which installs the C#, Visual Basic and C++ project templates for Background Application (IoT). If you create a C# project using this, it creates a class implementing the required interface with a single Run method taking an IBackgroundTaskInstance parameter. For the "Blink" we will need a ThreadPoolTimer that will blink our LED using the GpioPin class that we get from GpioController. These GPIO related classes are in the Windows.Devices.Gpio namespace from the Microsoft.NETCore.UniversalWindowsPlatform package that's already set up when we create the project. Here's the code that we need in Run()

public void Run(IBackgroundTaskInstance taskInstance)
{
    // TODO: Insert code to perform background work
    //
    // If you start any asynchronous methods here, prevent the task
    // from closing prematurely by using BackgroundTaskDeferral as
    // described in http://aka.ms/backgroundtaskdeferral

    var deferral = taskInstance.GetDeferral();

    var gpio = GpioController.GetDefault();
    if (null == gpio) return;

    var pin = gpio.OpenPin(17);
    if (null == pin) return;

    pin.SetDriveMode(GpioPinDriveMode.Output);

    var toWrite = GpioPinValue.High;
    var timer = ThreadPoolTimer.CreatePeriodicTimer(delegate
    {
        pin.Write(toWrite);
        if (toWrite == GpioPinValue.High)
            toWrite = GpioPinValue.Low;
        else
            toWrite = GpioPinValue.High;
    }, TimeSpan.FromMilliseconds(1000));
}

  • The LED's positive/anode side is connected to BCM GPIO 17 (GPIO0) / header pin 11, and its negative/cathode side is connected to header pin 6 (0V), similar to http://weblogs.com.pk/khurram/archive/2017/01/12/blink.aspx (the Raspberry Pi pin layout is given in that previous post)
  • Given this background task gets triggered "once" and there is no "loop" concept like in Arduino, we need to set up a timer for the LED blinking. To keep our task running while the timer is "ticking", we get a "Deferral" from taskInstance; as long as we don't call its Complete method, our task will continue to run.

We can use Visual Studio to deploy and run the application (select ARM and Remote Machine), or we can create an App Package (Project > Store) and upload our application and its certificate using the "Device Portal". Using the Device Portal we can also set it up as a "Startup" application so it will run automatically on boot.

device-portal

Given it's Windows, we can make our Blink application a typical user interface / foreground application as well, and can have a routine XAML based visual interface in the application to blink the LED. https://developer.microsoft.com/en-us/windows/iot/samples/helloworld has information on such an application along with how you can use PowerShell to make your foreground application the Default App (replacing the default console shell / launcher) etc

Another interesting development option is Node.js (Chakra build) on Windows 10. Node.js uses Chrome's JavaScript engine; Microsoft open sourced its Edge JavaScript engine, Chakra, and there exists https://github.com/nodejs/node-chakracore that lets Node.js use Chakra. The https://www.npmjs.com/package/uwp NPM package allows us to access Universal Windows Platform (UWP) APIs from Node.js (Chakra build) on Windows 10, including Windows 10 IoT Core. We can install Node.js Tools for Visual Studio (NTVS), which enables a powerful Node.js development environment within Visual Studio, and there exists an NTVS UWP Extension that enables deploying Node.js plus our app as a UWP application to Windows 10, including Desktop, Mobile and IoT.

nodejs-uwp-tools

Installing Node.js Tools for UWP Apps takes care of everything; it will install Chakra Node.js, NTVS and the NTVS UWP Extensions

  • I didn't add Chakra Node.js to the PATH, as I already had a "normal" Node.js in the PATH and didn't want to disturb my other projects; this doesn't affect NTVS UWP projects and they work fine

nodejs-uwp-projecttemplates

The NTVS UWP Extensions also install a nice collection of project templates. For my Blink I used the Basic Node.js Web Server (Universal Windows) project template and it had the uwp npm module already set up. I simply wrote these lines in server.js

var http = require('http');
var uwp = require("uwp");
uwp.projectNamespace("Windows");
var gpioController = Windows.Devices.Gpio.GpioController.getDefault();
var pin = gpioController.openPin(17);
pin.setDriveMode(Windows.Devices.Gpio.GpioPinDriveMode.output);
pin.write(Windows.Devices.Gpio.GpioPinValue.high);

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    if (pin.read() == Windows.Devices.Gpio.GpioPinValue.high) {
        pin.write(Windows.Devices.Gpio.GpioPinValue.low);
        res.end("Off");
    } else {
        pin.write(Windows.Devices.Gpio.GpioPinValue.high);
        res.end("On");
    }
}).listen(3000);

uwp.close();

  • Notice the use of uwp and that it needs to be closed with close() at the end
  • Notice how we got references to gpioController and pin, and how their naming is JavaScript friendly camelCase
  • We kept things simple; on each reload the LED switches from On to Off and vice versa; we could have buttons like the ESP8266 web interface we did in a previous post

nodejs-uwp-run

Thanks to the NTVS UWP Extensions, the project template offers UWP-project-like Visual Studio integration for debugging / running and deploying the application to a remote machine (Windows 10 IoT), packaging the application, etc.

Windows 10 IoT Resources

NTVS UWP Extension Resources

Windows 10 IoT Review

Windows 10 IoT Core is okay; it can become better if they provide a Windows 10 like "Start" screen / launcher with notifications, a proper Command Prompt + PowerShell, File Explorer, Settings, an application installer and a task scheduler. PowerShell remoting is too cryptic and SSH is the industry standard; Windows badly needs an SSH server for headless deployments. They should also revitalize ClickOnce; it's a great enterprise grade auto-update platform and can be used to update "Metro or IoT Apps" in personal / enterprise scenarios. Say I develop an IoT solution for a farmer who is not that tech savvy; if he asks me to make some changes or add features, it's too complicated to update the apps on the devices remotely. We can use the Windows Store, but it's too demanding and doesn't fit everywhere

There is no "server side" stuff; Raspbian is a solid OS on the Raspberry Pi and we can deploy MySQL, Apache, an MQTT broker and what not. Microsoft should bring their server offerings to such single board computers; make at least IIS and ASP.NET Core available on it; SQL Express, MSMQ (with MQTT support) and some file syncing tools / solution would be appreciated. Currently it feels more like a "terminal" for their Azure offerings, and not everyone wants to connect to the "Cloud". For designing such solutions we need a separate Raspberry Pi running Raspbian, or a PC / server, for things like MQTT, database and web interface etc.

  • I had Cortana in the list above, but in the latest Insider builds it's there
  • My son would like to connect the Xbox Controller to the Pi's USB and play games from the Windows Store on the Pi connected to the big screen; Xbox is a great gaming console, but the Pi is a nice Media Center on the Linux platform; "Windows" has a solid foundation, and with casual games, TV shows and movies from the Windows Store this could become a "thing"
Posted by khurram | 0 Comments

Blink

Blink Series

ESP8266 Series

Blink in electronics is the Hello World equivalent of programming. Raspberry Pi like single board computers and Arduino like open source microcontroller boards have made electronics accessible to even kids, and we as professionals can use these platforms to develop software and hardware solutions. As Alan Kay said, "People who are really serious about software should make their own hardware"; with such platforms in the market, it's not that difficult anymore!

Arduino comes in different shapes and sizes; the UNO is one of the more popular boards and is recommended for getting started. We connect it to the computer over USB and, using the Arduino IDE, write a "sketch", an Arduino program in a C-like environment, that we can then upload and run on the microcontroller seamlessly. The board gets its power from USB when connected; we can disconnect the USB and power it externally as well. It has all the necessary USB to TTL adapter and power regulation circuitry along with General Purpose Input Output (GPIO) pins where we can connect additional peripherals / hardware / electronics. The Arduino IDE has example sketches, and the Blink example is there.

arduino-ide

uno-r3-led uno-r3

The setup() method gets called when the board boots up and loop() keeps getting called after that. LED_BUILTIN is a constant, and on the UNO it's pin 13, where the on-board LED is connected. On one side of the UNO are the digital pins; on the other side are the power pins with different voltages and the analog pins. The pin numbers are labeled and we can connect our LED to any of them and use that number instead of LED_BUILTIN.
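For reference, the stock Blink example from the IDE is essentially this sketch:

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // configure the on-board LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on
  delay(1000);                     // wait a second
  digitalWrite(LED_BUILTIN, LOW);  // LED off
  delay(1000);
}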

  • The longer pin of the LED is the anode / +ve, which needs to be connected to any of the digital pins other than Ground (GND), and the shorter pin is the cathode / -ve, which needs to be connected to the GND pin
  • If we look through the LED, the anode side is smaller and the cathode side is flattened.

jumpers

  • We need jumper wires, which come in three flavors: female to female, male to male and male to female. We will need male to female wires for connecting the LED to the Arduino board directly without using a breadboard; for a breadboard we will need male to male!

Arduino is great; the IDE supports libraries and there exist many "Shields" / daughter boards for Arduino that provide additional functionality, from storing data to SD card to WiFi or Ethernet connectivity; there is a great ecosystem around it. There are some limitations: the UNO's ATmega328P microcontroller is not that speedy and has limited memory, you cannot run complicated "software" on it, and adding internet connectivity increases the cost. Thankfully there are many other options, like ESP8266 based boards. The ESP8266 provides a WiFi / TCP-IP stack out of the box, its ESP12 onwards variants are FCC approved, and it has 1MB to 4MB memory, which is quite respectable. NodeMCU is getting quite popular as it comes with a firmware that has Lua scripting support; it's ESP12 based, and AMICA's NodeMCU R2 board is breadboard friendly and comes with USB to TTL and USB power regulation circuitry similar to Arduino.

nodemcu

  • Watch out; there exist many NodeMCU / ESP8266 boards; http://frightanic.com/iot/comparison-of-esp8266-nodemcu-development-boards/ has a nice comparison; the NodeMCU AMICA R2 is the respected one and a highly recommended board.
  • Having a respected board is recommended so that when it is connected to your computer its USB to TTL adapter drivers get installed; the AMICA R2 comes with a CP2102 adapter and it gets detected and its drivers get installed seamlessly.

To use the board, we first flash the NodeMCU firmware using https://github.com/nodemcu/nodemcu-flasher; once flashed, we can use ESPlorer, an IDE for the ESP8266 similar to the Arduino IDE. It supports writing Lua scripts. As per the NodeMCU requirement, we need to create an init.lua file and upload it; this script then gets executed whenever the ESP8266 running the NodeMCU firmware is reset. Here is one such script: a web server giving us the option to turn the GPIO0 and GPIO2 ports of the microcontroller on/off. It also connects the chip to the WiFi network with the provided credentials

wifi.setmode(wifi.STATION)
wifi.sta.config("YOUR-WIFI-NETWORK","WIFI-PASSWORD")
print(wifi.sta.getip())
led1 = 3
led2 = 4
gpio.mode(led1, gpio.OUTPUT)
gpio.mode(led2, gpio.OUTPUT)
srv=net.createServer(net.TCP)
srv:listen(80,function(conn)
    conn:on("receive", function(client,request)
        local buf = "";
        local _, _, method, path, vars = string.find(request, "([A-Z]+) (.+)?(.+) HTTP");
        if(method == nil)then
            _, _, method, path = string.find(request, "([A-Z]+) (.+) HTTP");
        end
        local _GET = {}
        if (vars ~= nil)then
            for k, v in string.gmatch(vars, "(%w+)=(%w+)&*") do
                _GET[k] = v
            end
        end
        buf = buf.."<h1>ESP8266 Web Server</h1>";
        buf = buf.."<p>GPIO0 <a href=\"?pin=ON1\"><button>ON</button></a>&nbsp;<a href=\"?pin=OFF1\"><button>OFF</button></a></p>";
        buf = buf.."<p>GPIO2 <a href=\"?pin=ON2\"><button>ON</button></a>&nbsp;<a href=\"?pin=OFF2\"><button>OFF</button></a></p>";
        local _on,_off = "",""
        if(_GET.pin == "ON1")then
              gpio.write(led1, gpio.HIGH);
        elseif(_GET.pin == "OFF1")then
              gpio.write(led1, gpio.LOW);
        elseif(_GET.pin == "ON2")then
              gpio.write(led2, gpio.HIGH);
        elseif(_GET.pin == "OFF2")then
              gpio.write(led2, gpio.LOW);
        end
        client:send(buf);
        client:close();
        collectgarbage();
    end)
end)

  • NodeMCU firmware comes with Lua library for wifi and networking that we are using above to connect to Wifi network and setting up a basic web server and writing data to GPIOs

Once it's up, we can access our Lua app and turn the GPIO port where we connected our LED on and off

esplorer

  • We can call Lua functions from the IDE; using wifi.sta.getip() we can get the IP address the device has got on the WiFi network
  • Arduino started developing non-AVR boards and modified their IDE to support different tool chains for these boards using "cores"; there exists an ESP8266 Arduino Core using which we can use the Arduino IDE, its C/C++ based language and many Arduino libraries out in the world to compile an "Arduino sketch" for the ESP8266 as well; the ESP8266 core comes with the required libraries for the WiFi / networking capabilities of the MCU (Micro Controller Unit)

NodeMCU / ESP8266 can do a lot in the modern IoT world; there are many devices out there powered by this microcontroller and we can achieve a lot, from cloud connectivity to auto-updateable app + firmware. But if we want computer-like features in our electronics setup, i.e. the ability to run many programs, change and debug them remotely, have rich on-board computing, run additional software on the board (database / programming environment / custom networking capability), or use hardware that needs drivers or a custom application (a USB camera or similar), then the Raspberry Pi is the way to go; one powerful feature of the Raspberry Pi is the row of GPIO pins that provide a physical interface between the Pi (Linux and other operating systems) and the electronics world

rasberry-pi

The RPi.GPIO Python module offers easy access to the general purpose I/O pins on the Raspberry Pi. To install it, use apt-get and install the python-dev and python-rpi.gpio packages

sudo apt-get install python-dev python-rpi.gpio

Once installed, we can write a simple blink.py script similar to the Arduino sketch above; we set the numbering scheme using GPIO.setmode; there is GPIO.BCM for GPIO numbering mode and GPIO.BOARD for physical numbering.

The GPIO pins are numbered in two ways: GPIO numbering, the way the computer sees them (these jump about all over the place and we need a printed reference for them), and physical numbering, in which we refer to a pin by simply counting across and down.

There also exists WiringPi, a GPIO C library that enables us to access the electronics connected to the Raspberry Pi from a C/C++ environment in a very Arduino-like fashion. http://wiringpi.com/download-and-install/ has all the information you need to download and compile it.

Image from http://wiringpi.com/pins; where WiringPi numbering is documented

GPIO0 is physically pin 11, BCM-wise it is 17, and in WiringPi it is referred to as pin 0. With this information, our blink.py (Python) and blink.c (C using WiringPi) would be something like this:

ssh
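The screenshot above shows them over SSH; minimal sketches along those lines (BCM pin 17 / WiringPi pin 0, one-second blink, assuming the wiring described earlier) would be, for blink.py:

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)     # use BCM (GPIO) numbering
GPIO.setup(17, GPIO.OUT)   # GPIO17 / physical pin 11

while True:
    GPIO.output(17, GPIO.HIGH)
    time.sleep(1)
    GPIO.output(17, GPIO.LOW)
    time.sleep(1)

and for blink.c:

#include <wiringPi.h>

int main(void)
{
    wiringPiSetup();       /* WiringPi numbering; pin 0 is BCM 17 / physical pin 11 */
    pinMode(0, OUTPUT);
    for (;;)
    {
        digitalWrite(0, HIGH);
        delay(1000);
        digitalWrite(0, LOW);
        delay(1000);
    }
    return 0;
}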

  • To compile the C file we will use the -Wall and -lwiringPi switches with gcc
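Putting that together, the compile step looks something like this (output file name is up to you):

gcc -Wall -o blink blink.c -lwiringPi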

The Raspberry Pi might feel costlier, but there is the Pi Zero, which is almost NodeMCU sized, gives us the features of its bigger brother and is, NodeMCU-like, quite affordable. However, we will need an OTG Ethernet or WiFi dongle if we need internet connectivity in our project, and though the total cost might be more than a NodeMCU, having a PC-like experience to control the electronics and the freedom that comes with its Linux based operating system is simply unmatched, and it gives much better and wider options in terms of manageability when things are in production.

WP_20170112_14_51_58_Raw

Posted by khurram | 0 Comments

Dockerizing PHP + MySQL Application Part 2

In the previous post we used the mysql:5.6 official Docker image for the database container and created a custom Dockerfile for the PHP container. We had to expose the MySQL container's ports so we could connect to the database using the MySQL CLI to create the required WordPress database and execute the MySQL dump file. In production, on the server or the system administrator's machine, these CLI tools might not be available, and more importantly we don't want to expose the MySQL ports given that the web container can be "linked" to it within Docker. We can sort this out by creating a "DB Helper" container that has the MySQL CLI and whose job is to wait till the MySQL "server" container spins up and then connect to it, create the database and run the dump SQL script.

For this we will create a Shell script having the following code

#!/bin/bash
while ! mysql -h db -u root --password=passwd -e 'quit'
    do sleep 10;
done;
mysql -h db -u root --password=passwd << EOF
create database wordpress;
use wordpress;
source wordpress.sql;
EOF

The Dockerfile for our “DB Helper” container will be

FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-client
#RUN apt-get install -y nano

ADD wordpress.sql /tmp/wordpress.sql
ADD createdb.sh /tmp/createdb.sh

RUN chmod +x /tmp/createdb.sh
RUN sed -i -e 's/\r$//' /tmp/createdb.sh

WORKDIR /tmp
CMD /bin/bash createdb.sh

Given everything is "automated", we can now create a docker-compose file with which we can define the whole environment. Here are its contents

version: '2'

services:
    db:
        image: mysql:5.6
        restart: unless-stopped
        environment:
            MYSQL_ROOT_PASSWORD: passwd
    dbhelper:
        build:
            context: .
            dockerfile: Dockerfile.dbhelper
        image: wordpress/dbhelper
        links:
            - db
    web:
        build:
            context: .
            dockerfile: Dockerfile.web
        image: wordpress/web
        restart: unless-stopped
        links:
            - db
        ports:
            - "57718:80"

Once it's in place, we can simply issue docker-compose up --build to spin up all the containers for WordPress; the DB Helper container will create the required database and import the dump, and our application will shortly be available on the specified port, 57718 in this example

  • Use docker-compose up --build -d to launch the containers in the background in production
  • Alternatively, one can use docker-compose build to create the required images, then docker-compose up to launch and docker-compose down to stop the containers
  • docker-compose doesn't get installed alongside the Docker daemon; please refer to its installation instructions on how to install it on the server
  • Using the DB Helper container, we can execute more SQL commands to change the host / site URLs if we created the dump in the development / staging environment and want different host / site URLs in production.
Posted by khurram | 0 Comments

Dockerizing PHP + MySQL Application

Docker allows us to package the application with all its dependencies, and this makes it an ideal platform to deploy and migrate existing PHP / MySQL applications. Not only can we consolidate multiple PHP applications on the server, where one application uses the latest runtimes and another might need specific versions of PHP and MySQL, but we can also test the application in an isolated environment with newer runtimes, or try updating the application framework to newer versions. For this post I will be migrating a WordPress application to Docker.

I am running WordPress using IIS Express and MySQL on my Windows development machine, but things apply equally to Linux. For migration we need the WordPress PHP files and a SQL script to generate its associated MySQL database. Using the information from the wp-config.php file and mysqldump we can create the SQL script, and using Windows 10 Anniversary Update's Bash on Ubuntu on Windows (quite a mouthful) we can easily create the TGZ (tar + gzip) file from the WordPress www root. We are creating the TGZ file because Docker supports it natively; when creating the web container it will automatically expand it into the specified folder, and it's easier to manage a single file instead of hundreds of web application files

 
  • Note that we created the TGZ file from inside the www root folder; this is required so that there is no parent directory in the archive. I also placed it in the separate project folder where all the required files for Docker will be kept
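A minimal sketch of those two steps, assuming a database named wordpress and the www root as the current directory (credentials, database name and the ../project output path are illustrative):

mysqldump -u root -p wordpress > ../project/wordpress.sql
tar -czf ../project/wordpress.tgz .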

Let's set up the required containers; I am going to set up two containers, one for MySQL / database and the other for PHP / web. I will be using the official images so that I can rebuild my images whenever they are updated (security fixes, newer versions etc). Let's start a standard mysql:5.6 container; we will need to name it so we can link it later with the web container. I am also exposing its MySQL port so I can connect to it from the MySQL CLI to create the database and import the data using the dump SQL we created earlier.
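Something along these lines; the container name, root password and port mapping here are illustrative (the MYSQL_ROOT_PASSWORD environment variable is the one the official mysql image documents):

docker run --name db -e MYSQL_ROOT_PASSWORD=passwd -p 3306:3306 -d mysql:5.6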

  • I am using Visual Studio Code and have set the End of Line from \r\n to \n in its Workspace Settings to make the files I am creating / editing Linux friendly; later we will be creating scripts
  • I am using Docker for Windows
  • Stop the local MySQL Windows service, if you have one, before mapping the container's MySQL port
  • When running the MySQL container in production we don't have to expose its port; we can link the web container to it and it will be able to access MySQL just fine

Let's create a Dockerfile for the web container; we will base our container on the standard php:5.6-apache image and add the required Linux components and PHP extensions using the script / mechanism the php:5.6 image recommends. We can add the TGZ file and Docker build will extract it into the specified folder. I have also kept a copy of wp-config.php, with changes made for the new MySQL settings, and created a test.php file to check the connection, copying these two files over the extracted TGZ contents

  • I simply referred to the official WordPress Dockerfile to learn which Linux components and PHP extensions it needs

Dockerfile for Web Container

FROM php:5.6-apache
RUN apt-get update

COPY php.ini /usr/local/etc/php/

RUN set -ex; apt-get install -y libjpeg-dev libpng12-dev

RUN docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr
RUN docker-php-ext-install gd mysqli opcache

RUN { echo 'opcache.memory_consumption=128'; \
    echo 'opcache.interned_strings_buffer=8'; \
    echo 'opcache.max_accelerated_files=4000'; \
    echo 'opcache.revalidate_freq=2'; \
    echo 'opcache.fast_shutdown=1'; \
    echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

RUN a2enmod rewrite expires

ADD wordpress.tgz /var/www/html
COPY wp-config.php /var/www/html/wp-config.php
COPY test.php /var/www/html/test.php

Once our web image is created, we can simply run it, linking it to the MySQL container in which we created the required database earlier

  • We have to use the same port we were using earlier, as WordPress stores the Site URL in its database and redirects to it automatically if any other URL is requested
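A sketch of that run command, assuming the web image was built/tagged as wordpress/web and the MySQL container was named db (both names match the compose file in the next post; the 57718 port is the one the site was already using):

docker run --name web --link db:db -p 57718:80 -d wordpress/web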

Stay tuned for the second part; in which we will use docker-compose and also try to automate certain manual steps we had to perform above

Posted by khurram | 1 Comments

Floating IP and Containers

Rancher Series

A floating IP or virtual IP address can be moved from one host to another in the same network / datacenter; this technique can be used to implement high availability infrastructure. If one host goes down, the floating IP address is given to a redundant host!

Image Credit: Digital Ocean

On Linux we can use UCARP, a Linux port of BSD's CARP (Common Address Redundancy Protocol); on Debian / Ubuntu you can get it using apt-get install ucarp; https://debian-administration.org/article/678/Virtual_IP_addresses_with_ucarp_for_high-availability is an excellent write-up on this topic!

securitynetworking

For implementing UCARP in a container, we need administrative access to the "host" network interfaces. In Docker this can be done by passing the --cap-add=NET_ADMIN and --net=host parameters to the docker run command. With these two flags we are basically telling Docker to use the host's network interface as the container's network interface and giving the container administrative access for network administration. With these two flags set, a container can add / change / remove additional IPs on the host without any issue. The Rancher web interface is sweet, and here are the related settings

  • I was lazy and enabled full access to the host; you can set individual capabilities from Rancher's interface
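As a sketch only, a plain docker run with those two flags would look like the line below; the image name matches the one published later in this post, and any UCARP-specific settings (virtual IP, vhid, password) would be passed per that image's own documentation:

docker run -d --net=host --cap-add=NET_ADMIN khurramaziz/ucarp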

Note that we enabled the Rancher DNS service discovery as well; doing this, Rancher will define the environment's DNS server, using which we can discover the infrastructure services and any service we have deployed. This is required to discover and monitor the state of the load balancer of our Rancher–First Application in the environment, as discussed in Rancher Infrastructure Services. If we attach to this container, we can see that eth0 is not the virtual one, it's the same as the host's, and we are able to resolve the Rancher metadata service end point as well.

debian2

Notice that our standalone container is not listing any "managed IP" because it doesn't have any. I have uploaded the UCARP container image to Docker Hub; it's available as khurramaziz/ucarp. The source code of the container is also uploaded on GitHub and is available at https://github.com/khurram-aziz/HelloDocker. For our application we can set up floating IP high availability by running UCARP containers on both hosts. Given our web and database containers are running on both hosts and the load balancer is using all the web application containers, our implementation becomes highly available as well as scalable (using all available hosts), similar to the Digital Ocean picture above.

As an exercise, enhance khurramaziz/ucarp using the Rancher Infrastructure Services so that your enhanced container monitors the health of the load balancer as shown in the previous post and, in case it goes down, removes its UCARP IP; # kill -USR2 PidOfUcarp will demote the UCARP master (if it is the master)

References

Posted by khurram | 0 Comments

Rancher Infrastructure Services

Rancher Series

Rancher provides Infrastructure Services; many of which we used to run our first application.

It provides a Networking service, which gives 172.17.0.0/16 Docker bridge IPs and 10.42.0.0/16 Rancher managed IPs. It then sets up a secure network using IPsec tunnels that enables cross-host container communication. Using this, our web containers were able to connect to MongoDB containers across multiple hosts seamlessly.

We also saw the Load Balancer service, which uses HAProxy; it not only scales to multiple hosts, but when we "linked" our web service to it, it also found the web containers across multiple hosts and used all of them as load balancer targets.

It provides a DNS service; when we linked the MongoDB cluster into the web server container, we were able to ping all the containers with the service alias; we could use the DNS name of our linked service instead of giving multiple IPs, but we had to set up Mongoose accordingly, telling it that we are connecting to a cluster and not to a single MongoDB host

It also provides a Metadata service, which we will see more of in this post; a Persistent Storage service, using which we can expose volumes to our containers (topic of an upcoming post); Audit Logging, using which admins can view who is configuring what in the environment through the admin web interface; and Service Accounts, using which we can make applications that interact with Rancher using its API

Using the Metadata service we can query the data Rancher is maintaining for the environment. It's an HTTP based API and the data is returned in JSON format. To try it out, I created a "Standalone Container" using debian and installed curl (apt-get install curl). The API end point is http://rancher-metadata, which is made available to the container through Rancher's DNS service; there are two versions, and I used 2015-12-19. The version is appended to the URL and then we add different paths for different queries. Here's a screenshot of some of the queries I tried to find out whether the load balancer of our first application is running or not!

curl - metadata
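A couple of illustrative queries along the lines of the screenshot; the paths here just show the shape of the API under the 2015-12-19 version and are not the exact ones I ran:

curl http://rancher-metadata/2015-12-19/self/container
curl http://rancher-metadata/2015-12-19/services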

  • To access a RancherOS machine using PuTTY, you first need to create a PuTTY PPK file using the PuTTY Key Generator: import the id_rsa file that has the private key we generated using the Git tools while installing RancherOS and save the private key; the PuTTY Key Generator will make a PPK file; keep it in the same %UserProfile%\.ssh folder and then connect to your RancherOS machine using >putty -i %UserProfile%\.ssh\id_rsa.ppk rancher@YourRancherOSMachineIP

Resource

Posted by khurram | 0 Comments

Rancher–First Application

Rancher Series

New MongoDB Stack
Rancher Catalog

Let's deploy the first application in the Rancher environment that we created in the first post. We will be deploying the let's Chat application; it's an open source Slack clone built using NodeJS and MongoDB. There exists an official let's Chat Docker container, so we have a quick clean start.

As we set up two hosts for our Rancher environment earlier, we will try to deploy it in a high availability and scalable configuration. Rancher has a catalog, and from there we can deploy the MongoDB stack. This particular stack sets up three clustered MongoDB containers with the specified replica set name. I went ahead and set it up with default values. There are a few "sidekick" containers; this is Rancher specific: these are containers "bound" to the primary container, and they are all always deployed and run as a unit; we can use this feature to set up volumes or run setup scripts for the primary container. There are two sidekicks for each Mongo container in the catalog item.

On Launch, it will set up the required containers across our two hosts shortly; it downloads the Docker images from Docker Hub, and for large containers setup may take a while depending on the network speed. It feels bizarre that Rancher has no built-in option to orchestrate the image downloads as well, so that network resources are not wasted in multiple downloads of the same image per host.

Infrastructure
MongoDB Stack

Once set up, we can scale the MongoDB stack up or down; I set it to two so I have one MongoDB container per host. We can view CPU/memory and network usage of the containers and review the sidekick containers from the interface. Knowing the Docker image name of a sidekick, we can review it on Docker Hub and, if we are lucky, find its source from there or by googling! This is a fantastic way to learn how the community has built up these stacks; I encourage you to go ahead and try other stacks as well!

Rancher provides a modern user interface to manage and orchestrate our Docker containers; we can view our hosts and the containers running on them, and we can stop / start or restart the containers easily using the interface.

Given we scaled down the MongoDB stack, there is "an instance" of one stopped container; we can go ahead and purge it

let's Chat

Now our MongoDB is running in replication mode across two hosts; even if a container goes down, or one of the machines running these containers goes down, the MongoDB service will remain available.

Next we need to deploy the let's Chat Docker container in a similar way; notice I have specified to run one instance on each host so it gets deployed on all participating hosts, have specified their official Docker image, and have linked up our MongoDB cluster as the "mongo" service, per their image requirement.

We need to do one more step; if we run the let's Chat container as is, it will give an error failing to connect to our MongoDB cluster. Luckily, the official let's Chat container has the option to set environment variables, and one of them is the Mongo connection string. For the cluster we need to specify the IP addresses of all the MongoDB nodes, which we can learn from the Infrastructure page

image

Let's set the LCB_DATABASE_URI environment variable for the let's Chat container and give the IPs of all the MongoDB nodes in the connection string as per the Mongoose requirement; in my case the connection string is mongodb://10.42.38.6:27017,10.42.59.150:27017/letschat

Let's go ahead and deploy the container; it will download the image from Docker Hub and run an instance on all participating hosts shortly.

The Rancher interface also has options to connect to a container's shell or view its logs; using these, system administrators or developers can debug and solve issues.

Execute Shell

We can connect to the shell and confirm that Rancher has made the appropriate hosts entries for the Mongo cluster.

If the web application is failing to connect to MongoDB or giving errors, we can review them from View Logs, and from the shell debug and fix the issue. For instance, if we don't set the environment variable above for clustered MongoDB, it will fail to run and the logs will have the Mongoose error!

  • Note that Rancher and other similar products have similar features; as developers we should use the standard output and error streams and emit appropriate logs that the system administrators can use. They will also appreciate it if we provide utilities in the containers to troubleshoot or fix issues on their own!

So now we have two instances of the web server container and a clustered MongoDB; for fail-over and scalability we need a "Load Balancer". Rancher has a built-in one, so let's deploy an instance of it!

Add Load Balancer

Load Balancer Configuration

Let's use the same always-run-one-instance scalability option for the load balancer, map the service container's port 8080 (the official let's Chat container runs the app on port 8080) and link up the service. Rancher's load balancer is HAProxy based; instead of making our own, using their offering we can link our web service and it will find its containers (even if we have just one running) and load balance across multiple running containers of our web app seamlessly.

If we want to, we can further customize the HAProxy configuration file; the user interface allows us to add configuration. For our case, nothing else is required.

Now our stack is complete and we can access the application at the two IPs of our participating hosts!

let's Chat

You might notice the IPs of the containers have changed; that's because I went ahead and "cloned" the containers (which Rancher supports; the quickest way to copy existing containers), setting the environment variable accordingly, and while making the new containers I chose not to restart the containers (the default configuration) so that I could stop a few and simulate container failure. I am stopping one web server container, but I am still able to access my application

Failing Web Server

So we have a fairly complete fail-over and scale-out implementation here: if a MongoDB container or NodeJS container goes down, things will continue to work, and even if traffic comes in on one IP, the load is distributed across all participating nodes. The only questions left are: what if one of the load balancers goes down? And which IP do we give in the DNS for our application, 192.168.10.15 (Rancher2 VM) or 192.168.10.16 (Rancher3 VM)? If we give both IPs, the traffic will load balance across these two machines, but what if a whole host is brought down for maintenance? We will figure that out in the next post!

Posted by khurram | 0 Comments

RancherOS

Rancher Series

Containers in general and Docker containers in particular are becoming popular every day, mainly because they allow us to have a componentized environment to run applications and host them on premises or in the cloud seamlessly. Containers are portable and simple; vendors are adding new features to allow us to create scalable container environments. But when containerizing even a simple application following a micro-service architecture and then making it scalable and highly available, things become complex very quickly; managing and running these applications becomes daunting. "Container Orchestration Tools" become a necessity for production environments. These tools provide multi-host / cluster aware container manageability, assist in mitigating security and networking issues, and provide the services needed to scale the applications.

Docker has introduced Swarm recently; it's coming in the next release, it's an orchestration platform, and being "in the box" it will play an important role. Google's Kubernetes is currently leading, but it's suited for large cloud environments; it's complex and overkill for enterprise and on-premises cloud environments. Apache's Mesos is another option, but it's not just a Container Orchestration Tool; it's designed for general datacenter management. It's good to know and learn about these tools and what common features they provide, so that we developers can assist our system administrators in keeping our applications running in production, and so that by using the features and services these tools provide we can design and develop cloud aware applications without reinventing the wheel. This series will be about Rancher: it's very user friendly, has very little learning curve, provides the features and services of any modern Container Orchestration Tool, and most importantly can be set up in a typical single-laptop development environment.

Rancher comes in two flavors; it's a Docker container available on Docker Hub that you can run on any Docker host, and in its second offering it's a Linux distribution designed especially to run Docker containers, very similar to Boot2Docker. The distinguishing thing about RancherOS is that the OS itself runs everything in containers.
rancheros-docker-kernel[1]

Rancher provides network, storage, load balancer and DNS services. It provides Docker Swarm, Kubernetes and Mesos orchestration along with its own implementation. It has application cataloging features and other required enterprise grade features.

Rancher Overview

To set it up, you will need the Git tools installed on your Windows machine, a Linux friendly editor like Visual Studio Code, and PuTTY's PSCP tool.

Using the Git shell, create the SSH key using ssh-keygen; it will create the id_rsa and id_rsa.pub files. Using the editor, create a cloud-config.yml that has your public SSH key from the id_rsa.pub file, as shown in the picture below.

image
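A minimal cloud-config.yml along those lines looks roughly like this; the key below is a placeholder for the contents of your own id_rsa.pub:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza...your-public-key... user@machine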

Once the yml file is made with all the required network infrastructure details, we can mount the ISO in a new virtual machine or on a physical machine and boot RancherOS. The ISO is available at https://github.com/rancher/os. When the machine is booted up, simply copy the cloud-config.yml using PSCP (from PuTTY) to the Rancher machine. SSH to your Rancher machine and issue the sudo ros install -c cloud-config.yml -d /dev/sda command, and it will download the latest files and install the OS

image

After the install and reboot we will have RancherOS running and we can SSH to it from the Git shell using ssh rancher@ip. It's just an OS with the ability to run Docker containers. We still need to run the Rancher container using sudo docker run -d --restart=always -p 8080:8080 rancher/server, and when the Rancher container is running we can access its GUI at http://YourRancherMachine:8080

image

You can view system level containers running in RancherOS using system-docker and user level containers using traditional docker binary

image

Rancher has the concept of hosts: the nodes that will run the containers; if you need more machines acting as hosts, you add hosts from the Infrastructure option. Typically some hosts will have live IPs where you will deploy your public facing apps and some will have internal IPs; it's better to have multiple NICs (network interfaces) on the Rancher Server machine. You can set up networking in RancherOS using ros config, and for multiple NICs use ros config set rancher.network.interfaces."mac=11:22:33:44:55:66"; for a static IP the commands will be

sudo ros config set rancher.network.interfaces."mac=11:22:33:44:55:66".dhcp false
sudo ros config set rancher.network.interfaces."mac=11:22:33:44:55:66".gateway 1.1.1.1
sudo ros config set rancher.network.interfaces."mac=11:22:33:44:55:66".address 1.1.1.2/24

When we click Add Host button; it gives us the docker run command that we need to run on the node

image

For the hosts, the rancher/agent Docker container runs on each host; it can run on any machine that has Docker installed and configured, i.e. any Docker compatible Linux machine (physical or VM). We can use RancherOS for the hosts as well. It's recommended not to use the Rancher Server machine as an agent, as a few components of Rancher Server are made in Java, and you know it needs more resources :) I created two separate VMs for the hosts and ran the rancher/agent containers using the commands it asks for; they appear shortly in Infrastructure > Hosts

image

Theoretically a Raspberry Pi can also act as a "host"; we can use Raspbian and install Docker (Hypriot prepared Docker deb files, as discussed in the Docker on Raspberry Pi post) and use it; we can use an OS/image especially made for the Raspberry Pi that has Docker, like the one from Hypriot, or we can use RancherOS, which also comes for the Raspberry Pi.

image

  • I tried RancherOS on the Pi, but it was giving weird errors when installing; it turned out it was because the memory card I was using was too fast :) so I found an old card, used it instead and it installed fine.
  • Sadly it doesn't extend the partition and doesn't use the complete available storage of the card. You have to use GParted to extend the partition.
  • There's no cloud-config.yml to install RancherOS; instead it's offered as a memory card image; to set a static IP on the RancherOS machine, use sudo ros config set rancher.network.interfaces.eth1.address "192.168.10.21/24" followed by sudo system-docker restart network (or reboot)
  • The Raspberry Pi is ARM; we can't run / orchestrate x86/x64 containers on Pis; we need ARM images instead
  • Sadly there is no prebuilt ARM rancher/agent container, which needs to run so the Pi gets listed in Hosts and can be used for orchestration; we need to compile it from source
  • Rancher Server currently can't be compiled on the Raspberry Pi; we can only use Pis as hosts / agents; it would be great to have a Pis-only Rancher cluster

You can follow these GitHub URLs for updates

Visual C++ for Linux Development

Visual C++ for Linux Development is a Visual Studio 2015 extension by Microsoft that lets us write C++ code in Visual Studio for Linux machines and devices. It connects to the machine or device over SSH and uses the machine's / device's g++, gdb and gdbserver to provide a compilation and debugging experience from within Visual Studio. After installing the extension, it adds the Linux Connection Manager into Visual Studio's Options dialog, through which we manage the SSH connections to Linux machines or devices (ARM is also supported). It also adds project types; currently there are three: Blink (Raspberry), Console Application (Linux) and Empty Project (Linux). You can write C++ code using Unix headers and libraries. For IntelliSense / removing red squiggles, you will need to download the header files (using PuTTY's PSCP) to the development machine and add that folder in the project properties' VC++ Directories section

Linux Connection Manager
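For example, a trivial Console Application (Linux) body that leans on a Unix header might be nothing more than this sketch:

#include <cstdio>
#include <unistd.h>

int main()
{
    // getpid() comes from the Unix headers on the remote machine / device
    std::printf("Hello from process %ld\n", (long)getpid());
    return 0;
}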

We can use this extension with a Docker container as well; all we need is an image having an SSH server (OpenSSH), g++, gdb and gdbserver. I created this Dockerfile

FROM ubuntu:trusty
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install openssh-server
RUN apt-get -y install g++
RUN apt-get -y install gdb gdbserver
RUN apt-get -y install nano iputils-ping

RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

  • For some weird reason, the g++ installation was failing on the latest ubuntu image, therefore I used ubuntu:trusty as the base image
  • We need to set the root password and configure OpenSSH to allow direct root ssh
  • SSH port 22 is exposed, which we can map to the Docker host machine
  • SSHD is started using CMD with the -D flag so detailed logs get created in case Visual Studio fails to connect to it and you need to troubleshoot

Once the image is built, you can run it with the following docker run command; I have also uploaded the image to Docker Hub, so you can directly use the following docker run command and it will download the prebuilt image for you automatically

docker run --name linuxtools -v /root/projects -p 2222:22 -d khurramaziz/linuxtools

  • We can't map the exposed port 22 to the host's port 22, as there's (usually) already an SSH server running on the host's 22, so we map it to 2222 instead
  • The -d option is used to run it in the background
  • Notice I have mounted /root/projects as a Docker volume; this is where the extension uploads the project files, compiles, and places the built binaries. I have also named the container so that I can use --volumes-from when running other containers later to test the built binaries

Once the container is up and running, we can SSH to it; if using PuTTY, use the -P flag to specify the port: putty -P 2222 YourDockerHost. If that works fine, we can set up its connection in Visual Studio's Linux Connection Manager. When everything is in order, we can build our project; if we do the DEBUG build, our HelloLinux binary will be at /root/projects/HelloLinux/bin/x64/Debug/HelloLinux.out, which we can run from the SSH session

SSH

Given we have the binary on the volume, we can run other containers mounting that volume, run the ELF64 binary, and it will run fine

Containers

If you try to run the compiled binary in a plain official busybox container, it will fail as it doesn't have the required C libraries. Either add --static (dash dash static) in the Linker settings (Project Properties), or use the busybox:ubuntu-14.04 Docker image (5.6MB compared to 1MB) that has all the C libs in place. Your Dockerfile will be something like this

FROM busybox:ubuntu-14.04
COPY HelloLinux/bin/x64/Release/HelloLinux.out /hello
CMD ["/hello"]

Resources

GlusterFS Volume as Samba Share

We made a Docker container using a Dockerfile in the GlusterFS post that can mount a GlusterFS volume (running on Raspberry Pis); let's extend that Dockerfile and add the Samba server to expose the mounted directory as a Samba share so it can be accessed from Windows. For this we need to add these additional lines to the Dockerfile

RUN apt-get -y install samba

EXPOSE 138/udp
EXPOSE 139
EXPOSE 445
EXPOSE 445/udp

We are installing samba and exposing the TCP / UDP ports that Samba uses; if we build and run this container, we need to expose these ports using the -p 138:138/udp -p 139:139 -p 445:445 -p 445:445/udp parameters in the docker run command. After running it, to expose the directory through Samba, we need to add the following lines at the end of /etc/samba/smb.conf

[data]
path = /data
read only = no

Samba uses its own password files; to add the root user into them, run smbpasswd -a root and finally restart the Samba daemon using service smbd restart. Now if we browse \\DOCKERMACHINE from Windows, we should see the data share and can access it using root and the entered password. These are a lot of manual steps after running the container; to solve this, let's create a setup.sh shell script that we will add into the container (through the Dockerfile); we will use environment variables, as we can pass them in the docker run command. Our final docker run command will look like this

docker run --name glustersamba --cap-add SYS_ADMIN --device /dev/fuse --rm -e glusterip=Gluster-Server-IP -e glusterhost=Gluster-Server-FriendlyName -e glustervolume=Gluster-Volume-Name -p 138:138/udp -p 139:139 -p 445:445 -p 445:445/udp -it khurramaziz/gluster:3.5.2-samba

  • Notice the three environment variables, glusterip, glusterhost and glustervolume, that are passed using -e
  • Notice the Samba ports being exposed using -p
  • Notice that the SYS_ADMIN capability and the /dev/fuse device are added; these are required for the glusterfs client / mounting
  • khurramaziz/gluster:3.5.2-samba exists on the Docker Registry; you can go ahead and run the above command and it will download the image layers; you upload a created image using docker push imagename:tag

If you are not interested in how the image is made, you can skip the rest of the post, as I have pushed this image to the Docker Hub registry; you can issue the above command and it will work!

Here's the setup.sh that uses the above three environment variables to mount the GlusterFS volume at /data and then exposes it through Samba

#!/bin/sh
smbpath="/etc/samba/smb.conf"
echo $glusterip $glusterhost >> /etc/hosts
mkdir /data
mount -t glusterfs $glusterhost:$glustervolume /data
smbpasswd -a root
echo [data] >> $smbpath
echo path = /data >> $smbpath
echo read only = no >> $smbpath
service smbd restart

And here's the Dockerfile that adds the above setup.sh and runs it on start up using the CMD directive

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install software-properties-common python-software-properties
RUN apt-get -y install libpython2.7 libaio1 libibverbs1 liblvm2app2.2 librdmacm1 fuse
RUN apt-get -y install curl nano
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-common_3.5.2-4_amd64.deb > glusterfs-common_3.5.2-4_amd64.deb
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-client_3.5.2-4_amd64.deb > glusterfs-client_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-common_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-client_3.5.2-4_amd64.deb

RUN apt-get -y install samba

EXPOSE 138/udp
EXPOSE 139
EXPOSE 445
EXPOSE 445/udp

ADD setup.sh /setup.sh
RUN chmod +x /setup.sh

CMD /setup.sh && /bin/bash
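
To build this image locally instead of pulling it from the Docker Hub; a minimal sketch, assuming setup.sh and the Dockerfile are in the current directory and reusing the tag from the docker run command above

docker build -t khurramaziz/gluster:3.5.2-samba .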

Ideally, if we are following Microservices Architecture, we should have a separate container for the Samba Server; the GlusterFS Client Container would act as a producer exposing the mounted GlusterFS volume and the Samba Server Container would act as a consumer exposing that volume as a Samba Share. Sadly this is not possible (or at least I don't know of any way) as the Docker Volume that gets created will have the files that were there before we mount the GlusterFS volume. When the GlusterFS volume is mounted into the producer container, the consumer container will continue to see the "before" files + directories and not what's in the GlusterFS volume

  • Docker has plugin support; there are Volume Plugins using which we can create Volumes that get stored according to the plugin / driver used. There also exist GlusterFS volume plugins that we can use; with these we would not require the GlusterFS Client Container; instead the host mounts the volume and such volumes can be used as Docker Volumes in the containers

image

A proof of concept of a producer / consumer implementation using a Docker Volume; a minimal command sketch follows the list below

  • Notice the producer is Ubuntu and the consumer is CentOS
  • Notice that for the producer container's run command a name is defined, as it is required for the consumer container run command's --volumes-from section
  • Notice that for the producer container's volume only the target path is defined; Docker will create a volume automatically and map it as the defined path into the container; this volume / directory gets stored outside Docker's Union File System and, given the name, can be used in other containers if they are run using --volumes-from
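
Here is a minimal command sketch of that producer / consumer arrangement; the container names and the /shared path are illustrative, and the two commands are meant to be run from separate terminals

docker run -it --name producer -v /shared ubuntu /bin/bash
docker run -it --name consumer --volumes-from producer centos /bin/bash

Anything written under /shared in either container shows up in the other; docker volume ls will list the automatically created volume backing it.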

Resources

GlusterFS

GlusterFS is a scale-out network-attached storage file system that has found applications in cloud computing, streaming media services, and content delivery networks. GlusterFS was developed originally by Gluster, Inc. and then by Red Hat, Inc., as a result of Red Hat acquiring Gluster in 2011, says Wikipedia. It's a distributed file system that we run on multiple hosts having "bricks" that host the data physically (on storage); the nodes communicate with each other (peers) and we can create a volume across these nodes with different strategies; replication is one of them, and if chosen, data gets stored in the bricks of all contributing nodes, acting like RAID 1

image

For our little project we will use two Raspberry Pis to create a GlusterFS Volume and then mount it into a Docker Container

image

We need to install glusterfs-server on the PIs; give the following command

$ sudo apt-get install glusterfs-server

It installed Gluster 3.5.2; we can check the version using gluster --version. Knowing the version is important, as we will need to install the same version in the Docker Container; newer versions don't talk to older version Gluster servers and vice versa

Once gluster is installed, probe the peers using gluster peer probe hostname; it's better to have the two PIs in the same subnet with friendly names added in the /etc/hosts files of each participating node. In my case I named the two nodes pi and pi2 and was able to do $ sudo gluster peer probe pi2 from pi and probe pi from pi2. Once the probing is done successfully, we can create the RAID 1 like replicating volume using gluster volume create. I issued the following command

$ sudo gluster volume create gv replica 2 transport tcp pi:/srv/gluster pi2:/srv/gluster force

  • /srv/gluster is the directory being used as a brick here; I created it on both nodes
  • I used /srv/gluster which is on the SD card's storage; ideally you should have USB drives mounted and use those; therefore I had to use force
  • I am using tcp as transport and as I have two nodes this is using replica 2, giving their names and brick paths accordingly

Once the volume is created (and started with gluster volume start gv), the two nodes keep the bricks in sync and we can mount the volume using the mount command. On PI I mounted this volume using mount -t glusterfs pi2:gv /mnt/gluster and on PI2 I mounted it using mount -t glusterfs pi:gv /mnt/gluster. Once mounted, we can read / write data to GlusterFS just like any file system. If you want to you can add fstab entries; but I mounted on both from the peer just to check things out
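
To recap the mount side as a small sketch; the fstab line and its options are an assumption for auto-mounting at boot

$ sudo mkdir -p /mnt/gluster
$ sudo mount -t glusterfs pi2:gv /mnt/gluster
$ echo "pi2:gv /mnt/gluster glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab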

Let's create a Docker Container where we will mount this Gluster Volume; here's the Dockerfile

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install software-properties-common python-software-properties
RUN apt-get -y install libpython2.7 libaio1 libibverbs1 liblvm2app2.2 librdmacm1 fuse
RUN apt-get -y install curl nano
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-common_3.5.2-4_amd64.deb > glusterfs-common_3.5.2-4_amd64.deb
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-client_3.5.2-4_amd64.deb > glusterfs-client_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-common_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-client_3.5.2-4_amd64.deb

  • Notice I have used the version of GlusterFS that's running on the PIs

If we are going to run the Docker Container in a development environment, it will most probably be behind NAT, and we will not be able to connect to our PIs straight away as the 3.5.2 version of Gluster doesn't allow requests from clients using non privileged ports. For this, edit /etc/glusterfs/glusterd.vol (at least on the server whose IP / name you are going to use when mounting) and add option rpc-auth-allow-insecure on. Also give the gluster volume set gv server.allow-insecure on command followed by a stop / start of the volume so that the client can communicate with the GlusterFS daemon and bricks using non privileged ports. Also make sure you don't use any authentication for the volume as it might not work from behind NAT
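
Put together, the server side changes look something like this; run them on the PI whose name / IP you will use when mounting (the glusterfs-server service name is as packaged on Raspbian, and restarting it is assumed to be needed for the glusterd.vol change to take effect)

$ sudo nano /etc/glusterfs/glusterd.vol      # add the line: option rpc-auth-allow-insecure on
$ sudo service glusterfs-server restart
$ sudo gluster volume set gv server.allow-insecure on
$ sudo gluster volume stop gv
$ sudo gluster volume start gv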

The second thing before running the Docker Container: the client uses fuse, so we need to expose the /dev/fuse device and run the container with the SYS_ADMIN capability; if the docker image is khurram/gluster:work then run it with something like

docker run --name gluster --cap-add SYS_ADMIN --device /dev/fuse --rm -it khurram/gluster:work

When you are in the Container; add pi and pi2 host entries into /etc/hosts, create a folder where you want to mount, say /gluster, and use the mount command to mount it: mount -t glusterfs pi2:gv /gluster
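
Inside the container that boils down to something like this; the IP addresses below are placeholders for your PIs' addresses

echo "192.168.1.11 pi" >> /etc/hosts
echo "192.168.1.12 pi2" >> /etc/hosts
mkdir /gluster
mount -t glusterfs pi2:gv /gluster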

  • As an exercise, can you customize the Dockerfile or create a docker-compose file that takes care of adding the hosts entries and mounting GlusterFS from the docker run parameters?
  • As an additional exercise, can you customize the Dockerfile or docker-compose file further so that we have Samba running and it exposes the mounted GlusterFS volume over Samba so we can access it from Windows and read/write data to it?
  • https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/Raspbian/jessie/ has more recent GlusterFS binaries that we can use on the PIs, updating our Dockerfile to the matching GlusterFS version accordingly
  • You can have one container that mounts the glusterfs and exposes the directory as a Docker volume; and then mount that Docker volume in another container (a Container running a Web Server or Database Server)

Happy Containering

Docker on Windows: Docker for Windows

Docker on Windows

If you are using Windows 10 x64 1511 (November Update) and have HyperV support in hardware / OS, you can try out the Public Beta of Docker for Windows; it has all the things you need; there is no need to download any binaries and keep them in the PATH, no need to set up Boot2Docker, no need to set up NAT or a DHCP Server, and no need of CIFS for mounting Windows folders into the containers. Installing Docker for Windows takes care of all these things; unlike Docker Toolbox that used VirtualBox, it uses HyperV for its MobyLinuxVM running the Docker Daemon, installs the Docker utilities adding them to the PATH (after its installation you should move previously downloaded binaries to some place not in the PATH) and supports mounting Windows folders into the Containers as well. In short; this is the way to go if you have the supported OS!

image

C:\Users\khurram>docker version
Client:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   91e29e8
Built:        Sat Jul  2 00:09:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   876f3a7
Built:        Tue Jul  5 02:20:13 2016
OS/Arch:      linux/amd64
Experimental: true

To use a Windows folder in the container; right click the Docker whale icon in the system tray and enable Shared Drives. Let's modify the docker-compose YML file we created in Dockerizing Mongo and Express for Docker for Windows and "up" our containers thereafter

image

  • It also adds a "docker" entry into Windows for the Linux VM and uses HyperV networking, so you can open up the exposed application at a friendly URL; in our case http://docker:3000

  • You can learn about the IP scheme it has configured from the Network tab of the same settings application

Resources


docker-compose

Dockerizing Node

When using Docker for a real world application, often multiple Containers are required, and to build and run them along with their Dockerfiles we need scripts, as realized in Dockerizing Mongo and Express. This becomes a hassle, and Docker has the docker-compose utility that solves exactly this. We can create a "Compose file" (docker-compose.yml file) which is a YAML file, a human readable data serialization format; we configure the application services and their requirements in this file and then using the tool we can create and start all the services from this "compose file". We define the container environment in a Dockerfile and how the containers relate to each other and run together in the compose file; then using docker-compose we can build / run / stop them all together in a single go.

Let's make a docker-compose.yml file for our Mongo / Express application; our application needs two data volumes, a docker volume for MongoDB data and the host directory where our Express JS application files are (mounted through CIFS). We need to declare the MongoDB data volume in the compose file. We need two services; one for Mongo and the other for Express (Node); we will define these with build entries along with dockerfile entries, as we are using alternate file names. We can define image names in there as well. For HelloExpress; we need to expose the ports and this container also "depends on" the mongo db; with this entry in the compose file, the tool will take care of running it first. We also need to define the links with a proper target name, as the Express JS application needs a known host name for the MongoDB container hard coded in the "connection string". If we don't define the target name, docker-compose names the container with its own scheme; we can define known names using container_name entries if we want to. Here's the docker-compose.yml file

version: '2'
volumes:
    mongo-data:
        driver: local
services:
    mongodb:
        build:
            context: .
            dockerfile: Dockerfile.mongodb
        image: khurram/mongo
        #container_name: mongodb
        volumes:
        - mongo-data:/data/db
    helloexpress:
        build:
            context: .
            dockerfile: Dockerfile.node
        image: khurram/node
        #container_name: helloexpress
        volumes:
        - /mnt/srcshare/HelloExpress:/app
        entrypoint: nodejs /app/bin/www
        ports:
        - "3000:3000"
        depends_on:
        - mongodb
        links:
        - mongodb:mongodb

Once the compose file is in place, we can use docker-compose up and it will build + run + attach the required volumes and services as defined. We can use the -d parameter with docker-compose up to detach

C:\khurram\src\HelloExpress>docker-compose.exe up -d
Creating network "helloexpress_default" with the default driver
Creating helloexpress_mongodb_1
Creating helloexpress_helloexpress_1

C:\khurram\src\HelloExpress>rem Test http://DockerVM:3000

C:\khurram\src\HelloExpress>docker-compose.exe down
Stopping helloexpress_helloexpress_1 ... done
Stopping helloexpress_mongodb_1 ... done
Removing helloexpress_helloexpress_1 ... done
Removing helloexpress_mongodb_1 ... done
Removing network helloexpress_default

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly to include the docker-compose.yml file; DockerBuild.bat and DockerRun.bat are no longer needed, but I am leaving them there as well so you can compare and see how docker-compose.yml is made from those two scripts!

Resources


Dockerizing Mongo and Express

Dockerizing Node

Now that we are familiar with Docker and how it helps us with isolation and compartmentalization, let's expand and try deploying a real world application. I will be using the application that we built for MongoDB and Mongoose; it's an Express JS / MongoDB application and we will deploy it across two Docker containers, one for MongoDB and the other for Express, in the spirit of Microservice Architecture. As per Wikipedia, Microservices are a more concrete and modern interpretation of service-oriented architectures (SOA) used to build distributed software systems. Like in SOA, services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal. Also, like in SOA, these services use technology agnostic protocols. Using a separate Container for each microservice, we get fine-grained control and can monitor and distribute components of our application at the microservice level.

For MongoDB, let's start an Ubuntu instance, install Mongo and try to run it; we will learn that it needs a /data/db directory

image

We can create that in the container, but as we know, when the container is removed it loses its data. It's recommended to use a Data Volume for such a requirement and we will mount one as /data/db. Let's create a Dockerfile for our MongoDB container

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb
http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org

EXPOSE 27017

ENTRYPOINT ["/usr/bin/mongod"]

Let's create a Dockerfile for Node JS as well; we will not be including the application code in the Node JS container, instead we will use a Data Volume for the application files. Note that Node is not being run as ENTRYPOINT or CMD; we will start it as a parameter to the container in the docker run command and pass the startup JS file as the parameter; this way we can reuse our Node JS container image for different applications, for scenarios like running a web service in its own container and the front end application in a separate container

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN apt-get install -y npm

To build the container images; give commands

docker build -t khurram/node -f Dockerfile.node .
docker build -t khurram/mongo -f Dockerfile.mongodb .

  • I have kept different names for the Dockerfiles of our containers; as these names are not the standard one, I am passing the file name using the -f argument; this is done so that I can have both files in one directory
  • It's better to make a BAT / SH script for the above commands (a sketch follows)
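
A minimal DockerBuild.sh wrapping the two commands could look like this; the repository mentioned at the end of this post ships a DockerBuild.bat as the Windows counterpart

#!/bin/sh
# build the Node and Mongo images from their respective Dockerfiles
docker build -t khurram/node -f Dockerfile.node .
docker build -t khurram/mongo -f Dockerfile.mongodb .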

Before running the two docker containers, we need two data volumes, one for Mongo and the other for the Node application. For the Node application we will use a host directory; in our case a directory in the Boot2Docker VM; we will use cifs-utils to mount the folder from the Windows HyperV Host sharing it on the network as discussed in Docker on Windows- Customized Boot2Docker ISO with CIFS; from there on it can act as a host directory in the Docker VM and we can use it for a data volume. Unfortunately we can't use this arrangement for Mongo as it expects certain features from the file system (for its data locking etc) and a directory mounted using cifs-utils doesn't have these features, therefore we will create a volume using docker and use it instead

docker volume create --name mongo-data
mongo-data

docker volume inspect mongo-data
[
    {
        "Name": "mongo-data",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mongo-data/_data",
        "Labels": {}
    }
]

To start the Mongo container issue this command

docker run -d -p 27017:27017 -v mongo-data:/data/db --name mongodb khurram/mongo

  • The above created mongo-data volume is passed using the -v argument
  • It's mounted as /data/db in the container as required by Mongo, which we learned by installing it in a test container
  • The Mongo port is exposed; we can test by connecting to the Docker VM from the development machine (a quick sketch follows)!
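
A quick way to test from the development machine is the mongo shell; a sketch, assuming the shell is installed locally and DockerVM resolves to (or is replaced by) the Docker VM's IP

mongo --host DockerVM --port 27017 --eval "printjson(db.runCommand({ ping: 1 }))"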

Docker has a Linking feature, using which we can link one or more containers to a particular container while starting it; doing so adds /etc/hosts entries as well as sets Environment Variables. It's important that the linked container is given a proper name; you will see that the /etc/hosts entry and environment variables all depend on it. Let's start the khurram/node instance linking the mongodb container that we have already started!

docker run -it -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node
root@7be354a7e084:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      mongodb 75e912d09a6c
172.17.0.3      7be354a7e084
root@7be354a7e084:/# set
BASH=/bin/bash
….
MONGODB_NAME=/helloexpress/mongodb
MONGODB_PORT=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP_ADDR=172.17.0.2
MONGODB_PORT_27017_TCP_PORT=27017
MONGODB_PORT_27017_TCP_PROTO=tcp

UID=0
_=/etc/hosts
root@7be354a7e084:/# cd /app/
root@7be354a7e084:/app# ls
DockerBuild.bat  DockerRun.bat       Dockerfile.node       HelloExpress.sln  bin     node_modules  package.json  routes
Dockerfile.mongodb  HelloExpress.njsproj  app.js            models  obj           public        views

  • Given it has added the /etc/hosts entry; we can simply access the mongodb server by that name in the connection string for the mongoose.connect() call
  • Note that the information about the mongodb container's exposed port is also available in the environment variables
  • Note that the cifs mounted "local" directory is mounted as a volume in the container and we can access its content accordingly

Once the data volumes are in place, container linking is understood, and app.js is updated accordingly for mongoose.connect(); let's clean up and start fresh instances of our containers

docker stop mongodb
docker stop helloexpress

docker rm mongodb
docker rm helloexpress

docker run -d -v mongo-data:/data/db --name mongodb khurram/mongo
docker run -d -p 3000:3000 -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node nodejs /app/bin/www

  • It's better to make a BAT / SH script for the above commands (a sketch follows)
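
A minimal DockerRun.sh along those lines; a sketch of the same stop / rm / run flow (the repository mentioned below also ships a DockerRun.bat)

#!/bin/sh
# remove any previous instances, then start fresh ones
docker stop mongodb helloexpress
docker rm mongodb helloexpress
docker run -d -v mongo-data:/data/db --name mongodb khurram/mongo
docker run -d -p 3000:3000 -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node nodejs /app/bin/www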

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly with the DockerBuild.bat, DockerRun.bat and Dockerfiles for Mongo and Node

Resources