
Visual C++ for Linux Development

Visual C++ for Linux Development is an extension for Visual Studio 2015 by Microsoft that lets us write C++ code in Visual Studio for Linux machines and devices. It connects to the machine or device over SSH and uses the machine or device's g++, gdb and gdbserver to provide a compile and debug experience from within Visual Studio. After installing the extension, it adds a Linux Connection Manager to Visual Studio's Options dialog through which we manage the SSH connections to Linux machines or devices (ARM is also supported). It also adds new project types; currently there are three: Blink (Raspberry), Console Application (Linux) and Empty Project (Linux). You can write C++ code using Unix headers and libraries. For IntelliSense / removing red squiggles, you will need to download the header files (using PuTTY's PSCP) to the development machine and add that folder in the project properties' VC++ Directories section
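A minimal sketch of pulling the headers down with PSCP; the remote paths and the local linux-headers folder name are just examples, adjust to whatever your target actually provides

pscp -r root@YourLinuxMachine:/usr/include .\linux-headers\usr\include
pscp -r root@YourLinuxMachine:/usr/local/include .\linux-headers\usr\local\include

Then point Include Directories (under VC++ Directories) at the linux-headers folder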

Linux Connection Manager

We can use this extension with a Docker Container as well; all we need is an image having an SSH server (OpenSSH), g++, gdb and gdbserver. I created this Dockerfile

FROM ubuntu:trusty
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install openssh-server
RUN apt-get -y install g++
RUN apt-get -y install gdb gdbserver
RUN apt-get -y install nano iputils-ping

RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

  • For some weird reason, the g++ installation was failing on the latest Ubuntu image; therefore I used ubuntu:trusty as the base image
  • We need to set a root password and configure OpenSSH to allow direct root SSH login
  • SSH port 22 is exposed; we can map it to a port on the Docker host machine
  • SSHD is started using CMD with the -D flag so detailed logs get created in case Visual Studio fails to connect to it and you need to troubleshoot

Once the image is built, you can run it with the following docker run command. I have also uploaded the image to Docker Hub, so you can use the command directly and it will download the prebuilt image for you automatically

docker run --name linuxtools -v /root/projects -p 2222:22 -d khurramaziz/linuxtools

  • We can't map the exposed port 22 to the host's port 22 as there is (usually) already an SSH server running on the host's 22; so we map it to 2222 instead
  • The -d option runs it in the background
  • Notice I have mounted /root/projects as a Docker Volume; this is where the extension uploads the project files, compiles them and places the built binaries. I have also named the container so that I can use --volumes-from when running other containers later to test the built binaries

Once the container is up and running we can SSH to it; if using PuTTY, use the -P flag to specify the port: putty -P 2222 YourDockerHost. If it's working fine, we can set up its connection in Visual Studio's Linux Connection Manager. When everything is in order, we can build our project; if we do a DEBUG build, our HelloLinux binary will be at /root/projects/HelloLinux/bin/x64/Debug/HelloLinux.out, which we can run from the SSH session
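For example, a minimal sketch of connecting with an OpenSSH client and running the built binary; the host name and port are the ones configured above

ssh -p 2222 root@YourDockerHost
/root/projects/HelloLinux/bin/x64/Debug/HelloLinux.out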

SSH

Given we have the binary on the volume, we can run other containers mounting that volume and run the ELF64 binary there, and it will run fine
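A hedged sketch using --volumes-from to reuse the linuxtools container's volume; ubuntu:trusty here is just an example of a glibc-based image that can run the binary

docker run --rm --volumes-from linuxtools ubuntu:trusty /root/projects/HelloLinux/bin/x64/Debug/HelloLinux.out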

Containers

If you try to run the compiled binary in a plain official busybox container, it will fail as it doesn't have the required C libraries. Either add --static (dash dash static) in the Linker settings (Project Properties) or use the busybox:ubuntu-14.04 Docker image (5.6 MB compared to 1 MB) which has all the C libs in place. Your Dockerfile will be something like this

FROM busybox:ubuntu-14.04
COPY HelloLinux/bin/x64/Release/HelloLinux.out /hello
CMD ["/hello"]


GlusterFS Volume as Samba Share

We made a Docker Container using a Dockerfile in the GlusterFS post that can mount a GlusterFS volume (running on Raspberry Pis); let's extend that Dockerfile and add a Samba server to expose the mounted directory as a Samba share so it can be accessed from Windows. For this we need to add these additional lines to the Dockerfile

RUN apt-get -y install samba

EXPOSE 138/udp
EXPOSE 139
EXPOSE 445
EXPOSE 445/udp

We are installing Samba and exposing the TCP / UDP ports that Samba uses; if we build and run this container, we need to publish these ports using the -p 138:138/udp -p 139:139 -p 445:445 -p 445:445/udp parameters in the docker run command. After running it, to expose the directory through Samba, we need to add the following lines at the end of /etc/samba/smb.conf

[data]
path = /data
read only = no

Samba uses its own password files; to add the root user into it, run smbpasswd -a root and finally restart the Samba daemon using service smbd restart. Now if we browse \\DOCKERMACHINE from Windows, we should see the data share and can access it using root and the entered password. These are a lot of manual steps after running the container; to solve this, let's create a setup.sh shell script that we will add into the container (through the Dockerfile); we will use environment variables as we can pass them in the docker run command. Our final docker run command will look like this

docker run --name glustersamba --cap-add SYS_ADMIN --device /dev/fuse --rm -e glusterip=Gluster-Server-IP -e glusterhost=Gluster-Server-FriendlyName -e glustervolume=Gluster-Volume-Name -p 138:138/udp -p 139:139 -p 445:445 -p 445:445/udp -it khurramaziz/gluster:3.5.2-samba

  • Notice the three environment variables, glusterip, glusterhost and glustervolume, that are passed using -e
  • Notice the Samba ports being published using -p
  • Notice that the SYS_ADMIN capability and the /dev/fuse device are added; they are required for the GlusterFS client / mounting
  • khurramaziz/gluster:3.5.2-samba exists on the Docker Registry; you can go ahead and run the above command and it will download the image layers (I uploaded the built image using docker push imagename:tag)

If you are not interested in how the image is made, you can skip the rest of the post; I have pushed this image to the Docker Hub Registry, so you can issue the above command and it will work!

Here's the setup.sh that uses the above three environment variables to mount the GlusterFS volume at /data and then exposes it through Samba

#!/bin/sh
smbpath="/etc/samba/smb.conf"
# resolve the Gluster server's friendly name using the passed-in environment variables
echo "$glusterip $glusterhost" >> /etc/hosts
mkdir /data
# mount the GlusterFS volume at /data
mount -t glusterfs "$glusterhost:$glustervolume" /data
# add root to Samba's own password file (prompts for the password)
smbpasswd -a root
# append the [data] share definition to smb.conf
echo "[data]" >> $smbpath
echo "path = /data" >> $smbpath
echo "read only = no" >> $smbpath
service smbd restart

And here's the Dockerfile that adds the above setup.sh and runs it on start up using the CMD directive

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install software-properties-common python-software-properties
RUN apt-get -y install libpython2.7 libaio1 libibverbs1 liblvm2app2.2 librdmacm1 fuse
RUN apt-get -y install curl nano
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-common_3.5.2-4_amd64.deb > glusterfs-common_3.5.2-4_amd64.deb
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-client_3.5.2-4_amd64.deb > glusterfs-client_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-common_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-client_3.5.2-4_amd64.deb

RUN apt-get -y install samba

EXPOSE 138/udp
EXPOSE 139
EXPOSE 445
EXPOSE 445/udp

ADD setup.sh /setup.sh
RUN chmod +x /setup.sh

CMD /setup.sh && /bin/bash

Ideally, if we are following a Microservices Architecture, we should have a separate container for the Samba server; the GlusterFS client container would act as a producer exposing the mounted GlusterFS volume, and the Samba server container would act as a consumer exposing that volume as a Samba share. Sadly this is not possible (or at least I don't know of any way) as the Docker volume that gets created will have the files that were there before we mounted the GlusterFS volume. When the GlusterFS volume is mounted in the producer container, the consumer container will continue to see the "before" files and directories and not what's in the GlusterFS volume

  • Docker has plugin support, and there are volume plugins with which we can create volumes that get stored according to the plugin / driver used. There also exist GlusterFS volume plugins that we can use; with them we will not require the GlusterFS client container; instead the host will mount the volume and such volumes can be used as Docker volumes in the containers

image

A proof of concept of producer / consumer implementation using Docker Volume

  • Notice the producer is Ubuntu and the consumer is CentOS
  • Notice that for the producer container's run command a name is defined, as it's required for the consumer container run command's --volumes-from option
  • Notice that for the producer container's volume only the target path is defined; Docker will create a volume automatically and map it to the defined path inside the container; this volume / directory gets stored outside Docker's Union File System and, given the name, can be used in other containers if they are run using --volumes-from (see the sketch after this list)
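A minimal sketch of such a producer / consumer pair; /shared and the tail command are just placeholders to keep the producer alive

docker run -d --name producer -v /shared ubuntu tail -f /dev/null
docker run --rm --volumes-from producer centos ls /shared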


GlusterFS

GlusterFS is a scale-out network-attached storage file system that has found applications in cloud computing, streaming media services, and content delivery networks. GlusterFS was developed originally by Gluster, Inc. and then by Red Hat, Inc., as a result of Red Hat acquiring Gluster in 2011, says Wikipedia. It's a distributed file system that we run on multiple hosts having "bricks" that host the data physically (on storage); the nodes communicate with each other (peers) and we can create a volume across these nodes with different strategies; replication is one of them: if chosen, data will get stored in the bricks of all contributing nodes, acting like RAID 1

image

For our little project we will use two Raspberry Pis to create a GlusterFS Volume and then mount it into Docker Container

image

We need to install glusterfs-server on the Pis; give the following command

$ sudo apt-get install glusterfs-server

It installed Gluster 3.5.2; we can check the version using gluster --version. Knowing the version is important, as we will need to install the same version in the Docker Container; newer versions don't talk to older version Gluster servers and vice versa

Once gluster is installed, probe the peers using gluster peer probe hostname; it's better to have the two Pis in the same subnet with friendly names added in the /etc/hosts files of each participating node. In my case I named the two nodes pi and pi2 and was able to do $ sudo gluster peer probe pi2 from pi and probe pi from pi2. Once the probing is done successfully, we can create the RAID 1-like replicating volume using gluster volume create. I issued the following command

$ sudo gluster volume create gv replica 2 transport tcp pi:/srv/gluster pi2:/srv/gluster force

  • /srv/gluster is the directory being used as the brick here; I created it on both nodes
  • I used /srv/gluster, which is on the SD card's storage; ideally you should have USB drives mounted and use those instead; because of this I had to use force
  • I am using tcp as the transport, and as I have two nodes this uses replica 2, giving their names and brick paths accordingly (the volume still has to be started before mounting; see the sketch below)
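A minimal sketch of starting the volume and checking it, run from either node

$ sudo gluster volume start gv
$ sudo gluster volume info gv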

Once the volume is created the two nodes keep the bricks in sync and we can mount the volume using the mount command. On pi I mounted this volume using mount -t glusterfs pi2:gv /mnt/gluster and on pi2 I mounted it using mount -t glusterfs pi:gv /mnt/gluster. Once mounted we can read / write data to GlusterFS just like any file system. If you want to, you can add fstab entries; but I mounted each from its peer just to check things out

Let's create a Docker Container where we will mount this Gluster volume; here's the Dockerfile

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install software-properties-common python-software-properties
RUN apt-get -y install libpython2.7 libaio1 libibverbs1 liblvm2app2.2 librdmacm1 fuse
RUN apt-get -y install curl nano
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-common_3.5.2-4_amd64.deb > glusterfs-common_3.5.2-4_amd64.deb
RUN curl -sSL https://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/Debian/jessie/apt/pool/main/g/glusterfs/glusterfs-client_3.5.2-4_amd64.deb > glusterfs-client_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-common_3.5.2-4_amd64.deb
RUN dpkg -i glusterfs-client_3.5.2-4_amd64.deb

  • Notice I have used the version of GlusterFS that's running on the PIs

If we are going to run the Docker Container in a development environment, it will most probably be behind NAT, and we will not be able to connect to our Pis straight away as the 3.5.2 version of Gluster doesn't allow requests from clients using non-privileged ports. For this, edit /etc/glusterfs/glusterd.vol (at least on the server IP that you are going to use when mounting) and add option rpc-auth-allow-insecure on. Also give the gluster volume set gv server.allow-insecure on command, followed by stopping and starting the volume, so that the client can communicate with the GlusterFS daemon and bricks using non-privileged ports. Also make sure you don't use any authentication for the volume, as it might not work from behind NAT
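A hedged sketch of those server-side steps on the Gluster node; gv is the volume created above and the service name assumes the Debian/Raspbian glusterfs-server package

# in /etc/glusterfs/glusterd.vol add:  option rpc-auth-allow-insecure on
$ sudo service glusterfs-server restart
$ sudo gluster volume set gv server.allow-insecure on
$ sudo gluster volume stop gv
$ sudo gluster volume start gv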

The second thing before running the Docker Container: the client uses FUSE, so we need to expose the /dev/fuse device and run the container with the SYS_ADMIN capability; if the docker image is khurram/gluster:work then run it with something like

docker run --name gluster --cap-add SYS_ADMIN --device /dev/fuse --rm -it khurram/gluster:work

When you are in the Container, add pi and pi2 host entries into /etc/hosts, create a folder where you want to mount, say /gluster, and use the mount command to mount it: mount -t glusterfs pi2:gv /gluster
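A minimal sketch of those in-container steps; the IP addresses are just examples for the two Pis

echo "192.168.1.10 pi" >> /etc/hosts
echo "192.168.1.11 pi2" >> /etc/hosts
mkdir /gluster
mount -t glusterfs pi2:gv /gluster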

  • As an exercise, can you customize the Dockerfile or create a docker-compose file that takes care of adding the hosts entries and mounting GlusterFS from the docker run parameters?
  • As an additional exercise, can you customize the Dockerfile or docker-compose file further so that Samba is running and exposes the mounted GlusterFS volume, so we can access it from Windows and read/write data to it?
  • https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/Raspbian/jessie/ has more recent GlusterFS binaries that we can use on the Pis, updating our Dockerfile's GlusterFS version to match accordingly
  • You can have one container that mounts GlusterFS and exposes the directory as a Docker volume, and then mount that Docker volume in another container (a container running a web server or database server)

Happy Containering

Docker on Windows: Docker for Windows

Docker on Windows

If you are using Windows 10 x64 1511 (November Update) and have HyperV support in hardware / OS, you can try out the Public Beta of Docker for Windows; it has all the things you need; there is no need to download any binaries and keep them in PATH, no need to set up Boot2Docker, no need to set up NAT or a DHCP server, and no need of CIFS for mounting Windows folders into the containers. Installing Docker for Windows takes care of all these things; unlike Docker Toolbox, which used VirtualBox, it uses HyperV for its MobyLinuxVM running the Docker daemon, installs the Docker utilities adding them to the PATH (you should move previously downloaded binaries to some place not in the PATH after its installation) and has support for mounting Windows folders into the Containers as well. In short, this is the way to go if you have the supported OS!

image

C:\Users\khurram>docker version
Client:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   91e29e8
Built:        Sat Jul  2 00:09:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   876f3a7
Built:        Tue Jul  5 02:20:13 2016
OS/Arch:      linux/amd64
Experimental: true

To use a Windows folder in a container, right click the Docker whale icon in the system tray and enable Shared Drives. Let's modify the docker-compose YML file we created for Dockerizing Mongo and Express for Docker for Windows and "up" our containers thereafter
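A quick hedged check that the shared drive actually works from a container; the drive letter and path are just examples

docker run --rm -v C:/Users/khurram:/data alpine ls /data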

image

  • It also adds a "docker" host entry into Windows for the Linux VM and uses HyperV networking, and you can open up the exposed application at a friendly URL; in our case http://docker:3000

  • You can learn about the IP scheme it has configured from the same settings application's Network tab



docker-compose

Dockerizing Node

When using Docker for a real world application, often multiple Containers are required, and to build and run them along with their Dockerfiles we need scripts for building and running them, as realized in Dockerizing Mongo and Express. This becomes a hassle, and Docker has the docker-compose utility that solves exactly this. We can create a "Compose file" (docker-compose.yml), which is a YAML file, a human readable data serialization format; we configure the application services and their requirements in this file and then using the tool we can create and start all the services from this "compose file". We define the container environment in a Dockerfile, define how the containers relate to each other and run together in the compose file, and then using docker-compose we can build / run / stop them together in a single go.

Let's make a docker-compose.yml file for our Mongo / Express application; our application needs two data volumes, a Docker volume for MongoDB data and the host directory where our Express JS application files are (mounted through CIFS). We need to declare the MongoDB data volume in the compose file. We need two services, one for Mongo and the other for Express (Node); we will define these with build entries along with dockerfile entries, as we are using alternate file names. We can define image names in there as well. For HelloExpress, we need to expose the ports, and this container also "depends on" the mongodb service; with this entry in the compose file, the tool will take care to run it first. We also need to define the links with the proper target name, as the Express JS application needs a known host name for the MongoDB container hard coded in the "connection string". If we don't define the target name, docker-compose names the container with its own scheme; we can define known names using container_name entries if we want to. Here's the docker-compose.yml file

version: '2'
volumes:
    mongo-data:
        driver: local
services:
    mongodb:
        build:
            context: .
            dockerfile: Dockerfile.mongodb
        image: khurram/mongo
        #container_name: mongodb
        volumes:
        - mongo-data:/data/db
    helloexpress:
        build:
            context: .
            dockerfile: Dockerfile.node
        image: khurram/node
        #container_name: helloexpress
        volumes:
        - /mnt/srcshare/HelloExpress:/app
        entrypoint: nodejs /app/bin/www
        ports:
        - "3000:3000"
        depends_on:
        - mongodb
        links:
        - mongodb:mongodb

Once the compose file is in place, we can use docker-compose up and it will build + run + attach the required volumes and services as defined. We can use the -d parameter with docker-compose up to detach

C:\khurram\src\HelloExpress>docker-compose.exe up -d
Creating network "helloexpress_default" with the default driver
Creating helloexpress_mongodb_1
Creating helloexpress_helloexpress_1

C:\khurram\src\HelloExpress>rem Test http://DockerVM:3000

C:\khurram\src\HelloExpress>docker-compose.exe down
Stopping helloexpress_helloexpress_1 ... done
Stopping helloexpress_mongodb_1 ... done
Removing helloexpress_helloexpress_1 ... done
Removing helloexpress_mongodb_1 ... done
Removing network helloexpress_default

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly with the docker-compose.yml file; DockerBuild.bat and DockerRun.bat are no longer needed, but I am leaving them there as well so you can compare and see how docker-compose.yml was made from those two scripts!



Dockerizing Mongo and Express

Dockerizing Node

Now that we are familiar with Docker and how it helps us with isolation and compartmentalization, let's expand and try deploying a real world application. I will be using the application that we built for MongoDB and Mongoose; it's an Express JS / MongoDB application and we will try deploying it across two Docker containers, one for MongoDB and the other for Express, in the spirit of Microservice Architecture. As per Wikipedia, microservices are a more concrete and modern interpretation of service-oriented architectures (SOA) used to build distributed software systems. Like in SOA, services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal. Also, like in SOA, these services use technology agnostic protocols. Using a separate Container for each microservice, we get fine control and can monitor and distribute components of our application at the microservice level.

For MongoDB, let's start an Ubuntu instance, install Mongo and try to run it; we will learn that it needs the /data/db directory

image

We can create that directory in the container, but as we know, when the container is removed the data is lost with it. It's recommended to use a Data Volume for such a requirement, and we will mount one as /data/db. Let's create a Dockerfile for our MongoDB container

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb
http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org

EXPOSE 27017

ENTRYPOINT ["/usr/bin/mongod"]

Let's create a Dockerfile for Node JS as well; we will not include the application code in the Node JS container; instead we will use a Data Volume for the application files. Note that Node is not run as ENTRYPOINT or CMD; we will start it by passing the command and the start-up JS file as parameters to the container in the docker run command; this way we can reuse our Node JS container image for different applications, for scenarios like running a web service in its own container and the front end application in a separate container (see the sketch after the Dockerfile)

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN apt-get install -y npm
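For instance, once this image is built (as khurram/node below), the same image could be reused for two different apps by passing the start-up file at run time; the paths here are just examples

docker run -d -v /mnt/srcshare/HelloExpress:/app khurram/node nodejs /app/bin/www
docker run -d -v /mnt/srcshare/OtherApp:/app khurram/node nodejs /app/server.js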

To build the container images; give commands

docker build -t khurram/node -f Dockerfile.node .
docker build -t khurram/mongo -f Dockerfile.mongodb .

  • I have kept different names for the Dockerfiles of our containers; as these names are not standard I am passing the file name using the -f argument; it's done so that I can have both files in one directory
  • It's better to make a BAT / SH script for the above commands

Before running the two docker containers, we need two data volumes, one for Mongo and the other for the Node application. For the Node application we will use a host directory; in our case a directory in the Boot2Docker VM; we will use cifs-utils to mount the folder from the Windows HyperV host sharing it on the network, as discussed in Docker on Windows - Customized Boot2Docker ISO with CIFS; from there on it can act as a host directory in the Docker VM and we can use it for the data volume. Unfortunately we can't use this arrangement for Mongo as it expects certain features from the file system (for its data locking etc.) and a directory mounted using cifs-utils doesn't have these features; therefore we will create a volume using docker and use that instead

docker volume create --name mongo-data
mongo-data

docker volume inspect mongo-data
[
    {
        "Name": "mongo-data",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mongo-data/_data",
        "Labels": {}
    }
]

To start the Mongo container issue this command

docker run -d -p 27017:27017 -v mongo-data:/data/db --name mongodb khurram/mongo

  • The above created mongo-data volume is passed using the -v argument
  • It's mounted as /data/db in the container, as required by Mongo; we learned this by installing it in a test container
  • The Mongo port is exposed; we can test by connecting to the Docker VM from the development machine, as sketched below
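A quick hedged check from the development machine, assuming the mongo shell is installed locally and the Docker VM has the IP shown later by docker-machine ls

mongo 192.168.10.13:27017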

Docker has a Linking feature, with which we can link one or more containers to a particular container while starting it; doing so adds an /etc/hosts entry as well as sets Environment Variables. It's important that the linked container is given a proper name; you will see that the /etc/hosts entry and environment variables all depend on it. Let's start the khurram/node instance linking the mongodb container that we have already started!

docker run -it -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node
root@7be354a7e084:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      mongodb 75e912d09a6c
172.17.0.3      7be354a7e084
root@7be354a7e084:/# set
BASH=/bin/bash
….
MONGODB_NAME=/helloexpress/mongodb
MONGODB_PORT=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP_ADDR=172.17.0.2
MONGODB_PORT_27017_TCP_PORT=27017
MONGODB_PORT_27017_TCP_PROTO=tcp

UID=0
_=/etc/hosts
root@7be354a7e084:/# cd /app/
root@7be354a7e084:/app# ls
DockerBuild.bat  DockerRun.bat       Dockerfile.node       HelloExpress.sln  bin     node_modules  package.json  routes
Dockerfile.mongodb  HelloExpress.njsproj  app.js            models  obj           public        views

  • Given it has added the /etc/hosts entry, we can simply access the mongodb server by that name in the connection string for the mongoose.connect() call
  • Note that the information about mongodb's exposed port is also available in the environment variables
  • Note that the cifs-mounted "local" directory is mounted as a volume in the container and we can access its content accordingly

Once the data volumes are in place, container linking is understood, and app.js is updated accordingly for mongoose.connect(), let's clean up and start fresh instances of our containers

docker stop mongodb
docker stop helloexpress

docker rm mongodb
docker rm helloexpress

docker run -d -v mongo-data:/data/db --name mongodb khurram/mongo
docker run -d -p 3000:3000 -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node nodejs /app/bin/www

  • It's better to make a BAT / SH script for the above commands

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly with the DockerBuild.bat, DockerRun.bat and Dockerfiles for Mongo and Node


Docker on Windows: Customized Boot2Docker ISO with CIFS

Docker on Windows

When using Docker in a Linux Virtual Machine on Windows (or Mac), especially in a development environment, you are definitely going to need "some way" to access data on the host OS (source code / data of your application). VirtualBox has the ability to expose the user home folder to the VMs, and when you create a Boot2Docker VM using docker-machine it mounts the user home folder that you can access; but when using the HyperV driver, sadly this is not the case as HyperV is a bit more restrictive. The simplest way is to use Windows Shares; if you have set up a NAT switch, making it "Private" and sharing the required folder, you can access it from the Linux / Boot2Docker VM. You will need to install cifs-utils

As per Wikipedia, Server Message Block (SMB), one version of which was also known as Common Internet File System (CIFS), operates as an application-layer network protocol mainly used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network! Boot2Docker is based on Tiny Core Linux, which has a notion of extensions that exist as tcz files and are loaded using tce-load. For cifs-utils on Boot2Docker, we need to issue the following commands!

wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
tce-load -i cifs-utils.tcz

Once installed we can mount the shared folder from the HyperV host machine using mount; say for \\192.168.10.1\src we will use the following commands

sudo mkdir /mnt/srcshare
sudo mount -t cifs //192.168.10.1/src /mnt/srcshare -o user=khurram,pass=password

Any extension we install on Tiny Core Linux gets lost across reboots; and given CIFS is often needed in a development environment (especially if using HyperV as the virtualization platform), it's better to create a Docker VM using a "customized Boot2Docker ISO". Interestingly, we can create a Docker image to build such a customized Boot2Docker ISO. Create a Dockerfile with this content

FROM boot2docker/boot2docker

#wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
#tce-load -i cifs-utils.tcz

RUN echo "\nBoot2Docker with CIFS\n" >> $ROOTFS/etc/motd
RUN curl -L -o /tmp/cifs-utils.tcz $TCL_REPO_BASE/tcz/cifs-utils.tcz && \
unsquashfs -f -d $ROOTFS /tmp/cifs-utils.tcz && \
rm -rf /tmp/cifs-utils.tcz
RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]

To create an ISO; give these commands

docker build -t khurram/boot2docker:cifs -f YourAboveDockerFile .
docker run --rm khurram/boot2docker:cifs > boot2docker.cifs.iso

  • khurram/boot2docker:cifs is the tag name for Docker Image
  • docker build can take considerable time, given it makes a 2 GB+ image

And then to create a Docker VM using this customized ISO; use docker-machine

docker-machine create --driver hyperv --hyperv-virtual-switch NAT --hyperv-boot2docker-url boot2docker.cifs.iso Cifs

  • NAT is the name of HyperV Virtual Switch
  • Cifs is the name of HyperV VM

image

  • Note the presence of the MOTD we added in the VM; made with the customized Boot2Docker ISO

You can now easily mount the network shares in the Boot2Docker VM and then mount that host directory as a data volume in the docker container using docker run's -v flag


Docker on Windows: Windows Containers

Docker on Windows

Windows Containers are coming to the next versions of the server and client OSes; Windows Server Containers will have Linux-like isolation through namespaces and processes. Hyper-V Containers use a lightweight virtual machine, and this can be tried on Windows 10 Insider Builds. You need build 14352 or later.

There is a step by step guide available at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_10 and following it you can have the Hyper-V Containers running on the Windows 10

Here’s the output of some Docker commands:

PS C:\WINDOWS\system32> docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
microsoft/sample-dotnet   latest              28da49c3bff4        6 days ago          918.3 MB
nanoserver                10.0.14300.1016     3f5112ddd185        4 weeks ago         810.2 MB
nanoserver                latest              3f5112ddd185        4 weeks ago         810.2 MB
PS C:\WINDOWS\system32> docker ps -a
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS                      PORTS               NAMES
187e8f0bade3        microsoft/sample-dotnet   "dotnet dotnetbot.dll"   12 minutes ago      Exited (0) 11 minutes ago                       sad_northcutt

PS C:\WINDOWS\system32> docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 2
Server Version: 1.12.0-dev
Storage Driver: Windows filter storage driver
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: transparent nat null
Kernel Version: 10.0 14361 (14361.0.amd64fre.rs1_release.160603-1700)
Operating System: Windows 10 Pro Insider Preview
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 15.94 GiB
Name: ENVY
ID: ****************************************
Docker Root Dir: C:\ProgramData\docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8



Dockerfile

Dockerizing Node

Docker can build images automatically by reading the instructions from a Dockerfile. It's a text file that contains the commands describing how to assemble the required image. This can be used as a replacement for manually creating an image from scratch, installing the required software etc. and then exporting and loading it someplace else; the technique we discussed in the first Docker post. We can simply hand over the Dockerfile instead. Let's create a Node Container using a Dockerfile for that simple Hello World thing! Create a Dockerfile and punch in the following

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN apt-get install -y npm

ADD hello.js /app/hello.js

EXPOSE 3000

WORKDIR /app
CMD ["nodejs", "hello.js"]

  • Using FROM, we are using the ubuntu base image; there are many to choose from at Docker Hub / Registry
  • Using RUN, we are giving the commands that need to run to set up the required things in the Container
  • Using ADD, we are adding the application file(s) into the Container; we use ADD and COPY for this
  • Using EXPOSE, we are telling which ports will get exposed; when the container is run using -P, it will expose this port and map it to some random available port on the Docker Machine (see the sketch after this list)
  • WORKDIR sets the directory for subsequent RUN, CMD, ADD/COPY and ENTRYPOINT etc.
  • Using CMD, we are running the nodejs command to run our application
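A hedged sketch of -P in action once the image is built (as khurram/node:hello below); docker port shows which host port got picked

docker run -d -P khurram/node:hello
docker port <container-id> 3000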

Once the Dockerfile is in place, we can "compile" it and build the image using docker build

>docker build -t khurram/node:hello .

  • Using -t we are specifying the tag name of the image that will get created
  • The last dot is the context, the directory where docker build will run; it will look for the Dockerfile there (and some other files if we create them, like .dockerignore) and run/compile it from the specified context

image

After a while our image will get created; we can check it using docker images and run it using docker run

C:\khurram\src\Staging>docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
khurram/node        hello               b35d15d98edb        2 minutes ago       460 MB
microsoft/dotnet    latest              098162c455c7        11 days ago         576 MB
ubuntu              latest              2fa927b5cdd3        2 weeks ago         122 MB

C:\khurram\src\Staging>docker run -d -p 3000:3000 khurram/node:hello
ecebef4649899b5e46eac42aeedf78372998e00b7a37376cda71c53e6d400148

C:\khurram\src\Staging>docker-machine ls
NAME          ACTIVE   DRIVER   STATE     URL                        SWARM   DOCKER    ERRORS
Boot2Docker   *        hyperv   Running   tcp://192.168.10.13:2376           v1.11.2

C:\khurram\src\Staging>curl http://192.168.10.13:3000
Hello World from Node in Container
C:\khurram\src\Staging>docker ps -a
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                    NAMES
ecebef464989        khurram/node:hello   "nodejs hello.js"   3 minutes ago       Up 3 minutes        0.0.0.0:3000->3000/tcp   hopeful_leakey

Tips and Hacks

  • Just like HTML, the best way to learn Dockerfile tricks is to read others'; for instance Node's official Dockerfile; you will learn that instead of the ubuntu image they are using the buildpack-deps:jessie base image, which is leaner and results in a better Container
  • Having RUN commands on separate lines results in better caching; the layers that get created can be reused across different images in a better way; for instance having apt-get update on its own line, as the first line, will result in its own layer, and if we create another image for something else, say MongoDB, it will get reused
  • Having meaningful tags for the images is useful for determining what's what in the long run
  • There exists CURL for Windows; you can download it and place it in some folder which is in PATH and use it similar to how you use it on Linux
  • You can get a prebuilt "docker.exe" (Docker CLI) on Windows three ways: through Chocolatey, through Docker Toolbox or from Docker Toolbox's repository. Docker Toolbox uses Docker to build itself; from Toolbox's Windows Dockerfile you can find out where the precompiled docker binaries are; look for RUN curl lines with -o dockerbins.zip; you can make a URL and, using CURL for Windows, easily download that zip file and find the latest docker.exe in it
  • As we are using a Boot2Docker VM for Docker, running the container and exposing its port exposes it at the VM level; if we want to expose it further at the Windows host level, we need to forward the VM's port; topic of the next post maybe!

 



Docker on Windows: HyperV, NAT and DHCP Server

Docker on Windows

In the first part, Docker on Windows, we created an Internal Switch in HyperV and shared the external interface so that our Docker VM gets a fixed known IP as well as internet connectivity. This arrangement might not work all the time; Internet Connection Sharing (ICS) tends to assign IPs of its own choice, and if we want to switch from Wifi to Ethernet for internet connectivity (laptop scenario) it becomes messy. If you are using Windows 10 / 2016 HyperV, we can avoid the ICS setup and instead use the newly introduced HyperV interface type NAT. This allows us to have an internal IP subnet of our choice for our VMs (the Docker VM), and traffic from VMs connected to this interface will get NATed so the VMs will have internet connectivity. We can expose ports / services running on the VMs externally as well. Open up an Administrative PowerShell and execute the following commands

> New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress 192.168.10.0/24
> New-NetNat -Name "NAT" -InternalIPInterfaceAddressPrefix "192.168.10.0/24"

image

  • “NAT” is the name of the switch
  • 192.168.10.0/24 is the subnet of our choice; it will automatically give the 192.168.10.1 IP to the interface, which we can use as a gateway for VMs connected to the NAT switch

The NAT switch will appear as "Internal" in HyperV's management UI

image

We used Boot2Docker for setting up the VM for Docker; it needs a DHCP server on the internal network; sadly HyperV networking doesn't have such an arrangement out of the box. If your host OS is a server you can set up DHCP services, but if you are using a client OS, i.e. Windows 10, you will need either a separate VM acting as a DHCP server (Linux Core or something like that) or some third party lightweight DHCP server application like http://dhcpserver.de that you can run on the host OS

  • dhcpserver.de has dhcpwiz.exe, a wizard that lets you create dhcpsrv.ini, and dhcpsrv.exe that you can run as a system tray application or as a Windows Service
  • Don't forget to add the Firewall Rule that the wizard lets you create in the last step
  • You can add a static IP binding to a MAC address in the ini file, like shown below

image

With this arrangement in place, you can have the known static IP of your choice and Boot2Docker will get it from the DHCP server. You might need to regenerate the certificates once this new setup is in place


Docker on Windows

Docker on Windows

Setting up Docker on Windows is slightly different, as Docker needs a Linux kernel and expects certain namespaces for its working. Therefore on Windows we need to set up a Virtual Machine (VM) as a Docker host. You can set up any Docker compatible Linux in a VM; boot2docker is a small Linux OS made especially for this purpose. The official way is to use Docker Toolbox; it comes with Docker Engine, Compose, Machine and Kitematic. There is a step by step guide available. It installs VirtualBox and sets up a boot2docker VM in it.

Docker /w HyperV

I wanted to use HyperV, as I am already using it for other VMs. If you want to use Docker with HyperV, you only need Machine (docker-machine); it's a Command Line Interface (CLI) to manage Docker VMs. It lets us create Docker hosts on our computers, at cloud providers or on remote servers in data centers. It accomplishes this by having a notion of "drivers", and HyperV is a supported driver. Get the latest docker-machine binary from its GitHub repository. At the time of this writing it's 0.7; I downloaded the x86_64 version and kept it somewhere that was already in my PATH so I can call it directly from anywhere!

Static Ip and Internet Connectivity for Docker VM

The VM for Docker needs to have a static IP; docker-machine will generate the certificates for authentication and they are bound to the IP; if the Docker VM's IP gets changed, we will have to regenerate the certificates every time and it becomes tedious. The Docker VM also needs internet connectivity so it can connect to Docker Hub / Registry to download images on demand. In HyperV, if we have a DHCP server available (Wifi router scenario) we can use an "External" interface and have the DHCP server assign a static IP bound to the VM's MAC address, or we can have an internal interface in HyperV and share the internet connection; doing so you will get a private IP on the VM and it will have internet connectivity automatically.

image

Boot2Docker VM

Once our HyperV switch is ready, we can give the docker-machine create command and it will download the latest boot2docker.iso and configure a VM all in one go!

C:\Users\khurram>docker-machine create --driver hyperv --hyperv-virtual-switch Docker Boot2Docker
Creating CA: C:\Users\khurram\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\khurram\.docker\machine\certs\cert.pem
Running pre-create checks...
(Boot2Docker) No default Boot2Docker ISO found locally, downloading the latest release...
(Boot2Docker) Latest release for github.com/boot2docker/boot2docker is v1.11.2
(Boot2Docker) Downloading C:\Users\khurram\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.11.2/boot2docker.iso...
(Boot2Docker) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(Boot2Docker) Copying C:\Users\khurram\.docker\machine\cache\boot2docker.iso to C:\Users\khurram\.docker\machine\machines\Boot2Docker\boot2docker.iso...
(Boot2Docker) Creating SSH key...
(Boot2Docker) Creating VM...
(Boot2Docker) Using switch "Docker"
(Boot2Docker) Creating VHD
(Boot2Docker) Starting VM...
(Boot2Docker) Waiting for host to start...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env Boot2Docker

Once our Docker VM is running; we can simply SSH into it and run the Container; I am going to run the microsoft/dotnet

C:\Users\khurram>docker-machine ssh Boot2Docker
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
_                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.11.2, build HEAD : a6645c3 - Wed Jun  1 22:59:51 UTC 2016
Docker version 1.11.2, build b9f10c9
docker@Boot2Docker:~$ docker run -it microsoft/dotnet:latest
Unable to find image 'microsoft/dotnet:latest' locally
latest: Pulling from microsoft/dotnet
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
7004cfc6e122: Pull complete
5f37c8a7cfbd: Pull complete
a85114b33970: Pull complete
62c4b050934f: Pull complete
Digest: sha256:7d93320d8be879967149b59ceed280bca70cbdf358a2a990467ca502f0e1a4be
Status: Downloaded newer image for microsoft/dotnet:latest
root@439c959eaa28:/# mkdir hello_world
root@439c959eaa28:/# cd hello_world/
root@439c959eaa28:/hello_world# dotnet new
Created new C# project in /hello_world.
root@439c959eaa28:/hello_world# dotnet restore

And then after some time…

Installed:
    113 package(s) to /hello_world/project.json
root@439c959eaa28:/hello_world# dotnet run
Project hello_world (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling hello_world for .NETCoreApp,Version=v1.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:03.7668485


Hello World!
root@439c959eaa28:/hello_world# cat /etc/issue
Debian GNU/Linux 8 \n \l

root@439c959eaa28:/hello_world# exit
docker@Boot2Docker:~$
C:\Users\khurram>docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
microsoft/dotnet    latest              098162c455c7        5 hours ago         576 MB

C:\Users\khurram>docker ps -a
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS                      PORTS               NAMES
439c959eaa28        microsoft/dotnet:latest   "/bin/bash"         2 hours ago         Exited (0) 38 seconds ago                       pedantic_chandrasekhar


Staging Node Application on Windows

Staging Node Application

We can run a Node application on Windows with IIS; there exists IISNode that can be used to do exactly this. Simply install Node and IISNode on the server

Create a folder on the file system and an IIS Application Pool for the Node Application. Give IIS APPPOOL\Pool-Name user full access to the folder!

Setup a web site or a virtual folder for our Node application choosing the newly created Application Pool

Create a simple hello.js, making sure that the http server is started at process.env.PORT and not some hard coded value. IISNode will set the PORT environment variable and by default it uses Named Pipes. Create a web.config adding the iisnode handler in its configuration/system.webServer/handlers to handle %web%/hello.js requests

We can access our Hello World Node application at http://web-path/hello.js; IIS will automatically spin up node.exe, and we can even change the application file and IISNode picks up the changes and recycles the node.exe processes. No special arrangements like GIT hooks / PM2 restarts are required.

http://web-path/hello.js is not a good looking URL; we would like to have simply http://web-path and our hello.js should respond. For this we can use the URL Rewrite extension; once installed, simply add a configuration/system.webServer/rewrite section in the web.config to rewrite all /* requests to hello.js and we will have the desired result. With this arrangement in place we can now run Express.js apps easily!



Running Node Application in Docker Container on Raspberry Pi

Dockerizing Node

Let's run a Node application in Docker on Raspberry Pi; for the proof of concept, I will be using a simple hello world app and the GIT/SSH setup we made in Staging Node Application on Raspberry Pi. The Docker way of running the application is that we have our "data" and "application" files outside of the container, so that the container remains completely disposable. When running the container, we can mount a directory from the Host OS; using this feature we can have our data and application files on the Host OS while they are being used from the Container; something like this:

image

We can continue to have the GIT / NGINX arrangements that we made in Staging Node Application on Raspberry Pi; but now we can run Node and MongoDB (and others) from Containers. We already made the Node Docker image in Docker on Raspberry Pi; all we need is to run it so that we mount the /home/pi/hello directory into the Node Container and run Node in the Container. Doing so we will have the Node server at the container's port 3000; we expose this port to the Host OS's port 3000 so that NGINX forwards the request to the Host OS's port 3000 when it receives any request at the Host OS's http://ip/node endpoint

pi@raspberrypi:~ $ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
khurram/pi           node                6af338545368        5 hours ago         159 MB
khurram/pi           nano                99f0053b387e        6 hours ago         105.6 MB
resin/rpi-raspbian   jessie              80a737f1a654        7 days ago          80.01 MB
pi@raspberrypi:~ $ cd hello
pi@raspberrypi:~/hello $ ls
hello.js
pi@raspberrypi:~/hello $ docker run -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node
root@381ece8ec01a:/# cd /hello
root@381ece8ec01a:/hello# ls
hello.js
root@381ece8ec01a:/hello# nodejs hello.js
Server running at port 3000

  • -p 3000:3000 is to expose the container's port 3000 and map it to the Host OS's port 3000; the port in the container where the node application will run and the host OS port where nginx is expected to forward the requests
  • -v localpath:remotepath is to mount the Host OS's localpath directory as remotepath in the container
  • -it is for interactive terminal
  • khurram/pi:node is the Node image we created

Once the container is running and we are on its terminal, we can start our node app; we have to leave the terminal running so the Node server continues to run; from another terminal we can issue docker ps to get the list of all the running containers

pi@raspberrypi:~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
381ece8ec01a        khurram/pi:node     "/bin/bash"         2 minutes ago       Up 2 minutes        0.0.0.0:3000->3000/tcp   goofy_khorana

  • Note; how the ports are mapped
  • Note the NAMES column; Docker has named our container “dynamically”

We can try the http://ip:3000 and http://ip/node URLs and our node application should be running there; using the Container Name or Container ID we can stop it. We can issue docker ps -a to list all the containers, including those that are stopped

pi@raspberrypi:~ $ docker stop goofy_khorana
goofy_khorana
pi@raspberrypi:~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                       PORTS               NAMES
381ece8ec01a        khurram/pi:node             "/bin/bash"              9 minutes ago       Exited (130) 9 seconds ago                       goofy_khorana
aece7089082d        khurram/pi:node             "-p 3000:3000 -v /hom"   10 minutes ago      Created                                          determined_colden
e5e7005489a2        khurram/pi:nano             "/bin/bash"              6 hours ago         Exited (0) 5 hours ago                           grave_pasteur
5b62a2d14818        resin/rpi-raspbian:jessie   "/bin/bash"              6 hours ago         Exited (0) 6 hours ago                           tiny_feynman

As you can see, our Containers are also getting stored on the Host OS; think of it like the working directory in source control: the server has the code images that we commit and the working directory has the currently checked-out copy of the source code; similarly docker images are the images of containers that we committed and containers are the running (or stopped) copies. They also eat up disk space and we should remove the unwanted ones; keeping an eye on STATUS we can learn which ones we are not using anymore and can remove them using docker rm

pi@raspberrypi:~ $ docker rm goofy_khorana
goofy_khorana
pi@raspberrypi:~ $ docker rm aece7089082d
aece7089082d

We don't always have to get an interactive shell on starting a container; if we know what command to run when the container is running, we can start our container in the background, giving the command to run as a parameter. Let's create a new container for our Node application (as we have deleted the previously created one) from the Docker image; something like this

pi@raspberrypi:~ $ docker run -d -p 3000:3000 -v /home/pi/hello:/hello khurram/pi:node nodejs /hello/hello.js
49b347531127cc1d6c07f9b266e9e146afa0c4214c3c13514b7a851a444c525e
pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
49b347531127        khurram/pi:node     "nodejs /hello/hello."   9 seconds ago       Up 3 seconds        0.0.0.0:3000->3000/tcp   stupefied_wing
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

Restarting Container

Now if we reboot the Raspberry Pi and give docker ps -a when it comes back, you will notice that our Container is not running anymore

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                            PORTS               NAMES
49b347531127        khurram/pi:node     "nodejs /hello/hello."   2 minutes ago       Exited (143) About a minute ago                       stupefied_wing

This can be taken care of using --restart=always as a parameter to docker run; with this, even if our container exits unexpectedly, Docker will restart it, and will also start it when the machine boots (when the Docker Daemon gets started)

pi@raspberrypi:~ $ docker run --restart=always -d -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node nodejs /hello/hello.js
575b6cf68407fb08bf0fae895ea1170f5cbc02fba596f903c1901e17aa859747
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container
pi@raspberrypi:~ $ sudo shutdown -r now

Using docker ps we can see that the second container, which we started with --restart=always, is running after the boot!

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                       PORTS                    NAMES
575b6cf68407        khurram/pi:node     "nodejs /hello/hello."   About a minute ago   Up 4 seconds                 0.0.0.0:3000->3000/tcp   serene_mirz
49b347531127        khurram/pi:node     "nodejs /hello/hello."   6 minutes ago        Exited (143) 5 minutes ago                            stupefied_w
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

We can delete the previous container using docker rm, and if we want to protect the Container ports from being exposed on the Host, we can use iptables!

Restarting Container on changing application files

We know that we need to restart the node process when application files are changed. In this case we can simply restart the Docker Container; it takes almost the same time. This can be done using the docker restart command, but we need to "know" the container name at runtime so that we can use it in our post-receive GIT script, i.e. when new code is "pushed" the GIT hook can restart the container. We can have a static known name for our container if we run the container with the --name parameter

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
575b6cf68407        khurram/pi:node     "nodejs /hello/hello."   3 hours ago         Up 3 hours          0.0.0.0:3000->3000/tcp   serene_mirzakhani
pi@raspberrypi:~ $ docker stop 575b6cf68407
575b6cf68407
pi@raspberrypi:~ $ docker rm 575b6cf68407
575b6cf68407
pi@raspberrypi:~ $ docker run --restart=always --name hello -d -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node nodejs /hello/hello.js
6774bc75b25e584b9f132bf78894a90789180d868c3176593b96e3f426db4118
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container
pi@raspberrypi:~ $ docker restart hello
hello
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

We just need to add docker restart hello in hello.git/hooks/post-receive; similar to the Staging Node Application post where we added pm2 restart hello to restart the pm2 application
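
A minimal sketch of such a hello.git/hooks/post-receive hook; assuming the bare repository and the /home/pi/hello working tree used earlier in this series (adjust the paths to your own layout; the hook file must be executable)

#!/bin/sh
# check out the freshly pushed code into the folder that is mounted into the container
GIT_WORK_TREE=/home/pi/hello git checkout -f
# restart the container so node picks up the new files
docker restart hello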


Happy Containering!

Docker on Raspberry Pi

Docker allows us to package our application with all its dependencies into a standardized unit; the application runs in a Container that has everything it needs to run, and it is kept in isolation from the other Containers running on the Server. Containers are architecturally different from Virtual Machines and are more portable and efficient; they share the kernel and run as isolated processes in user space on the host operating system.

image

To run Docker on Raspberry Pi; we can either run premade images (with the Host OS included) or install docker on Raspbian. The installation package in the official repository is a bit outdated and will not work with Docker Hub; the official registry from where we can download Container images with ease. Hypriot has made Debian installation packages available on their download page from where we can install the latest package (at the time of this writing; it's 1.10.3)

To install it; give the following commands on the Raspbian Jessie Lite

$ curl -sSL https://downloads.hypriot.com/docker-hypriot_1.10.3-1_armhf.deb > docker-hypriot_1.10.3-1_armhf.deb
$ sudo dpkg -i docker-hypriot_1.10.3-1_armhf.deb
$ sudo sh -c 'usermod -aG docker $SUDO_USER'
$ sudo systemctl enable docker.service
$ sudo service docker start

Once docker is installed and running; we need to run an image; given the underlying CPU platform is different (ARM); we can't just go ahead and run any image from Docker Hub. Fortunately there is a "resin/rpi-raspbian:jessie" image that we can use. To download and run this image use docker run -it like this

pi@raspberrypi:~ $ docker run -it resin/rpi-raspbian:jessie
Unable to find image 'resin/rpi-raspbian:jessie' locally
jessie: Pulling from resin/rpi-raspbian
242279a37c38: Pull complete
072ccb327ac8: Pull complete
de6504dccd59: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:534fa5bc3aba67f7ca1b810110fef1802fccf9e52326948208e5eb81eb202710
Status: Downloaded newer image for resin/rpi-raspbian:jessie
root@5b62a2d14818:/#

  • docker run is to run the container
  • -it is to get an interactive terminal when it's run
  • root@xxxxx# is the Container shell; note down the value after root@; it's the container ID

Once we have the container running; we can go ahead and install some package; say NANO; using apt-get update and apt-get install nano. When it's installed; we need to "commit" the container; think of it as similar to a source code control system. When we commit the container it creates an image; from which we can start an instance of a container similar to how we used the resin/rpi-raspbian:jessie image. To commit; exit from the container shell and then issue the docker commit command like this

root@5b62a2d14818:/# exit
pi@raspberrypi:~ $ sudo docker commit -m "Added nano" -a "Khurram" 5b62a2d14818 khurram/pi:nano
sha256:99f0053b387ed69f334926726f4ce0fd7c1946e4cc11b65e7a42e6a58eff9685
pi@raspberrypi:~ $

  • 5b62a2d14818 is the container ID that we copied from the container’s shell prompt
  • khurram/pi:nano is the image target; khurram is the user, pi is the repository name and nano is the tag

Once committed; we can run it again using docker run, specifying the image target

$ docker run -it khurram/pi:nano

Once running; we can continue installing other things; node in our case; and when that's done we can exit from the container shell and commit the updated container again

  • $ apt-get install nodejs and $ apt-get install npm to install Node and the Node Package Manager
  • $ ln -s /usr/bin/nodejs /usr/bin/node to create a symbolic link so nodejs can be called as node (npm expects this)

root@e5e7005489a2:/# exit
pi@raspberrypi:~ $ docker commit -m "Added node" -a "Khurram" e5e7005489a2 khurram/pi:node
sha256:6af338545368613b015001afddacc9f8abff5b39d5f2f9111bc643cb47dc87de
pi@raspberrypi:~ $

This way; we should have three images altogether by now; the base resin/rpi-raspbian:jessie that we ran the first time; and two more that we committed; we can list these images using docker images

pi@raspberrypi:~ $ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
khurram/pi           node                6af338545368        43 seconds ago      159 MB
khurram/pi           nano                99f0053b387e        17 minutes ago      105.6 MB
resin/rpi-raspbian   jessie              80a737f1a654        6 days ago          80.01 MB

  • If you have created some unwanted image; you can delete it using docker rmi image; e.g. docker rmi khurram/pi:nano

Using docker info we can learn about the currently configured settings of docker; Docker Root Dir is interesting; it tells us where Docker is storing all its data including Containers

pi@raspberrypi:~ $ sudo docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 4
Server Version: 1.10.3
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.4.9+
Operating System: Raspbian GNU/Linux 8 (jessie)
OSType: linux
Architecture: armv6l
CPUs: 1
Total Memory: 434.7 MiB
Name: raspberrypi
ID: TNHK:5MI5:JGFD:DE3I:B6MX:VXVB:TCMD:ZTQI:IQKO:NH46:6NXP:OW6O
Debug mode (server): true
File Descriptors: 11
Goroutines: 21
System Time: 2016-05-31T10:08:20.649012315Z
EventsListeners: 0
Init SHA1: 0db326fc09273474242804e87e11e1d9930fb95b
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpuset support

We can check out /var/lib/docker to learn what’s there

pi@raspberrypi:~ $ sudo -i
root@raspberrypi:~# cd /var/lib/docker/
root@raspberrypi:/var/lib/docker# ls
containers  image  network  overlay  tmp  trust  volumes

We can copy a docker image to another server using docker save and docker load; docker save makes a tar file and its syntax is docker save -o tar-file image-name, and docker load takes a tar file and its syntax is docker load -i tar-file. This way we can copy our container image to another server (or Raspberry in this case) seamlessly and can expect that it will "just work" given the image has everything it needs; no installation, no version conflicts etc. By now we should have an idea of how useful docker containers and images are in the long run and why docker is getting so popular. We can run containers from these images or download new images from the Hub; and can run multiple containers from an image as required.
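
For example (the image name is the one we built above; the tar file name and the second Raspberry "otherpi" are just illustrative)

pi@raspberrypi:~ $ docker save -o pi-node.tar khurram/pi:node
pi@raspberrypi:~ $ scp pi-node.tar pi@otherpi:
pi@otherpi:~ $ docker load -i pi-node.tar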

Monitoring Raspberry Pi

Before commissioning the Raspberry Pi; it would be nice if we set up some monitoring; so we can correlate any issue in the field with the device status. This becomes especially important for devices like the Raspberry Pi that have limited resources. The simplest and easiest way is to set up SNMP; it's the protocol to collect and organize information about managed devices on IP networks. Given Raspbian is just another Linux; we can easily set up SNMPD; an SNMP daemon; and can monitor the device remotely or even from within the device. To install SNMPD; issue the following commands; and once installed back up /etc/snmp/snmpd.conf

$ sudo apt-get install snmpd
$ sudo cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.original

Next; edit the snmpd.conf; remove everything and punch in the following

agentAddress    127.0.0.1:161
rocommunity     public

Restart the snmpd using

$ sudo /etc/init.d/snmpd restart

With the above arrangement in place; we are basically running SNMPD (the SNMP Agent) on localhost UDP port 161 and have configured "public" as a read-only community (password / key). We can now "query" SNMPD using the snmp utilities. To install them issue

$ sudo apt-get install snmp

Once installed; issue the following command to query free CPU percentage via snmp and we will have an output like this

$ snmpget -v 1 -c public localhost .1.3.6.1.4.1.2021.11.11.0
iso.3.6.1.4.1.2021.11.11.0 = INTEGER: 90

.1.3.6.1.4.1… is an OID; there are well-known OIDs (Object Identifiers); this one is for free (idle) CPU; there are other OIDs for things like memory and network traffic; some of which we will use later.
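
For instance; the HOST-RESOURCES storage / memory OIDs that we will use later in the MRTG configuration can be queried the same way (the storage indexes may differ per device; walking hrStorageDescr lists them)

$ snmpget -v 1 -c public localhost .1.3.6.1.2.1.25.2.3.1.6.1
$ snmpwalk -v 1 -c public localhost .1.3.6.1.2.1.25.2.3.1.3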

Next we want to "expose" the CPU Temperature through SNMP; by default it's not there; but given device IO in Linux is done through files; and SNMPD has an option to get data by running a script and expose it through an additional OID; we can add it ourselves. Let's make a script for CPU temperature

$ nano snmp-cpu-temp.sh

Punch in the following

#!/bin/bash
if [ "$1" = "-g" ]
then
        echo .1.3.6.1.2.1.25.1.8
        echo gauge
        cat /sys/class/thermal/thermal_zone0/temp
fi
exit 0

Make the script executable and run it with -g

$ chmod +x snmp-cpu-temp.sh
$ ./snmp-cpu-temp.sh -g
.1.3.6.1.2.1.25.1.8
gauge
49768

The temperature is 49.768 degrees Celsius; let's edit snmpd.conf ($ sudo nano /etc/snmp/snmpd.conf) to add this script; make it look like this; the pass line is the new addition

agentAddress    127.0.0.1:161
rocommunity     public
pass            .1.3.6.1.2.1.25.1.8 /bin/sh /home/pi/snmp-cpu-temp.sh

Restart the snmpd and query the OID using snmpget and we will have the value

$ sudo /etc/init.d/snmpd restart
[ ok ] Restarting snmpd (via systemctl): snmpd.service.
$ snmpget -v 1 -c public localhost .1.3.6.1.2.1.25.1.8
iso.3.6.1.2.1.25.1.8 = Gauge32: 50458

As we already have NGINX installed; we can easily set up MRTG; it's a lightweight and widely used monitoring tool that generates graphs using data gathered over SNMP; which we can host in NGINX and view remotely. To install it issue this command:

$ sudo apt-get install mrtg

MRTG comes with helper utilities like CFGMAKER and INDEXMAKER; we can use cfgmaker to make mrtg.cfg (in /etc) but given we also want to include our own additional OID; let's make the mrtg.cfg ourselves; take a backup of the original /etc/mrtg.cfg; remove everything and punch in the following

WorkDir: /var/www/mrtg
EnableIPv6: no
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt

Target[CPU]: 100 - .1.3.6.1.4.1.2021.11.11.0&.1.3.6.1.4.1.2021.11.11.0:public@localhost
Options[CPU]: integer, gauge, nopercent, growright, unknaszero, noo
MaxBytes[CPU]: 100
YLegend[CPU]: CPU %
ShortLegend[CPU]: %
LegendI[CPU]: CPU
Legend1[CPU]: CPU usage
Title[CPU]: Raspberry Pi CPU load
PageTop[CPU]: <H1>Raspberry Pi - CPU load</H1>

Target[Memory]: .1.3.6.1.2.1.25.2.3.1.6.1&.1.3.6.1.2.1.25.2.3.1.6.3:public@localhost
Options[Memory]: integer, gauge, nopercent, growright, unknaszero, noo
MaxBytes[Memory]: 100524288
YLegend[Memory]: Mem - 1K pages
Factor[Memory]: 1024
ShortLegend[Memory]: B
LegendI[Memory]: Physical
LegendO[Memory]: Virtual
Legend1[Memory]: Physical
Legend2[Memory]: Virtual Memory
Title[Memory]: Raspberry Pi Memory Usage
PageTop[Memory]: <H1>Raspberry Pi - Memory Usage</H1>

Target[CPU-temp]: .1.3.6.1.2.1.25.1.7.0&.1.3.6.1.2.1.25.1.8:public@localhost
Options[CPU-temp]: integer, gauge, nopercent, growright, unknaszero, noi
Factor[CPU-temp]: 0.001
MaxBytes[CPU-temp]: 100000
Title[CPU-temp]: CPU temperature on Raspberry Pi
YLegend[CPU-temp]: Temperature °C
ShortLegend[CPU-temp]: °C
Legend2[CPU-temp]: CPU temperature in °C
LegendO[CPU-temp]: CPU temperature
PageTop[CPU-temp]: <H1>Raspberry Pi - CPU Temperature</H1>

Target[Ethernet]: 2:public@localhost
MaxBytes[Ethernet]: 12500000
Title[Ethernet]: Raspberry Pi Ethernet Traffic Usage
PageTop[Ethernet]: <H1>Raspberry Pi - Ethernet Traffic Usage</H1>
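
Before wiring mrtg into an init script; we can run it once by hand to verify the configuration (the first couple of runs typically print warnings about missing log files; that is normal; LANG=C avoids mrtg refusing to run under a UTF-8 locale)

$ sudo env LANG=C mrtg /etc/mrtg.cfg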

mrtg has an option to run as a daemon; it will then poll the configured SNMP targets and regenerate the graphs periodically (for daemon mode mrtg typically needs RunAsDaemon: Yes, and an Interval setting, in mrtg.cfg). Next we need a script for /etc/init.d through which we can not only run mrtg as a daemon but also have it start across system reboots. We can google the internet; people out there have already made such scripts; here is one such script that you can save as /etc/init.d/mrtg ($ sudo nano /etc/init.d/mrtg)

#!/bin/sh
# simple init-style script to start / stop mrtg as a daemon
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON="/usr/bin/mrtg"
PARAM=" --user=root /etc/mrtg.cfg --logging /var/log/mrtg.log"
NAME="MRTG"
DESC="Multi Router Traffic Grapher Daemon"

test -f $DAEMON || exit 0
set -e
case "$1" in
start)
        echo -n "Starting $DESC: "
        env LANG=C $DAEMON $PARAM
        echo "$NAME."
        ;;
stop)
        echo -n "Stopping $DESC: "
        killall -9 mrtg
        echo "$NAME."
        ;;
restart|force-reload)
        echo -n "Restarting $DESC: "
        killall -9 mrtg
        sleep 1
        env LANG=C $DAEMON $PARAM
        echo "$NAME."
        ;;
*)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|restart|force-reload}"
        exit 1
        ;;
esac
exit 0

Once saved; make it executable; create a directory /var/www/mrtg where it will store the graphs; and start the daemon. It will give a few warnings the first time it runs; that's OK.

$ sudo chmod +x /etc/init.d/mrtg
$ sudo mkdir /var/www/mrtg
$ sudo /etc/init.d/mrtg start
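
On Raspbian / Debian the script usually also has to be registered with the init system so that it actually runs at boot; something along these lines (a sketch; systemd-based setups may differ)

$ sudo update-rc.d mrtg defaults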

All we need now is to expose the graphs through NGINX; let's edit /etc/nginx/sites-available/default and make it look like this ($ sudo nano /etc/nginx/sites-available/default); the location /mrtg block is the new addition

server {
        listen 80 default_server;
        listen [::]:80 default_server;
        index index.html index.htm index.nginx-debian.html;
        server_name _;
        location /mrtg {
                alias /var/www/mrtg;
        }
        location /node {
                proxy_pass http://localhost:3000;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

Create an index.html using mrtg’s indexmaker utility and restart nginx. While using indexmaker; it might give a permission error; use sudo -i and log out when it's done

$ sudo indexmaker /etc/mrtg.cfg >> /var/www/mrtg/index.html
-bash: /var/www/mrtg/index.html: Permission denied
$ sudo -i
# indexmaker /etc/mrtg.cfg >> /var/www/mrtg/index.html
# service nginx restart
# logout

Check the graphs at http://ip/mrtg; after a while it will look something like this

image

Go ahead and click the graph to view the details!

With this in place; we can deploy our applications and send our Raspberries into the field; and once they are there we can remotely check how they are performing! On the same lines; we can expose SNMP to some central monitoring system where sys-admins can keep an eye on the devices; if such an arrangement is in place!
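
For that; snmpd would need to listen on an external interface instead of just localhost; a minimal sketch; assuming a hypothetical 192.168.1.0/24 management network (adjust the addresses and the community string to your environment; and keep in mind SNMP v1/v2c communities travel in clear text); would be to change /etc/snmp/snmpd.conf along these lines and restart snmpd

agentAddress    udp:161
rocommunity     public 192.168.1.0/24

$ sudo /etc/init.d/snmpd restart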

Happy coding and deployments!