
Docker on Windows: Docker for Windows

Docker on Windows

If you are using Windows 10 x64 1511 (November Update) and have HyperV support in hardware / OS, you can try out the public beta of Docker for Windows; it has all the things you need. There is no need to download binaries and keep them in the PATH, no need to set up Boot2Docker, no need to set up a NAT or DHCP server, and no need of CIFS for mounting Windows folders into the containers. Installing Docker for Windows takes care of all these things. Unlike Docker Toolbox, which used VirtualBox, it uses HyperV for its MobyLinuxVM running the Docker daemon; it installs the Docker utilities, adding them to the PATH (you should move previously downloaded binaries to some place not in the PATH after its installation), and it supports mounting Windows folders into the containers as well. In short, this is the way to go if you have a supported OS!


C:\Users\khurram>docker version
Client:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   91e29e8
Built:        Sat Jul  2 00:09:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.0-rc3
API version:  1.24
Go version:   go1.6.2
Git commit:   876f3a7
Built:        Tue Jul  5 02:20:13 2016
OS/Arch:      linux/amd64
Experimental: true

To use a Windows folder in the container, right-click the Docker whale icon in the system tray and enable Shared Drives. Let's modify the docker-compose YML file we created in Dockerizing Mongo and Express for Docker for Windows and "up" our containers thereafter; a hedged sketch of the one change needed follows.

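Assuming the sources live at C:\khurram\src\HelloExpress, as elsewhere in this series, the only change in docker-compose.yml is the helloexpress volume entry, which now points at the Windows path instead of the CIFS mount (depending on the Docker for Windows version, the path may instead need to be written as /c/khurram/src/HelloExpress); the rest of the file stays as-is:

    helloexpress:
        volumes:
        - C:\khurram\src\HelloExpress:/app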

  • It also adds a "docker" host entry in Windows for the Linux VM and uses HyperV networking; you can open the exposed application at a friendly URL, in our case http://docker:3000

  • You can learn about the IP scheme it has configured from the Network tab of the same settings application


docker-compose

Dockerizing Node

When using Docker for some real world application, multiple containers are often required; to build and run them along with their Dockerfiles we need scripts, as realized in Dockerizing Mongo and Express. This becomes a hassle; Docker has the docker-compose utility that solves exactly this. We create a "Compose file" (docker-compose.yml), a YAML file (a human readable data serialization format), in which we configure the application's services and their requirements; then using the tool we can create and start all the services from this compose file. We define each container environment in a Dockerfile, define how the containers relate to each other and run together in the compose file, and then using docker-compose we can build / run / stop them all in a single go.

Let's make a docker-compose.yml file for our Mongo / Express application. Our application needs two data volumes: a Docker volume for the MongoDB data and the host directory where our Express JS application files are (mounted through CIFS). We need to declare the MongoDB data volume in the compose file. We need two services, one for Mongo and the other for Express (Node); we will define these with build entries along with dockerfile entries, as we are using alternate file names. We can define image names there as well. For HelloExpress we need to expose the ports; this container also "depends on" the mongodb service, and with this entry in the compose file the tool will take care to run it first. We also need to define the links with the proper target name, as this is required given the Express JS application needs a known host name for the MongoDB container hard coded in its "connection string". If we don't define the target name, docker-compose names the container with its own scheme; we can define known names using container_name entries if we want to. Here's the docker-compose.yml file

version: '2'
volumes:
    mongo-data:
        driver: local
services:
    mongodb:
        build:
            context: .
            dockerfile: Dockerfile.mongodb
        image: khurram/mongo
        #container_name: mongodb
        volumes:
        - mongo-data:/data/db
    helloexpress:
        build:
            context: .
            dockerfile: Dockerfile.node
        image: khurram/node
        #container_name: helloexpress
        volumes:
        - /mnt/srcshare/HelloExpress:/app
        entrypoint: nodejs /app/bin/www
        ports:
        - "3000:3000"
        depends_on:
        - mongodb
        links:
        - mongodb:mongodb

Once the compose file is in place, we can use docker-compose up and it will build + run + attach the required volume and services as defined. We can use the -d parameter with docker-compose up to detach

C:\khurram\src\HelloExpress>docker-compose.exe up -d
Creating network "helloexpress_default" with the default driver
Creating helloexpress_mongodb_1
Creating helloexpress_helloexpress_1

C:\khurram\src\HelloExpress>rem Test http://DockerVM:3000

C:\khurram\src\HelloExpress>docker-compose.exe down
Stopping helloexpress_helloexpress_1 ... done
Stopping helloexpress_mongodb_1 ... done
Removing helloexpress_helloexpress_1 ... done
Removing helloexpress_mongodb_1 ... done
Removing network helloexpress_default

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly with the docker-compose.yml file; DockerBuild.bat and DockerRun.bat are no longer needed, but I am leaving them there as well so you can compare and see how docker-compose.yml is made from those two scripts!


Dockerizing Mongo and Express

Dockerizing Node

Now that we are familiar with Docker and how it helps us with isolation and compartmentalization, let's expand and try deploying some real world application. I will be using the application that we built for MongoDB and Mongoose; it's an Express JS / MongoDB application and we will deploy it across two Docker containers, one for MongoDB and the other for Express, in the spirit of Microservice Architecture. As per Wikipedia, microservices are a more concrete and modern interpretation of service-oriented architectures (SOA) used to build distributed software systems. Like in SOA, services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal; also like in SOA, these services use technology agnostic protocols. Using a separate container for each microservice, we get fine control and can monitor and distribute components of our application at each microservice level.

For MongoDB, let's start an Ubuntu instance, install Mongo and try to run it; we will learn that it needs a /data/db directory


We can create that directory in the container, but as we know, the data is lost when the container is removed. It's recommended to use a Data Volume for such a requirement, and we will mount one as /data/db. Let's create a Dockerfile for our MongoDB container

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb
http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org

EXPOSE 27017

ENTRYPOINT ["/usr/bin/mongod"]

Let's create a Dockerfile for Node JS as well; we will not include the application code in the Node JS container, instead we will use a Data Volume for the application files. Note that Node is not run as ENTRYPOINT or CMD; we will start it by passing the startup JS file as a parameter to the docker run command; this way we can reuse our Node JS container image for different applications, for scenarios like running a web service in its own container and the front end application in a separate container

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN apt-get install -y npm

To build the container images, give these commands

docker build -t khurram/node -f Dockerfile.node .
docker build -t khurram/mongo -f Dockerfile.mongodb .

  • I have kept different names for the Dockerfiles of our containers; as these names are not standard I am passing the file name using the -f argument; this is done so that I can have both files in one directory
  • It's better to make a BAT / SH script for the above commands; a sketch follows
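A minimal DockerBuild.bat sketch (the actual DockerBuild.bat in the repo linked at the end may differ):

rem DockerBuild.bat -- builds both images from the current directory
docker build -t khurram/node -f Dockerfile.node .
docker build -t khurram/mongo -f Dockerfile.mongodb .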

Before running the two docker containers, we need two data volumes: one for Mongo and the other for the Node application. For the Node application we will use a host directory; in our case a directory in the Boot2Docker VM; we will use cifs-utils to mount the folder from the Windows HyperV host sharing it on the network, as discussed in Docker on Windows: Customized Boot2Docker ISO with CIFS; from there on it can act as a host directory in the Docker VM and we can use it for the data volume. Unfortunately we can't use this arrangement for Mongo, as it expects certain features from the file system (for its data locking etc) and a directory mounted using cifs-utils doesn't have these features; therefore we will create a volume using docker and use that instead

docker volume create --name mongo-data
mongo-data

docker volume inspect mongo-data
[
    {
        "Name": "mongo-data",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mongo-data/_data",
        "Labels": {}
    }
]

To start the Mongo container issue this command

docker run -d -p 27017:27017 -v mongo-data:/data/db --name mongodb khurram/mongo

  • The above created mongo-data volume is passed using the -v argument
  • It's mounted as /data/db in the container, as required by Mongo; we learned this by installing it in a test container
  • The Mongo port is exposed; we can test by connecting to Docker VM from the development machine!

Docker has a Linking feature using which we can link one or more containers to a particular container while starting it; doing so adds an /etc/hosts entry as well as sets environment variables. It's important that the linked container is given a proper name; you will see that the /etc/hosts entry and the environment variables all depend on it. Let's start a khurram/node instance linking the mongodb container that we already have started!

docker run -it -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node
root@7be354a7e084:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      mongodb 75e912d09a6c
172.17.0.3      7be354a7e084
root@7be354a7e084:/# set
BASH=/bin/bash
….
MONGODB_NAME=/helloexpress/mongodb
MONGODB_PORT=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP=tcp://172.17.0.2:27017
MONGODB_PORT_27017_TCP_ADDR=172.17.0.2
MONGODB_PORT_27017_TCP_PORT=27017
MONGODB_PORT_27017_TCP_PROTO=tcp

UID=0
_=/etc/hosts
root@7be354a7e084:/# cd /app/
root@7be354a7e084:/app# ls
DockerBuild.bat  DockerRun.bat       Dockerfile.node       HelloExpress.sln  bin     node_modules  package.json  routes
Dockerfile.mongodb  HelloExpress.njsproj  app.js            models  obj           public        views

  • Given it has added the /etc/hosts entry, we can simply use the mongodb name in the connection string for the mongoose.connect() call; see the sketch after this list
  • Note that the information about mongodb's exposed port is also available in the environment variables
  • Note that the CIFS-mounted "local" directory is mounted as a volume in the container and we can access its content accordingly
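For illustration only (the database name here is hypothetical; the actual connection string lives in app.js in the repo), the connect call can simply use the linked host name:

mongoose.connect('mongodb://mongodb/HelloExpress');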

Once the data volumes are in place, container linking is understood and app.js is updated accordingly for mongoose.connect(), let's clean up and start fresh instances of our containers

docker stop mongodb
docker stop helloexpress

docker rm mongodb
docker rm helloexpress

docker run -d -v mongo-data:/data/db --name mongodb khurram/mongo
docker run -d -p 3000:3000 -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node nodejs /app/bin/www

  • It's better to make a BAT / SH script for the above commands; a sketch follows
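A minimal DockerRun.sh sketch wrapping the above (the actual DockerRun.bat in the repo may differ):

#!/bin/sh
# stop and remove any existing instances, then start fresh ones
docker stop mongodb helloexpress
docker rm mongodb helloexpress
docker run -d -v mongo-data:/data/db --name mongodb khurram/mongo
docker run -d -p 3000:3000 -v /mnt/srcshare/HelloExpress:/app --link mongodb:mongodb --name helloexpress khurram/node nodejs /app/bin/www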

Code @ https://github.com/khurram-aziz/HelloExpress is updated accordingly with the DockerBuild.bat, DockerRun.bat and the Dockerfiles for Mongo and Node


Docker on Windows: Customized Boot2Docker ISO with CIFS

Docker on Windows

When using Docker in a Linux Virtual Machine on Windows (or Mac), especially in a development environment, you are definitely going to need "some way" to access data on the host OS (the source code / data of your application). VirtualBox has the ability to expose the user home folder to the VMs, and when you create a Boot2Docker VM using docker-machine it mounts the user home folder so you can access it; but when using the HyperV driver, sadly, this is not the case, as HyperV is a bit more restrictive. The simplest way is to use Windows Shares; if you have set up a NAT switch, making it "Private" and sharing the required folder, you can access it from the Linux / Boot2Docker VM. You will need to install cifs-utils

As per Wikipedia, Server Message Block (SMB), one version of which was also known as Common Internet File System (CIFS), operates as an application-layer network protocol mainly used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. Boot2Docker is based on Tiny Core Linux, which has a notion of extensions that exist as tcz files; we load them using tce-load. For cifs-utils on Boot2Docker we need to issue the following commands

wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
tce-load -i cifs-utils.tcz

Once installed, we can mount the shared folder from the HyperV host machine using mount; say for \\192.168.10.1\src we will use the following commands

sudo mkdir /mnt/srcshare
sudo mount -t cifs //192.168.10.1/src /mnt/srcshare -o user=khurram,pass=password

Any extension we install on Tiny Core Linux gets lost across reboots; given CIFS is often needed in a development environment (especially if using HyperV as the virtualization platform), it's better to create a Docker VM using a "customized Boot2Docker ISO". Interestingly, we can create a Docker image to build such a customized Boot2Docker ISO. Create a Dockerfile with this content

FROM boot2docker/boot2docker

#wget http://distro.ibiblio.org/tinycorelinux/5.x/x86/tcz/cifs-utils.tcz
#tce-load -i cifs-utils.tcz

RUN echo "\nBoot2Docker with CIFS\n" >> $ROOTFS/etc/motd
RUN curl -L -o /tmp/cifs-utils.tcz $TCL_REPO_BASE/tcz/cifs-utils.tcz && \
unsquashfs -f -d $ROOTFS /tmp/cifs-utils.tcz && \
rm -rf /tmp/cifs-utils.tcz
RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]

To create an ISO; give these commands

docker build -t khurram/boot2docker:cifs -f YourAboveDockerFile .
docker run --rm khurram/boot2docker:cifs > boot2docker.cifs.iso

  • khurram/boot2docker:cifs is the tag name for Docker Image
  • docker build can take considerable time, given it makes a 2GB+ image

And then to create a Docker VM using this customized ISO; use docker-machine

docker-machine create --driver hyperv --hyperv-virtual-switch NAT --hyperv-boot2docker-url boot2docker.cifs.iso Cifs

  • NAT is the name of HyperV Virtual Switch
  • Cifs is the name of HyperV VM


  • Note the MOTD we added; it confirms the VM was made with the customized Boot2Docker ISO

You can now easily mount network shares in the Boot2Docker VM and then mount that host directory as a data volume in a docker container using docker run's -v flag. Putting it together (with the share path and image name used elsewhere in this series; substitute your own):
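sudo mount -t cifs //192.168.10.1/src /mnt/srcshare -o user=khurram,pass=password
docker run -it -v /mnt/srcshare/HelloExpress:/app khurram/node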


Docker on Windows: Windows Containers

Docker on Windows

Windows Containers are coming to the next versions of the server and client OSes; Windows Server Containers will have Linux-like isolation through namespaces and processes, while Hyper-V Containers use a lightweight virtual machine; the latter can be tried on Windows 10 Insider builds. You need build 14352 or later.

There is a step by step guide available at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/quick_start_windows_10 and following it you can have Hyper-V Containers running on Windows 10

Here’s the output of some Docker commands:

PS C:\WINDOWS\system32> docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
microsoft/sample-dotnet   latest              28da49c3bff4        6 days ago          918.3 MB
nanoserver                10.0.14300.1016     3f5112ddd185        4 weeks ago         810.2 MB
nanoserver                latest              3f5112ddd185        4 weeks ago         810.2 MB
PS C:\WINDOWS\system32> docker ps -a
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS                      PORTS               NAMES
187e8f0bade3        microsoft/sample-dotnet   "dotnet dotnetbot.dll"   12 minutes ago      Exited (0) 11 minutes ago                       sad_northcutt

PS C:\WINDOWS\system32> docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 2
Server Version: 1.12.0-dev
Storage Driver: Windows filter storage driver
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: transparent nat null
Kernel Version: 10.0 14361 (14361.0.amd64fre.rs1_release.160603-1700)
Operating System: Windows 10 Pro Insider Preview
OSType: windows
Architecture: x86_64
CPUs: 8
Total Memory: 15.94 GiB
Name: ENVY
ID: ****************************************
Docker Root Dir: C:\ProgramData\docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8


Dockerfile

Dockerizing Node

Docker can build images automatically by reading the instructions from a Dockerfile; it's a text file that contains the commands describing how to assemble the required image. This can be used as a replacement for manually creating an image from scratch, installing the required software etc and then exporting and loading it someplace else, the technique we discussed in the first Docker post here; we can simply hand over the Dockerfile instead. Let's create a Node container using a Dockerfile for that simple Hello World thing! Create a Dockerfile and punch in the following

FROM ubuntu
MAINTAINER Khurram <khuziz@hotmail.com>

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs
RUN apt-get install -y build-essential
RUN apt-get install -y npm

ADD hello.js /app/hello.js

EXPOSE 3000

WORKDIR /app
CMD ["nodejs", "hello.js"]

  • Using FROM, we pick the ubuntu base image; there are many to choose from at Docker Hub / Registry
  • Using RUN, we give the commands that need to run to set up the required things in the container
  • Using ADD, we add the application file(s) into the container; we use ADD or COPY for this
  • Using EXPOSE, we tell which ports get exposed; when the container is run using -P, it will expose this port and map it to some random available port on the Docker Machine
  • WORKDIR sets the working directory for the subsequent RUN, CMD, ADD/COPY and ENTRYPOINT instructions
  • Using CMD, we run nodejs with our application as the container's default command

Once the Dockerfile is in place, we can "compile" it and build the image using docker build

>docker build -t khurram/node:hello .

  • Using -t we specify the tag name of the image that will get created
  • The last dot is the context, the directory where docker build will run; it will look for the Dockerfile there (and some other files if we create them, like .dockerignore) and build from the specified context


After a while our image will be created; we can check it using docker images and run it using docker run

C:\khurram\src\Staging>docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
khurram/node        hello               b35d15d98edb        2 minutes ago       460 MB
microsoft/dotnet    latest              098162c455c7        11 days ago         576 MB
ubuntu              latest              2fa927b5cdd3        2 weeks ago         122 MB

C:\khurram\src\Staging>docker run -d -p 3000:3000 khurram/node:hello
ecebef4649899b5e46eac42aeedf78372998e00b7a37376cda71c53e6d400148

C:\khurram\src\Staging>docker-machine ls
NAME          ACTIVE   DRIVER   STATE     URL                        SWARM   DOCKER    ERRORS
Boot2Docker   *        hyperv   Running   tcp://192.168.10.13:2376           v1.11.2

C:\khurram\src\Staging>curl http://192.168.10.13:3000
Hello World from Node in Container
C:\khurram\src\Staging>docker ps -a
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS                    NAMES
ecebef464989        khurram/node:hello   "nodejs hello.js"   3 minutes ago       Up 3 minutes        0.0.0.0:3000->3000/tcp   hopeful_leakey

Tips and Hacks

  • Just like HTML, the best way to learn Dockerfile tricks is to read others'; for instance Node's official Dockerfile; you will learn that instead of the ubuntu image they are using the buildpack-deps:jessie base image, which is leaner and results in a better container
  • Having RUN commands on separate lines results in better caching; the layers that get created can be reused across different images; for instance having apt-get update on its own as the first line will result in its own layer, and if we create another image for something else, say MongoDB, it will get reused; see the sketch after this list
  • Having meaningful tags for the images is useful for determining what's what in the long run
  • There exists cURL for Windows; you can download it, place it in some folder which is in the PATH, and use it similar to how you use it on Linux
  • You can get a prebuilt "docker.exe" (Docker CLI) on Windows three ways: through Chocolatey, through Docker Toolbox, or from Docker Toolbox's repository. Docker Toolbox uses Docker to build itself; from the Toolbox's Windows Dockerfile you can find out where the precompiled docker files are; look for the RUN curl lines with -o dockerbins.zip; you can make a URL and, using cURL for Windows, easily download that zip file and find the latest docker.exe in it
  • As we are using a Boot2Docker VM for Docker, running a container and exposing its ports exposes them at the VM level; if we want to expose them further, at the Windows host level, we need to forward the VM's ports; topic of a next post may be!
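To illustrate the caching point, a simplified sketch of two Dockerfiles sharing their first layer; as long as both start from the same base with the identical instruction, docker build reuses the cached apt-get update layer for the second image (the real Dockerfile.mongodb above adds the Mongo repo first, so in practice only identical leading instructions are shared):

# Dockerfile.node
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nodejs

# Dockerfile.mongodb
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mongodb-org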

 


Docker on Windows: HyperV, NAT and DHCP Server

Docker on Windows

In the first part, Docker on Windows, we created an Internal switch in HyperV and shared the external interface so that our Docker VM gets a fixed known IP as well as internet connectivity. This arrangement might not work all the time; Internet Connection Sharing (ICS) tends to assign IPs of its own choice, and if we want to switch from Wifi to Ethernet for internet connectivity (laptop scenario) it becomes messy. If you are using Windows 10 / 2016 HyperV, we can avoid the ICS setup and instead use the newly introduced HyperV interface type NAT. This allows us to have an internal IP subnet of our choice for our VMs (Docker VM); traffic from VMs connected to this interface will get NATed and the VMs will have internet connectivity. We can expose ports / services running on the VMs externally as well. Open up an administrative PowerShell and execute the following commands

> New-VMSwitch -Name "NAT" -SwitchType NAT -NATSubnetAddress 192.168.10.0/24
> New-NetNat -Name "NAT" -InternalIPInterfaceAddressPrefix "192.168.10.0/24"


  • "NAT" is the name of the switch
  • 192.168.10.0/24 is the subnet of our choice; the interface will automatically get the 192.168.10.1 IP, which we can use as the gateway for VMs connected to the NAT switch

The NAT switch will appear as "Internal" in HyperV's management UI


We used Boot2Docker for setting up the VM for Docker; it needs a DHCP server on the internal network; sadly HyperV networking doesn't have such an arrangement out of the box. If your host OS is a server you can set up DHCP services; but if you are using a client OS, i.e. Windows 10, you will need either a separate VM acting as a DHCP server (Linux Core or something like that) or some third party lightweight DHCP server application like http://dhcpserver.de that you can run on the host OS

  • dhcpserver.de has dhcpwiz.exe, a wizard that lets you create dhcpsrv.ini, and dhcpsrv.exe that you can run as a system tray application or as a Windows Service
  • Don't forget to add the firewall rule that the wizard lets you create in the last step
  • You can add a static IP binding for a MAC address in the ini file, like the sketch shown below

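A rough sketch of such a binding (the MAC and IP here are hypothetical, and the exact syntax should be checked against dhcpserver.de's documentation): a section named after the VM's MAC address carrying the fixed address:

[00-15-5D-01-02-03]
IPADDR=192.168.10.13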

With this arrangement in place you can have a known static IP of your choice and Boot2Docker will get it from the DHCP server. You might need to regenerate the certificates once this new setup is in place


Docker on Windows

Docker on Windows

Setting up Docker on Windows is slightly different, as Docker needs a Linux kernel and expects certain namespaces for its working. Therefore on Windows we need to set up a Virtual Machine (VM) as the Docker host. You can set up any Docker compatible Linux in a VM; Boot2Docker is a small Linux OS made especially for this purpose. The official way is to use Docker Toolbox; it comes with Docker Engine, Compose, Machine and Kitematic. There is a step by step guide available. It installs VirtualBox and sets up a Boot2Docker VM in it.

Docker /w HyperV

I wanted to use HyperV, as I am already using it for other VMs. If you want to use Docker with HyperV, you only need Machine (docker-machine); it's a Command Line Interface (CLI) to manage Docker VMs. It lets us create Docker hosts on our computers, at cloud providers or on remote servers in data centers. It accomplishes this by having a notion of "drivers", and HyperV is a supported driver. Get the latest docker-machine binary from its GitHub repository. At the time of this writing it's 0.7; I downloaded the x86_64 version and kept it somewhere already in my PATH so I can call it directly from anywhere!

Static Ip and Internet Connectivity for Docker VM

The VM for Docker needs to have a static IP; docker-machine generates the certificates for authentication and they are bound to the IP; if the Docker VM's IP changes, we have to regenerate the certificates every time and it becomes tedious. The Docker VM also needs internet connectivity so it can connect to Docker Hub / Registry to download images on demand. In HyperV, if we have a DHCP server available (Wifi router scenario) we can use an "External" interface and have the DHCP server assign a static IP bound to the VM's MAC address; or we can have an internal interface in HyperV and share the internet connection; doing so, the VM will get a private IP and will have internet connectivity automatically.


Boot2Docker VM

Once our HyperV switch is ready, we can give the docker-machine create command and it will download the latest boot2docker.iso and configure a VM all in one go!

C:\Users\khurram>docker-machine create --driver hyperv --hyperv-virtual-switch Docker Boot2Docker
Creating CA: C:\Users\khurram\.docker\machine\certs\ca.pem
Creating client certificate: C:\Users\khurram\.docker\machine\certs\cert.pem
Running pre-create checks...
(Boot2Docker) No default Boot2Docker ISO found locally, downloading the latest release...
(Boot2Docker) Latest release for github.com/boot2docker/boot2docker is v1.11.2
(Boot2Docker) Downloading C:\Users\khurram\.docker\machine\cache\boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.11.2/boot2docker.iso...
(Boot2Docker) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(Boot2Docker) Copying C:\Users\khurram\.docker\machine\cache\boot2docker.iso to C:\Users\khurram\.docker\machine\machines\Boot2Docker\boot2docker.iso...
(Boot2Docker) Creating SSH key...
(Boot2Docker) Creating VM...
(Boot2Docker) Using switch "Docker"
(Boot2Docker) Creating VHD
(Boot2Docker) Starting VM...
(Boot2Docker) Waiting for host to start...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env Boot2Docker

Once our Docker VM is running we can simply SSH into it and run a container; I am going to run microsoft/dotnet

C:\Users\khurram>docker-machine ssh Boot2Docker
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
_                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.11.2, build HEAD : a6645c3 - Wed Jun  1 22:59:51 UTC 2016
Docker version 1.11.2, build b9f10c9
docker@Boot2Docker:~$ docker run -it microsoft/dotnet:latest
Unable to find image 'microsoft/dotnet:latest' locally
latest: Pulling from microsoft/dotnet
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
7004cfc6e122: Pull complete
5f37c8a7cfbd: Pull complete
a85114b33970: Pull complete
62c4b050934f: Pull complete
Digest: sha256:7d93320d8be879967149b59ceed280bca70cbdf358a2a990467ca502f0e1a4be
Status: Downloaded newer image for microsoft/dotnet:latest
root@439c959eaa28:/# mkdir hello_world
root@439c959eaa28:/# cd hello_world/
root@439c959eaa28:/hello_world# dotnet new
Created new C# project in /hello_world.
root@439c959eaa28:/hello_world# dotnet restore

And then after some time…

Installed:
    113 package(s) to /hello_world/project.json
root@439c959eaa28:/hello_world# dotnet run
Project hello_world (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling hello_world for .NETCoreApp,Version=v1.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:03.7668485


Hello World!
root@439c959eaa28:/hello_world# cat /etc/issue
Debian GNU/Linux 8 \n \l

root@439c959eaa28:/hello_world# exit
docker@Boot2Docker:~$
C:\Users\khurram>docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
microsoft/dotnet    latest              098162c455c7        5 hours ago         576 MB

C:\Users\khurram>docker ps -a
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS                      PORTS               NAMES
439c959eaa28        microsoft/dotnet:latest   "/bin/bash"         2 hours ago         Exited (0) 38 seconds ago                       pedantic_chandrasekhar


Staging Node Application on Windows

Staging Node Application

We can run a Node application on Windows with IIS; there exists IISNODE that does exactly this. Simply install Node and IISNode on the server

Create a folder on the file system and an IIS Application Pool for the Node application. Give the IIS APPPOOL\Pool-Name user full access to the folder!

Set up a web site or a virtual folder for our Node application, choosing the newly created Application Pool

Create a simple hello.js, making sure that the http server listens at process.env.PORT and not some hard coded value; IISNODE sets the PORT environment variable and by default it uses Named Pipes. Create a web.config adding the iisnode handler in its configuration/system.webServer/handlers to handle %web%/hello.js requests; a sketch of both files follows
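A minimal sketch, assuming the file is named hello.js as above. The hello.js, listening on the IISNODE-provided port:

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello World from Node\n');
}).listen(process.env.PORT || 3000);

And the handler registration in web.config:

<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="hello.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>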

We can access our Hello World Node application at http://web-path/hello.js; IIS automatically spins up node.exe, and we can even change the application file and IISNode picks up the changes and recycles the node.exe processes. No special arrangements like GIT hooks / PM2 restarts are required.

http://web-path/hello.js is not a good looking URL; we would like to have simply http://web-path and our hello.js should respond. For this we can use the URL Rewrite extension; once installed, simply add a configuration/system.webServer/rewrite section in the web.config to rewrite all /* requests to hello.js and we will have the desired result; a sketch follows. With this arrangement in place we can now run Express.js apps easily!
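A sketch of such a rewrite section inside system.webServer (the rule name is arbitrary):

<rewrite>
  <rules>
    <rule name="node">
      <match url="/*" />
      <action type="Rewrite" url="hello.js" />
    </rule>
  </rules>
</rewrite>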


Running Node Application in Docker Container on Raspberry Pi

Dockerizing Node

Let's run a Node application in Docker on Raspberry Pi; for the proof of concept I will be using the simple hello world app and the GIT/SSH setup we made in Staging Node Application on Raspberry Pi. The Docker way of running an application is to keep our "data" and "application" files outside of the container, so that the container remains completely disposable. When running the container we can mount a directory from the Host OS; using this feature we can have our data and application files on the Host OS while they are used from the container.


We can continue to have the GIT / NGINX arrangement that we did in Staging Node Application on Raspberry Pi, but now we can run Node and MongoDB (and others) from containers. We already made the Node Docker image in Docker on Raspberry Pi; all we need is to run it so that we mount the /home/pi/hello directory into the Node container and run Node in the container; doing so we will have the Node server at the container's port 3000; we expose this port to the Host OS's port 3000 so that NGINX forwards the request there when it receives any request at the Host OS's http://ip/node endpoint

pi@raspberrypi:~ $ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
khurram/pi           node                6af338545368        5 hours ago         159 MB
khurram/pi           nano                99f0053b387e        6 hours ago         105.6 MB
resin/rpi-raspbian   jessie              80a737f1a654        7 days ago          80.01 MB
pi@raspberrypi:~ $ cd hello
pi@raspberrypi:~/hello $ ls
hello.js
pi@raspberrypi:~/hello $ docker run -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node
root@381ece8ec01a:/# cd /hello
root@381ece8ec01a:/hello# ls
hello.js
root@381ece8ec01a:/hello# nodejs hello.js
Server running at port 3000

  • -p 3000:3000 is to expose the container's port 3000 and map it to the Host OS's port 3000; the container port where the node application will run and the host OS port where nginx is expected to forward requests
  • -v localpath:remotepath is to mount the Host OS's localpath directory as remotepath in the container
  • -it is for interactive terminal
  • khurram/pi:node is the Node image we created

Once the container is running and we are on its terminal we can start our node app; we have to leave the terminal running so the Node server continues to run; from another terminal we can issue docker ps to get the list of all running containers

pi@raspberrypi:~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
381ece8ec01a        khurram/pi:node     "/bin/bash"         2 minutes ago       Up 2 minutes        0.0.0.0:3000->3000/tcp   goofy_khorana

  • Note how the ports are mapped
  • Note the NAMES column; Docker has named our container “dynamically”

We can try the http://ip:3000 and http://ip/node URLs and our node application should be running there; using the container name or container ID we can stop it. We can issue docker ps -a to list all the containers, including those that are stopped

pi@raspberrypi:~ $ docker stop goofy_khorana
goofy_khorana
pi@raspberrypi:~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS                       PORTS               NAMES
381ece8ec01a        khurram/pi:node             "/bin/bash"              9 minutes ago       Exited (130) 9 seconds ago                       goofy_khorana
aece7089082d        khurram/pi:node             "-p 3000:3000 -v /hom"   10 minutes ago      Created                                          determined_colden
e5e7005489a2        khurram/pi:nano             "/bin/bash"              6 hours ago         Exited (0) 5 hours ago                           grave_pasteur
5b62a2d14818        resin/rpi-raspbian:jessie   "/bin/bash"              6 hours ago         Exited (0) 6 hours ago                           tiny_feynman

As you can see, our containers are also stored on the Host OS; think of it like the working directory in source control: the server has the code images that we commit and the working directory has the current working copy of the source code; similarly, docker images are the images of containers that we committed, and containers are the running (or stopped) copies. They also eat up disk space and we should remove the unwanted ones; keeping an eye on STATUS we can learn which ones we are not using anymore and remove them using docker rm

pi@raspberrypi:~ $ docker rm goofy_khorana
goofy_khorana
pi@raspberrypi:~ $ docker rm aece7089082d
aece7089082d

We don't always have to get an interactive shell on starting a container; if we know what command to run when the container starts, we can start it in the background giving the command to run as a parameter; let's create a new container for our Node application (as we have deleted the previously created ones) from the Docker image, something like this

pi@raspberrypi:~ $ docker run -d -p 3000:3000 -v /home/pi/hello:/hello khurram/pi:node nodejs /hello/hello.js
49b347531127cc1d6c07f9b266e9e146afa0c4214c3c13514b7a851a444c525e
pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
49b347531127        khurram/pi:node     "nodejs /hello/hello."   9 seconds ago       Up 3 seconds        0.0.0.0:3000->3000/tcp   stupefied_wing
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

Restarting Container

Now if we reboot the Raspberry and give docker ps -a when it comes back, you will notice that our container is not running anymore

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                            PORTS               NAMES
49b347531127        khurram/pi:node     "nodejs /hello/hello."   2 minutes ago       Exited (143) About a minute ago                       stupefied_wing

This can be taken care of using --restart=always as a parameter to docker run; with this, even if our container exits unexpectedly, Docker will restart it, and it will also start when the machine boots (when the Docker daemon gets started)

pi@raspberrypi:~ $ docker run --restart=always -d -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node nodejs /hello/hello.js
575b6cf68407fb08bf0fae895ea1170f5cbc02fba596f903c1901e17aa859747
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container
pi@raspberrypi:~ $ sudo shutdown -r now

Using docker ps we can see that the second container, the one we started with --restart=always, is running after the boot!

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                       PORTS                    NAMES
575b6cf68407        khurram/pi:node     "nodejs /hello/hello."   About a minute ago   Up 4 seconds                 0.0.0.0:3000->3000/tcp   serene_mirz
49b347531127        khurram/pi:node     "nodejs /hello/hello."   6 minutes ago        Exited (143) 5 minutes ago                            stupefied_w
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

We can delete the previous container using docker rm; and if we want to protect the container ports from being exposed on the host, we can use iptables!

Restarting Container on changing application files

We know that we need to restart the node process when the application files change. In this case we can simply restart the Docker container; it takes almost the same time. This can be done using the docker restart command, but we need to "know" the container name so that we can use it in our post-receive GIT script, i.e. when new code is "pushed" the GIT hook can restart the container. We can have a static known name for our container if we run it with the --name parameter

pi@raspberrypi:~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
575b6cf68407        khurram/pi:node     "nodejs /hello/hello."   3 hours ago         Up 3 hours          0.0.0.0:3000->3000/tcp   serene_mirzakhani
pi@raspberrypi:~ $ docker stop 575b6cf68407
575b6cf68407
pi@raspberrypi:~ $ docker rm 575b6cf68407
575b6cf68407
pi@raspberrypi:~ $ docker run --restart=always --name hello -d -p 3000:3000 -v /home/pi/hello:/hello -it khurram/pi:node nodejs /hello/hello.js
6774bc75b25e584b9f132bf78894a90789180d868c3176593b96e3f426db4118
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container
pi@raspberrypi:~ $ docker restart hello
hello
pi@raspberrypi:~ $ curl http://localhost/node
Hello World from NODE in Container

We just need to add docker restart hello in hello.git/hooks/post-receive, similar to Staging Node Application where we added pm2 restart hello to restart the pm2 application; the resulting hook is sketched below
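Combining it with the hook from that post, hello.git/hooks/post-receive becomes:

#!/bin/sh
# check out the pushed code into the working directory, then bounce the container
GIT_WORK_TREE=/home/pi/hello git checkout -f
docker restart hello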


Happy Containering!

Docker on Raspberry Pi

Docker allows us to package our application with all its dependencies into a standardized unit; the application runs in a Container that has everything it needs and is kept in isolation from the other Containers running on the server. Containers are architecturally different from Virtual Machines and are more portable and efficient; they share the kernel and run as isolated processes in user space on the host operating system.


To run Docker on Raspberry Pi we can either run premade images (with Host OS) or install docker on Raspbian. The installation package in the official repository is a bit outdated and will not work with Docker Hub, the official registry from where we can download container images with ease. Hypriot has made Debian installation packages available on their download page from where we can install the latest package (at the time of this writing, 1.10.3)

To install it; give the following commands on the Raspbian Jessie Lite

$ curl -sSL https://downloads.hypriot.com/docker-hypriot_1.10.3-1_armhf.deb > docker-hypriot_1.10.3-1_armhf.deb
$ sudo dpkg -i docker-hypriot_1.10.3-1_armhf.deb
$ sudo sh -c 'usermod -aG docker $SUDO_USER'
$ sudo systemctl enable docker.service
$ sudo service docker start

Once docker is installed and running we need to run an image; given the underlying CPU platform is different, we can't just go ahead and run any image from Docker Hub. Fortunately there is the "resin/rpi-raspbian:jessie" image that we can use. To download and run this image use docker run -it like this

pi@raspberrypi:~ $ docker run -it resin/rpi-raspbian:jessie
Unable to find image 'resin/rpi-raspbian:jessie' locally
jessie: Pulling from resin/rpi-raspbian
242279a37c38: Pull complete
072ccb327ac8: Pull complete
de6504dccd59: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:534fa5bc3aba67f7ca1b810110fef1802fccf9e52326948208e5eb81eb202710
Status: Downloaded newer image for resin/rpi-raspbian:jessie
root@5b62a2d14818:/#

  • docker run is to run the container
  • -it is to get an interactive terminal when it runs
  • root@xxxxx# is the container shell; note down the value after root@; it's the container id

Once we have the container running we can go ahead and install some package, say NANO, using apt-get update and apt-get install nano. When it's installed we need to "commit" the container; think of it as similar to a source code control system. When we commit the container it creates an image from which we can start an instance of a container, similar to how we used the resin/rpi-raspbian:jessie image. To commit, exit from the container shell and then issue the docker commit command like this

root@5b62a2d14818:/# exit
pi@raspberrypi:~ $ sudo docker commit -m "Added nano" -a "Khurram" 5b62a2d14818 khurram/pi:nano
sha256:99f0053b387ed69f334926726f4ce0fd7c1946e4cc11b65e7a42e6a58eff9685
pi@raspberrypi:~ $

  • 5b62a2d14818 is the container ID that we copied from the container’s shell prompt
  • khurram/pi:nano is the image target; khurram is the user, pi is the name and nano is the tag

Once committed, we can run it again using docker run, specifying the image target

$ docker run -it khurram/pi:nano

Once running, we can continue installing other things, node in our case; when done, exit from the container shell and commit the updated container again

  • $ apt-get install nodejs and $ apt-get install npm to install Node and the Node Package Manager
  • $ ln -s /usr/bin/nodejs /usr/bin/node to create a symbolic link so nodejs can be called as node (npm expects this)

root@e5e7005489a2:/# exit
pi@raspberrypi:~ $ docker commit -m "Added node" -a "Khurram" e5e7005489a2 khurram/pi:node
sha256:6af338545368613b015001afddacc9f8abff5b39d5f2f9111bc643cb47dc87de
pi@raspberrypi:~ $

This way we should have three images altogether by now: the base raspbian:jessie that we ran the first time, and the two that we committed; we can list these images using docker images

pi@raspberrypi:~ $ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
khurram/pi           node                6af338545368        43 seconds ago      159 MB
khurram/pi           nano                99f0053b387e        17 minutes ago      105.6 MB
resin/rpi-raspbian   jessie              80a737f1a654        6 days ago          80.01 MB

  • If you have created some unwanted image, you can delete it using docker rmi image; e.g. docker rmi khurram/pi:nano

Using docker info we can learn about the currently configured settings of docker; Docker Root Dir is interesting; it tells where Docker stores all its data including the containers

pi@raspberrypi:~ $ sudo docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 4
Server Version: 1.10.3
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.4.9+
Operating System: Raspbian GNU/Linux 8 (jessie)
OSType: linux
Architecture: armv6l
CPUs: 1
Total Memory: 434.7 MiB
Name: raspberrypi
ID: TNHK:5MI5:JGFD:DE3I:B6MX:VXVB:TCMD:ZTQI:IQKO:NH46:6NXP:OW6O
Debug mode (server): true
File Descriptors: 11
Goroutines: 21
System Time: 2016-05-31T10:08:20.649012315Z
EventsListeners: 0
Init SHA1: 0db326fc09273474242804e87e11e1d9930fb95b
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpuset support

We can check out /var/lib/docker to learn what's there

pi@raspberrypi:~ $ sudo -i
root@raspberrypi:~# cd /var/lib/docker/
root@raspberrypi:/var/lib/docker# ls
containers  image  network  overlay  tmp  trust  volumes

We can copy a docker image to another server using docker save and docker load; docker save makes a tar file and its syntax is docker save -o tar-file image-name; docker load takes a tar file and its syntax is docker load -i tar-file; an example follows. This way we can copy our running container to another server (or Raspberry in this case) seamlessly and we can expect that it will "just work", given the container has everything it needs; no installation, no version conflicts etc. By now we must have an idea of how useful docker containers and images are in the long run and why it's getting so popular. We can run containers from these images or download new images from the Hub, and can run multiple containers of an image as required.
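For example, to move our Node image to another Pi (the tar file name is arbitrary):

docker save -o pi-node.tar khurram/pi:node
# copy pi-node.tar to the other machine (e.g. with scp), then there:
docker load -i pi-node.tar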

Monitoring Raspberry Pi

Before commissioning the Raspberry Pi it would be nice to set up some monitoring, so we can correlate any issue in the field with the device status. This becomes important especially for devices like Raspberry Pi that have limited resources. The simplest and easiest way is to set up SNMP; it's the protocol to collect and organize information about managed devices on IP networks. Given Raspbian is just another Linux, we can easily set up SNMPD, an SNMP daemon, and monitor the device remotely or even from within the device. To install SNMPD, issue the following commands, and once installed back up /etc/snmp/snmpd.conf

$ sudo apt-get install snmpd
$ sudo cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.original

Next, edit snmpd.conf; remove everything and punch in the following

agentAddress    127.0.0.1:161
rocommunity     public

Restart the snmpd using

$ sudo /etc/init.d/snmpd restart

With the above arrangement in place we are basically running SNMPD (the SNMP agent) on localhost UDP port 161 and have configured "public" as a read only community (password / key). We can now "query" SNMPD using the snmp utilities. To install them, issue

$ sudo apt-get install snmp

Once installed, issue the following command to query the free CPU percentage via snmp and we will have output like this

$ snmpget -v 1 -c public localhost .1.3.6.1.4.1.2021.11.11.0
iso.3.6.1.4.1.2021.11.11.0 = INTEGER: 90

.1.3.6.1.4.1… is an OID; there are well known OIDs (Object Identifiers), like this one for free CPU; there are other OIDs for similar things, some of which we will use later.

Next we want to "expose" the CPU temperature through SNMP; by default it's not there. Given device IO in Linux is done through files, and SNMPD has an option to get data by running a script and expose it through an additional OID, let's make a script for the CPU temperature

$ nano snmp-cpu-temp.sh

Punch in the following

#!/bin/bash
if [ "$1" = "-g" ]
then
        echo .1.3.6.1.2.1.25.1.8
        echo gauge
        cat /sys/class/thermal/thermal_zone0/temp
fi
exit 0

Make the script executable and run it with -g

$ chmod +x snmp-cpu-temp.sh
$ ./snmp-cpu-temp.sh -g
.1.3.6.1.2.1.25.1.8
gauge
49768

The temperature is 49.768 degrees Celsius; let's edit snmpd.conf ($ sudo nano /etc/snmp/snmpd.conf) to add this script; make it look like this (the pass line is the new part)

agentAddress    127.0.0.1:161
rocommunity     public
pass            .1.3.6.1.2.1.25.1.8 /bin/sh /home/pi/snmp-cpu-temp.sh

Restart snmpd and query the OID using snmpget and we will have the value

$ sudo /etc/init.d/snmpd restart
[ ok ] Restarting snmpd (via systemctl): snmpd.service.
$ snmpget -v 1 -c public localhost .1.3.6.1.2.1.25.1.8
iso.3.6.1.2.1.25.1.8 = Gauge32: 50458

As we already have NGINX installed we can easily set up MRTG; it's a lightweight and widely used monitoring tool that generates graphs using data gathered over SNMP, which we can host in NGINX and view remotely. To install, issue this command:

$ sudo apt-get install mrtg

MRTG comes with helper utilities like CFGMAKER and INDEXMAKER; we could use cfgmaker to make mrtg.cfg (in /etc), but given we also want to include our own additional OID, let's make the mrtg.cfg ourselves; take a backup of the original /etc/mrtg.cfg, remove everything and punch in the following

WorkDir: /var/www/mrtg
EnableIPv6: no
LoadMIBs: /usr/share/snmp/mibs/UCD-SNMP-MIB.txt

Target[CPU]: 100 - .1.3.6.1.4.1.2021.11.11.0&.1.3.6.1.4.1.2021.11.11.0:public@localhost
Options[CPU]: integer, gauge, nopercent, growright, unknaszero, noo
MaxBytes[CPU]: 100
YLegend[CPU]: CPU %
ShortLegend[CPU]: %
LegendI[CPU]: CPU
Legend1[CPU]: CPU usage
Title[CPU]: Raspberry Pi CPU load
PageTop[CPU]: <H1>Raspberry Pi - CPU load</H1>

Target[Memory]: .1.3.6.1.2.1.25.2.3.1.6.1&.1.3.6.1.2.1.25.2.3.1.6.3:public@localhost
Options[Memory]: integer, gauge, nopercent, growright, unknaszero, noo
MaxBytes[Memory]: 100524288
YLegend[Memory]: Mem - 1K pages
Factor[Memory]: 1024
ShortLegend[Memory]: B
LegendI[Memory]: Physical
LegendO[Memory]: Virtual
Legend1[Memory]: Physical
Legend2[Memory]: Virtual Memory
Title[Memory]: Raspberry Pi Memory Usage
PageTop[Memory]: <H1>Raspberry Pi - Memory Usage</H1>

Target[CPU-temp]: .1.3.6.1.2.1.25.1.7.0&.1.3.6.1.2.1.25.1.8:public@localhost
Options[CPU-temp]: integer, gauge, nopercent, growright, unknaszero, noi
Factor[CPU-temp]: 0.001
MaxBytes[CPU-temp]: 100000
Title[CPU-temp]: CPU temperature on Raspberry Pi
YLegend[CPU-temp]: Temperature °C
ShortLegend[CPU-temp]: °C
Legend2[CPU-temp]: CPU temperature in °C
LegendO[CPU-temp]: CPU temperature
PageTop[CPU-temp]: <H1>Raspberry Pi - CPU Temperature</H1>

Target[Ethernet]: 2:public@localhost
MaxBytes[Ethernet]: 12500000
Title[Ethernet]: Raspberry Pi Ethernet Traffic Usage
PageTop[Ethernet]: <H1>Raspberry Pi - Ethernet Traffic Usage</H1>

mrtg has an option to run as a daemon; it will poll the assigned SNMP targets and make graphs periodically. Next we need a script for /etc/init.d through which we can not only run mrtg as a daemon but also have it start across system reboots. We can google the internet; people out there have already made such scripts; here is one that you can save as /etc/init.d/mrtg ($ sudo nano /etc/init.d/mrtg)

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON="/usr/bin/mrtg"
PARAM=" --user=root /etc/mrtg.cfg --logging /var/log/mrtg.log"
NAME="MRTG"
DESC="Multi Router Traffic Grapher Daemon"

test -f $DAEMON || exit 0
set -e
case "$1" in
start)
        echo -n "Starting $DESC: "
        env LANG=C $DAEMON $PARAM
        echo "$NAME."
        ;;
stop)
        echo -n "Stopping $DESC: "
        killall -9 mrtg
        echo "$NAME."
        ;;
restart|force-reload)
        echo -n "Restarting $DESC: "
        killall -9 mrtg
        sleep 1
        env LANG=C $DAEMON $PARAM
        echo "$NAME."
        ;;
*)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|restart|force-reload}"
        exit 1
        ;;
esac
exit 0

Once saved, make it executable, create the directory /var/www/mrtg where it will store the graphs, and start the daemon. It will give a few warnings; that's OK when running it the first time.

$ sudo chmod +x /etc/init.d/mrtg
$ sudo mkdir /var/www/mrtg
$ sudo /etc/init.d/mrtg start

All we need now is to expose the graphs through NGINX; let's edit /etc/nginx/sites-available/default and make it look like this ($ sudo nano /etc/nginx/sites-available/default); the two location blocks are the new part

server {
        listen 80 default_server;
        listen [::]:80 default_server;
        index index.html index.htm index.nginx-debian.html;
        server_name _;
        location /mrtg {
                alias /var/www/mrtg;
        }
        location /node {
                proxy_pass http://localhost:3000;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

Create an index.html using mrtg's indexmaker utility and restart nginx. While using indexmaker it might give a permission error; use sudo -i and log out when done

$ sudo indexmaker /etc/mrtg.cfg >> /var/www/mrtg/index.html
-bash: /var/www/mrtg/index.html: Permission denied
$ sudo -i
# indexmaker /etc/mrtg.cfg >> /var/www/mrtg/index.html
# service nginx restart
# logout

Check the graphs at http://ip/mrtg; give it a while to collect some data


Go ahead and click the graph to view the details!

With this in place we can deploy our applications and send our Raspberries into the field, and once they are there we can remotely check how they are performing! We can expose SNMP to some central monitoring system on the same lines, where sys-admins can keep an eye on them, if such an arrangement is in place!

Happy coding and deployments!

Staging Node Application on Raspberry Pi

Staging Node Application

To make things interesting, let's test our Node application on a Raspberry Pi running Raspbian. Raspbian, just like Ubuntu, is based on Debian, so the learnings from the first part can be applied. Raspberry Pi is interesting due to its low cost, credit card size and the Raspbian OS; it can provide PC-like computing in the field or workplace needing very little power, and this enables lots of new interesting possibilities. PS: Raspbian OS is one option; we can try / use other OSes on this little thing!

WP_20160517_11_09_37_Raw_LI

  • Raspberry Pi Model B+
  • You can power it with any of the conveniently available USB phone chargers or battery packs

Raspberry Pi uses an SD card for its storage, and knowing a few tricks can help us. Always go for a Class 10 card to get the best I/O performance. Raspbian makes two partitions; one is readable in Windows but the other is not, as it's an EXT4 partition. We can use the Ext2Explore or Linux Reader utilities to read the contents of the Linux partition; Ext2Explore is better, as it's just a single EXE needing no installation. Unfortunately both provide read-only access, and can be used in scenarios where you want to copy/backup the contents of some files from the SD card (say /etc/someconfig). Paragon ExtFS for Windows has the necessary drivers and interface through which you can mount these EXT* partitions in Windows, similar to USB drives, and read/write files. This can be useful when you need to write some configuration (say defining a static IP)

Raspberry Pi uses HDMI / video out, and you don't always have the luxury of attaching a display. If we install Raspbian in headless / GUI-less mode, the default configuration is good enough that we don't need any display; we can burn the “image” to the SD card, and on boot it installs itself automatically and gets an IP from DHCP. If we connect it via Ethernet to a typical Wi-Fi router (with Ethernet ports), we can get its IP from the router interface and there can even give a static IP to the Raspberry, similar to how we did in the first part. Once we know the IP, we can SSH in; the default login/password is pi/raspberry (we should of course change it straight away using $ passwd)

image

By default the time zone is set to UTC; to change it use $ sudo dpkg-reconfigure tzdata. Raspbian Lite doesn't come with the Git tools, so we need to install them as well using:

$ sudo apt-get update
$ sudo apt-get install git

All the remaining steps are the same as in Part 1, Staging Node Application

Summary of Part 1 Staging Node Application

  • Create ~/hello and ~/hello.git
  • $ git init --bare in ~/hello.git
  • $ nano ~/hello.git/hooks/post-receive and punch in
  • #!/bin/sh
    GIT_WORK_TREE=/home/pi/hello git checkout -f

  • If node is not installed; $ sudo apt-get install -y nodejs and $ sudo apt-get install -y npm
  • Install PM2; $ sudo npm install pm2 -g; make a node link for nodejs as PM2 expects node; $ sudo ln -s /usr/bin/nodejs /usr/bin/node
  • Make a simple hello world js at the client and push it to the server's ~/hello.git; it automatically gets available at ~/hello due to the git hook
  • Start it with PM2 $ pm2 start hello.js
  • Add pm2 restart hello into git hooks/post-receive so it restarts the app

At the development machine; we can add multiple “Remote”s to our source code folder. To make our code run on a server with any IP, we can give 0.0.0.0 as the IP, or completely omit it and just give the port; doing so, our Node application will listen on all the IPs of the machine (see the sketch below)

image
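
In case the screenshot is hard to read, a minimal hello.js along these lines would do the trick; this is just a sketch (the greeting text is my own; port 3000 is the one used throughout these posts), showing that omitting the host in listen() binds to all IPs:

var http = require('http');
var os = require('os');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // including the hostname makes it easy to see which server answered
  res.end('Hello from ' + os.hostname() + '\n');
}).listen(3000); // no host given; Node listens on all IPs (same as 0.0.0.0)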

After changing the code and committing; TortoiseGit will give us the Push option, and from there we can select where to push the changes!

image

Selecting all will push our changes to both servers, and they will run accordingly. In the first part; we used PM2 to “daemonize” our Node application; on reboot, our app will not launch; we need to create a “startup script” for PM2. For this we need to first save the running pm2 configuration and then use the pm2 startup option. systemd is available on the latest versions of both Raspbian and Ubuntu, so we can use pm2 startup systemd

pi@raspberrypi:~/hello $ pm2 save
[PM2] Saving current process list...
[PM2] Successfully saved in /home/pi/.pm2/dump.pm2
pi@raspberrypi:~/hello $ sudo pm2 startup systemd -u pi

                        -------------

   Looking for a complete monitoring and management tool for PM2?
    _                             _        _            _
   | | _____ _   _ _ __ ___   ___| |_ _ __(_) ___ ___  (_) ___
   | |/ / _ \ | | | '_ ` _ \ / _ \ __| '__| |/ __/ __| | |/ _ \
   |   <  __/ |_| | | | | | |  __/ |_| |  | | (__\__ \_| | (_) |
   |_|\_\___|\__, |_| |_| |_|\___|\__|_|  |_|\___|___(_)_|\___/
             |___/

                          Features

                   - Real Time Dashboard
                   - CPU/Memory monitoring
                   - HTTP monitoring
                   - Event notification
                   - Custom value monitoring
                   - Real Time log display

                          Checkout

                   https://keymetrics.io/

                        -------------

[PM2] Spawning PM2 daemon
[PM2] PM2 Successfully daemonized
[PM2] Generating system init script in /etc/systemd/system/pm2.service
[PM2] Making script booting at startup...
[PM2] -systemd- Using the command:
      su pi -c "pm2 dump && pm2 kill" && su root -c "systemctl daemon-reload && systemctl enable pm2 && systemctl start pm2"
Created symlink from /etc/systemd/system/multi-user.target.wants/pm2.service to /etc/systemd/system/pm2.service.
[PM2] Saving current process list...
[PM2] Successfully saved in /home/pi/.pm2/dump.pm2
[PM2] Stopping PM2...
[PM2] Applying action deleteProcessId on app [all](ids: 0)
[PM2] [hello](0) ✓
[PM2] All processes have been stopped and deleted
[PM2] PM2 stopped
[PM2] Done.

Once done; we can issue $ sudo shutdown -r at both machines to reboot them, and once they are back we can check whether pm2 has started our app by issuing $ pm2 list; if so, try them out from the client / development machine

image

 

So far we have been exposing the Node http server as-is; but in production, we should have a proper web server serving the requests and Node should remain internal. Let's install NGINX; a popular web and reverse proxy server; once installed, edit /etc/nginx/sites-available/default to set up our application in nginx

$ sudo apt-get install nginx
$ sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.original
$ sudo nano /etc/nginx/sites-available/default

Delete everything and punch in the following to reverse proxy our http://localhost:3000 application at http://ip/node

server {
        listen 80 default_server;
        listen [::]:80 default_server;
        index index.html index.htm;
        server_name _;
        location /node {
                proxy_pass http://localhost:3000;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }
}

Next restart the nginx service

$ sudo /etc/init.d/nginx restart

We can install and configure nginx the same way on Raspbian as well; once it's installed, we will have our application at the respective http://ip/node

image

We can now update the Node code to listen on just localhost (127.0.0.1) so only nginx can reach it; update the code, commit, and push to both, and the applications will restart themselves with our git hooks in place!
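
The change is just the listen() call; a sketch, assuming a hello.js like the one above:

var http = require('http');

http.createServer(function (req, res) {
  res.end('Hello via nginx\n'); // response text is illustrative
}).listen(3000, '127.0.0.1'); // bound to loopback; only nginx on the same box can reach it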

Staging Node Application

Staging Node Application

Node applications are “usually” deployed on Linux / Unix environments; if you have never been exposed to these environments, and that's one of the challenges due to which you have been avoiding Node, then let's take some time out and get our hands dirty a little bit. We will need a Linux server to stage our app; I will be installing Ubuntu Server in a VM that will be connected to a typical Wi-Fi router where a DHCP server is already in place. We can reserve an IP in the DHCP server bound to our VM's MAC address; this way the server will have a static IP and we can SSH / browse to it conveniently. Let's make a simple hello.js while our VM gets ready!

image
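
The screenshot carries the actual file; it was something along these lines — a sketch, where the response text is an assumption (127.0.0.1 and port 3000 match what the post uses later):

var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
});

// bound to loopback for now; we will revisit this binding when staging on real servers
server.listen(3000, '127.0.0.1');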

  • Notice I have initialized Git as well

Ubuntu Server already has the Git tools on a clean installation. On the server I created two folders: hello.git, which will be our git repository, and the hello folder, which will be the “root” of our Node application. When we create the git repository using $ git init --bare, it creates the folder structure; the hooks folder is of interest to us, because we need to create a “post-receive” script there that will “update” the hello folder @ the server whenever we “push” code changes from the “client” (development machine). The script looks like this

#!/bin/sh
GIT_WORK_TREE=/home/khurram/hello git checkout -f

  • We also need to make this script executable using $ chmod +x post-receive

For the “client” (development machine) I already have TortoiseGit; it's a fantastic Windows Explorer extension. Many folks like the command prompt, but I am aged now and have other things to remember :) In TortoiseGit, we can add “Remote”s to the folder from the TortoiseGit > Settings context menu; I added my hello.git repository with the URL ssh://192.168.0.100/home/khurram/hello.git

image

Once this remote URL is added, we can “push” to it; in TortoiseGit, after committing, it gives us the Push option, and doing that pushes the code to the server, where our post-receive script runs and checks out the repository into the GIT_WORK_TREE folder (our hello folder)

image

Now we are able to push our code to the server repository, from where it automatically gets updated into the “Node Application Root folder” we have designated. Let's next install Node onto our server; on Ubuntu, we need to run the following commands

$ sudo apt-get install -y nodejs
$ sudo apt-get install -y npm

image

Once these two are installed; we can run our application using

$ nodejs hello.js

The nodejs process will run only as long as we keep the SSH session open; we want to run Node as a daemon so it continues to run even when we are not SSHed in, and for this we need PM2, which can be installed with

$ sudo npm install pm2 -g

pm2 expects “node” as the binary name while we have a nodejs binary; we can make a node link to nodejs so pm2 doesn't complain. To start our application, we issue $ pm2 start hello.js, which gives a fancy output

khurram@ubuntu:~/hello$ whereis nodejs
nodejs: /usr/bin/nodejs /usr/lib/nodejs /usr/include/nodejs /usr/share/nodejs /usr/share/man/man1/nodejs.1.gz
khurram@ubuntu:~/hello$ cd /usr/bin
khurram@ubuntu:/usr/bin$ sudo ln -s nodejs node
khurram@ubuntu:/usr/bin$ ls -al node
lrwxrwxrwx 1 root root 6 May 18 14:15 node -> nodejs
khurram@ubuntu:/usr/bin$ cd ~khurram/hello
khurram@ubuntu:~/hello$ pm2 start hello.js

                        -------------

   Looking for a complete monitoring and management tool for PM2?
    _                             _        _            _
   | | _____ _   _ _ __ ___   ___| |_ _ __(_) ___ ___  (_) ___
   | |/ / _ \ | | | '_ ` _ \ / _ \ __| '__| |/ __/ __| | |/ _ \
   |   <  __/ |_| | | | | | |  __/ |_| |  | | (__\__ \_| | (_) |
   |_|\_\___|\__, |_| |_| |_|\___|\__|_|  |_|\___|___(_)_|\___/
             |___/

                          Features

                   - Real Time Dashboard
                   - CPU/Memory monitoring
                   - HTTP monitoring
                   - Event notification
                   - Custom value monitoring
                   - Real Time log display

                          Checkout

                   https://keymetrics.io/

                        -------------

[PM2] Spawning PM2 daemon
[PM2] PM2 Successfully daemonized
[PM2] Starting hello.js in fork_mode (1 instance)
[PM2] Done.
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid  │ status │ restart │ uptime │ memory      │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ hello    │ 0  │ fork │ 9335 │ online │ 0       │ 0s     │ 20.238 MB   │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app

If you have been keen, you noticed that we ran our http server on 127.0.0.1; we need to run it on the static IP that our server has, so that we can test the running application from the client. Let's also update the hello.git/hooks/post-receive script and add pm2 restart hello (hello is the app name)

#!/bin/sh
GIT_WORK_TREE=/home/khurram/hello git checkout -f
pm2 restart hello

With the above arrangement in place; our application will restart itself whenever new code is pushed. Let's go ahead and update the code at the development machine and push it to the server through git, which will automatically update the application root (~/hello) and restart the application. If we commit + push the code changes through TortoiseGit, it even shows the output of our post-receive script

image

Posted by khurram | 0 Comments

MongoDB and Mongoose

Let's complete our RESTful API with Express using the database; MongoDB is the widely used database engine in the Node world given it's also a part of the MEAN stack! Once it's installed, you run its “daemon” using >mongod from installation-folder/bin; the command prompt needs to remain open; there is also a way to set up MongoDB as a Windows service. You will also need to create the c:\data\db folder where MongoDB stores the data. >mongo is its command-line shell, from where you can query the database engine. To use MongoDB from Express, we need to install mongoose, which gives us an API within Node to access and work with MongoDB

image

  • To install mongoose; we are specifying --save so its dependency gets added into our package.json
  • Given our Express based API is for an Ember front-end, we also need the “after” module; more details ahead; so I did npm install after --save for that as well

“Mongoose is a MongoDB object modeling tool designed to work in an asynchronous environment”; to use it, we create model classes using its “Schema API”; think of a model as a table from the relational database world. In the Express app, create a models/model.js file that will contain these Schema model definitions, add mongoose and our model file in app.js, and using Mongoose's connect() API open a connection to MongoDB that will get used across the app!

image

  • Notice that using the Schema API; we are defining the “types” of our model using Mongoose's SchemaTypes
  • Notice the connection string being used in the connect() call; the name of our “database” is invoices
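
The screenshot has the actual code; to give an idea, a models/model.js in that spirit could look like the sketch below, where the field names are assumptions (only the Invoice / Item models and the invoices database come from the post):

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

// illustrative fields; the real schema is in the screenshot above
var invoiceSchema = new Schema({
  title: String,
  date: Date
});

var itemSchema = new Schema({
  invoiceId: Schema.Types.ObjectId, // assumed link back to the parent invoice
  name: String,
  price: Number
});

mongoose.model('Invoice', invoiceSchema);
mongoose.model('Item', itemSchema);

And in app.js, something along these lines opens the shared connection:

var mongoose = require('mongoose');
require('./models/model'); // registers the schemas
mongoose.connect('mongodb://localhost/invoices'); // our database is named invoices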

With the above arrangements in place; we can start using MongoDB. We could create the “initial” data in the database from the shell, but it's convenient to write that code in our Express app instead, so we can call it when/where required; let's add a new “init” function in our RESTful API for that

image

  • Notice the use of the connection.db.dropDatabase() call; it's done so that even if we call the “init” API repeatedly, our database remains in the “known initial” state
  • Notice how the “Invoice” and “Item” variables are declared using Mongoose's API and used later for talking to MongoDB; save() calls in this case. save() also takes a callback, but for our simple case it's not being used
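
Along those lines, a sketch of such an init function; the route path, seed values and field names are assumptions (the actual code is in the screenshot):

var mongoose = require('mongoose');
var router = require('express').Router();
var Invoice = mongoose.model('Invoice');
var Item = mongoose.model('Item');

router.get('/init', function (req, res) {
  // drop the database first so repeated calls always leave the known initial state
  mongoose.connection.db.dropDatabase(function () {
    var invoice = new Invoice({ title: 'First Invoice', date: new Date() });
    invoice.save(); // save() takes an optional callback; not used for this simple case

    var item = new Item({ invoiceId: invoice._id, name: 'Sample Item', price: 100 });
    item.save();

    res.send('initialized');
  });
});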

Let's finish off by writing the code for the remaining API functions; /invoices is interesting, as we are creating the API for an Ember app and its default adapter expects the data in the JSONAPI.org format, and for that we need to go an extra mile. For this we need “after”, which synchronizes the nested queries and their callbacks; we use it to execute all the queries and wait for their respective callbacks to complete, so that “after” all of these our data is ready to send to the client!

image
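
A sketch of how “after” pulls this off for GET /invoices; the invoiceId link field and the attribute names are assumptions, while the { data: [...] } envelope is the JSONAPI.org shape Ember's default adapter expects:

var after = require('after');
var mongoose = require('mongoose');
var router = require('express').Router();
var Invoice = mongoose.model('Invoice');
var Item = mongoose.model('Item');

router.get('/invoices', function (req, res) {
  Invoice.find(function (err, invoices) {
    if (err || !invoices.length) return res.json({ data: [] });

    var data = [];
    // this callback fires only "after" every per-invoice query below has called done()
    var done = after(invoices.length, function () {
      res.json({ data: data }); // JSONAPI.org envelope
    });

    invoices.forEach(function (invoice) {
      Item.find({ invoiceId: invoice._id }, function (err, items) {
        data.push({
          type: 'invoices',
          id: invoice._id,
          attributes: { title: invoice.title, date: invoice.date } // child items would be folded in here too
        });
        done();
      });
    });
  });
});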

Given we are sending all the invoices and their child items' data in a single go, we don't need the GET of /invoices/ID or the GET of /items/ID

image

  • Notice how sentItem is retrieved from the request's body.data; we don't need to de-serialize it, as that's already taken care of by the bodyParser middleware we have configured
  • Notice we are using Mongoose's findOne() API to query the item from the database and then updating its parameters and saving it back
  • Note MongoDB stores the primary key in the _id field while the Ember front-end sends the primary key of the object in the id field
  • Notice also that sentItem from the Ember app has the detailed attributes in item.attributes, as per the JSONAPI.org format
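
Putting those observations together, a sketch of the item update handler; the route path and the name/price fields are assumptions:

var mongoose = require('mongoose');
var router = require('express').Router();
var Item = mongoose.model('Item');

router.put('/items/:id', function (req, res) {
  var sentItem = req.body.data; // already deserialized by the bodyParser middleware

  // Ember sends the primary key as id; MongoDB keeps it in _id
  Item.findOne({ _id: sentItem.id }, function (err, item) {
    if (err || !item) return res.status(404).end();

    // the detailed attributes arrive under attributes, per the JSONAPI.org format
    item.name = sentItem.attributes.name;
    item.price = sentItem.attributes.price;

    item.save(function (err) {
      if (err) return res.status(500).end();
      res.json({ data: sentItem });
    });
  });
});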

Further Readings

Bonus

Known Issues

  • Ember CLI based projects come with live reload; most code changes made while the Ember server is running are picked up. This is not the case with an Express generator created project; you will need to restart the server yourself if you make any changes in the code
Posted by khurram | 0 Comments
Filed under: ,