Welcome to weblogs.com.pk


TICK Series

Time Series Databases

Telegraf is a daemon written in Go for collecting, processing, aggregating and writing metrics. Through its input plugins it offers integrations with a variety of metric sources; it can pull metrics from third-party APIs and can even listen for metrics via StatsD and Kafka consumer services. It then inserts the collected metrics into InfluxDB, and through its output plugins can also push them to Graphite, Kafka, MQTT and many others. Processor plugins let us transform, decorate and filter the collected metrics, and aggregator plugins let us aggregate them. There are over a hundred plugins across these four types, and one can write one's own; these plugins make Telegraf very extensible.

Visit https://github.com/influxdata/telegraf for the complete list of plugins. For this post I am going to use the SNMP plugin; we will be polling temperature and network interface traffic from Mikrotik Routerboards. Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. SNMP uses an extensible design that allows applications to define their own hierarchies. These hierarchies are described by a management information base (MIB). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. We will not be using MIBs, so we can use the official Telegraf Docker image as is; otherwise one needs to install SNMP MIBs in the container. The Telegraf SNMP plugin supports both SNMP-GET and SNMP-WALK. We can retrieve one or more values using SNMP-GET and can use these retrieved values as tags with GET (field) or WALK (table) metrics. We will be retrieving the device host name and using it as a tag with the temperature as well as the interface metrics.

Mikrotik RouterOS has two temperature OIDs; one for the sensor in its chassis and the other for its CPU temperature. The interface bandwidth information can be retrieved by walking the well-known OIDs for interface names and their bytes-in and bytes-out counters. These SNMP configurations go into the telegraf.conf file's [[inputs.snmp]] sections. In the configuration file's outputs section we specify where we want to push the retrieved metrics. Let's create a telegraf.conf file and add a Telegraf service to the Docker Compose file we created in the InfluxDB post.


  • We are retrieving values from multiple SNMP agents every 60 seconds
  • The measurement will be named “rb” for the fields and “rb-interfaces” for the table
  • “rb” measurement will have two temperature values
  • “rb-interfaces” will have the in/out counter values; the retrieved interface name will be used as a tag
  • The hostname of the device will be used as a tag in both measurements
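Putting the points above together, a telegraf.conf along these lines would do it. This is a sketch: the agent addresses, community string and MikroTik OIDs are illustrative assumptions, so verify the OIDs against your RouterOS version.

```toml
[agent]
  interval = "60s"                       # poll each agent every 60 seconds

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"

[[inputs.snmp]]
  agents = ["192.168.1.1:161", "192.168.1.2:161"]   # example Routerboard IPs
  version = 2
  community = "public"
  name = "rb"                            # the "rb" measurement

  [[inputs.snmp.field]]
    name = "hostname"
    oid = "1.3.6.1.2.1.1.5.0"            # sysName; used as tag on both measurements
    is_tag = true

  [[inputs.snmp.field]]
    name = "temperature"
    oid = "1.3.6.1.4.1.14988.1.1.3.10.0" # commonly cited MikroTik chassis temperature

  [[inputs.snmp.field]]
    name = "cpu_temperature"
    oid = "1.3.6.1.4.1.14988.1.1.3.11.0" # commonly cited MikroTik CPU temperature

  [[inputs.snmp.table]]
    name = "rb-interfaces"               # the walked table

    [[inputs.snmp.table.field]]
      name = "ifName"
      oid = "1.3.6.1.2.1.31.1.1.1.1"     # IF-MIB::ifName; used as tag
      is_tag = true

    [[inputs.snmp.table.field]]
      name = "bytes_in"
      oid = "1.3.6.1.2.1.31.1.1.1.6"     # ifHCInOctets

    [[inputs.snmp.table.field]]
      name = "bytes_out"
      oid = "1.3.6.1.2.1.31.1.1.1.10"    # ifHCOutOctets
```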

Running docker-compose up brings our setup online, and after a while we should be able to see the measurements that Telegraf is pushing to InfluxDB's telegraf database in Chronograf. If you are not following the “TICK Series” posts and have landed on this post directly, please refer to the InfluxDB post for details on InfluxDB and Chronograf.


  • Notice that Telegraf has also added an agent_host tag with the IP of the SNMP agent
  • The values need to be divided by 10 to get the temperature in Celsius; RouterOS does this so it can convey the fraction part using the INTEGER data type through SNMP

In the Docker Compose file we exposed/mapped the InfluxDB HTTP port, so we can run InfluxDB queries from the host directly using curl etc. to debug and see what's going on.
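As a sketch of the curl side, the InfluxDB 1.x query URL is easy to build and inspect; the host, port, and database name below are assumptions, not taken verbatim from the compose file.

```python
# Build the InfluxDB 1.x HTTP query URL that curl would hit.
from urllib.parse import urlencode

def influx_query_url(host: str, db: str, query: str) -> str:
    # InfluxDB 1.x answers InfluxQL at /query?db=<database>&q=<query>
    return f"http://{host}:8086/query?" + urlencode({"db": db, "q": query})

url = influx_query_url("localhost", "telegraf", "SHOW MEASUREMENTS")
print(url)
```

The printed URL can be used as-is: `curl "http://localhost:8086/query?db=telegraf&q=SHOW+MEASUREMENTS"`.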


Interestingly, Telegraf has a Prometheus Client output plugin with which we can use it with Prometheus. Prometheus is based on a pull model; this plugin starts an HTTP listener where it publishes the retrieved metrics and from where Prometheus can pull. To set it up, let's configure the plugin in telegraf.conf's outputs section and bring in Prometheus, configuring it to poll from the plugin's endpoint. When we bring things online, Telegraf will push the metrics to InfluxDB as well as make them available at the Prometheus Client endpoint, from where Prometheus will start polling accordingly.
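A sketch of both sides: in telegraf.conf an `[[outputs.prometheus_client]]` section with `listen = ":9273"` (the port is an assumption), and in prometheus.yml a job scraping that endpoint:

```yaml
scrape_configs:
  - job_name: telegraf
    static_configs:
      - targets: ['telegraf:9273']   # the Telegraf service name in the compose file
```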


Once the data is in Prometheus, we can even bring in Grafana and start making Grafana dashboards.

  • Refer to Prometheus blog post for more information


I like Telegraf with its SNMP plugin more than Prometheus' SNMP Exporter, and having Telegraf in the environment opens up more possibilities. Grafana also supports InfluxDB, so we can have dashboards using both Prometheus and InfluxDB as time series data sources. While graphing bandwidth COUNTERs from SNMP in Grafana, you will need to use Prometheus' rate() function and InfluxDB's derivative() function.

  • The rate() function calculates the per-second average rate of increase of the time series in the range vector given as parameter; for example, a [5m] range vector gives the average rate of increase over 5 minutes. Counter resets are adjusted automatically
  • The derivative() function returns the rate of change between subsequent field values and converts the result into the rate of change per unit, given as its second parameter; since we need a per-second rate for bandwidth, we pass 1s as the second parameter
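As a sketch of the two queries (the field, measurement and tag names below are assumptions following the rb-interfaces measurement described earlier): in Prometheus one would graph something like `rate(bytes_in{job="snmp"}[5m])`, while the InfluxQL equivalent uses derivative():

```sql
-- Hypothetical InfluxQL: per-second rate of the interface in-counter,
-- averaged per 30s bucket, grouped per interface
SELECT derivative(mean("bytes_in"), 1s)
FROM "rb-interfaces"
WHERE time > now() - 1h
GROUP BY time(30s), "ifName"
```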

The InfluxDB derivative() function seems closer to how classic RRD/MRTG graphs bandwidth counters.


  • Note that Grafana offers a rich InfluxDB query editor; and if you want, you can switch to text mode where you can write InfluxDB queries directly
  • Note that writing InfluxDB queries can become cumbersome for system administrators; they will likely prefer Prometheus, while developers might find InfluxDB more powerful and feel comfortable with its RDBMS-like queries

If you want, you can remove Chronograf and even InfluxDB and use Telegraf directly with a Prometheus / Grafana setup, or you can use Grafana with InfluxDB and not use Chronograf for dashboards. It's totally your preference!


Time Series Databases

InfluxDB is another open source time series database, written in Go. It has no external dependencies. Each point has a measurement name, a timestamp, a fieldset (one or more key-value pairs holding the actual values) and a tagset (key-value pairs that get indexed). Points sharing a measurement and tagset form a series, and multiple series are grouped together under the measurement name. Each database has a retention policy that defines when data gets deleted; downsampling is done with “Continuous Queries” that run periodically (and automatically, by the database engine), storing results in a target measurement. InfluxDB has an SQL-like query engine with built-in time-centric functions for querying data. It can listen on HTTP, TCP, and UDP, where it accepts data using its line protocol, which is quite similar to Graphite's format.
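For instance, a Continuous Query that downsamples a raw measurement into 5-minute means might look like this; the database and measurement names are made up for illustration:

```sql
-- Runs automatically; writes 5-minute means into a new measurement
CREATE CONTINUOUS QUERY "cq_temp_5m" ON "iot"
BEGIN
  SELECT mean("value") INTO "temperature_5m" FROM "temperature" GROUP BY time(5m), *
END
```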

InfluxDB also has a commercial option: a distributed storage cluster giving horizontal scalability, with storage and queries handled by many nodes at once. The data gets sharded across the cluster nodes and is eventually consistent. Queries against the cluster run somewhat like a MapReduce job.

InfluxDB vs Prometheus

InfluxDB has nanosecond resolution while Prometheus has millisecond. InfluxDB supports int64, float64, bool, and string data types, using different compression schemes for each, while Prometheus only supports float64. Prometheus' approach to high availability is to run multiple Prometheus nodes in parallel with no eventual consistency; its Alertmanager then handles deduplication and grouping. InfluxDB writes are durable, while Prometheus buffers writes in memory and flushes them periodically (by default every 5 minutes). InfluxDB has Continuous Queries and Prometheus has Recording Rules. Both do data compression and offer extensive integrations, including with each other. Both offer hooks and APIs to extend them further.

Prometheus is simpler, more performant and better suited for metrics. Its simpler storage model, simpler query language, and alerting and notification functionality suit system administrators well. That said, Prometheus being a PULL model, the server needs access to the nodes to retrieve the metrics, which might not suit scenarios like IoT where devices are behind a WiFi gateway, or polling metrics from office machines that are behind NAT. Prometheus doesn't allow recording past data, say when you are extracting time series data from some hardware logger, but InfluxDB lets you record such data. In situations where Prometheus does not fulfill your requirements, or where you need RDBMS-like functionality against the time series data, we can use InfluxDB.

IoT Example

For this post I am using an Internet of Things (IoT) scenario. Let's revisit an old IoT post in which we used an ESP8266 to measure room temperature using a sensor and send the readings to the “ThingSpeak” service to view them over time in a chart. In this post we will try to remove the ThingSpeak dependency using the “TICK stack”, of which InfluxDB is an integral component. TICK is an open source time series platform for handling metrics and events; it consists of the Telegraf, InfluxDB, Chronograf, and Kapacitor open source projects, all written in Go. Chronograf is an administrative user interface and visualization engine. We need it to run InfluxQL, the SQL-like queries, against the data in InfluxDB. It also offers templates and libraries to build dashboards with real-time visualizations of time series data, like Grafana.

Some might argue that we can use Prometheus with Pushgateway; the IoT devices can push their metrics to a Pushgateway hosted at a known location. It's a valid argument and yes, we can use it instead; but the Pushgateway acts as a buffer and Prometheus polls the metrics off the gateway periodically, so the actual sampling times of the data will not be reflected in the time series database. Prometheus client libraries, when used with Pushgateway, usually send all the metrics even if only one value has changed and needs the push; this increases network traffic and load on the sender, not something good for an IoT scenario. Lastly, Pushgateway remembers all the metrics even if they are no longer being pushed by the client. So, for instance, if an IoT device sends its IP address or host name in a metric, it gets remembered; and next time, if the device gets a different IP (usually the case over WiFi / behind NAT), it gets remembered as a separate metric in Pushgateway, and since Prometheus is polling off the Pushgateway, it will keep recording the no-longer-required metrics as well. The Pushgateway has an option to group metrics and we can delete a whole group using its HTTP/Web API, but it's not ideal.

The most convenient way to spin up the TICK stack is by using Docker. Let's create a simple docker-compose.yml file having InfluxDB and Chronograf and spin it up. We can then access Chronograf and explore InfluxDB, where it has created an _internal database logging its own metrics.
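A minimal compose file along these lines should do; the image tags and published ports are assumptions, pin them per your needs:

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"       # HTTP API, handy for curl from the host
    volumes:
      - influxdb-data:/var/lib/influxdb
  chronograf:
    image: chronograf
    ports:
      - "8888:8888"       # Chronograf UI
    depends_on:
      - influxdb
volumes:
  influxdb-data:
```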


Let's extract the default configuration file of InfluxDB from its Docker image first, to build upon our configuration.


Next, enable the InfluxDB UDP interface by adding a udp section in influxdb.conf; also map this UDP port to the Docker host and allow incoming UDP traffic to the mapped port in the host's firewall, so that our IoT device can send its metrics to the known IP/port of our Docker host.
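The udp section is a sketch like the following; the port and target database are example values:

```toml
# influxdb.conf -- accept line-protocol points over UDP
[[udp]]
  enabled = true
  bind-address = ":8089"   # also publish 8089/udp in the compose file
  database = "iot"         # points arriving on this socket land here
```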


Now for the IoT firmware in Arduino: we just need to use the WiFiUDP instance from WiFiUDP.h and send the metric data using the InfluxDB line protocol. If we want to name our measurement temperature, and want to send the device name, its local IP and the raw sensor value along with the calculated temperature, we need to send the following string in the UDP packet:

temperature,device=DEVICENAME,localIP=ITS-IP,sensorValue=S value=T

where S is the sensor value and T is the calculated temperature; our loop() function in Arduino will look something like this:
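Since the sketch itself is shown as a screenshot, here is a host-side Python stand-in for the same logic: format the line-protocol string above and fire it in a single UDP datagram. The device name, IP addresses and port are assumptions for illustration.

```python
import socket

def make_packet(device: str, local_ip: str, sensor_value: int, temperature: float) -> str:
    # temperature,device=DEVICENAME,localIP=ITS-IP,sensorValue=S value=T
    return (f"temperature,device={device},localIP={local_ip},"
            f"sensorValue={sensor_value} value={temperature:.2f}")

def send_packet(packet: str, host: str, port: int = 8089) -> None:
    # UDP is fire-and-forget, exactly like WiFiUDP on the ESP8266
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(packet.encode(), (host, port))

pkt = make_packet("esp8266-1", "192.168.1.50", 512, 23.75)
print(pkt)
```

Pointing `send_packet(pkt, "your-docker-host")` at the mapped UDP port lets you test the pipeline without the device.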


Bringing InfluxDB and Chronograf online, the data from the IoT device will start logging; we can view the temperature graph in Chronograf and export the raw data as CSV easily.


In the real world, Raspberry/Orange Pis can be used in remote cabinets with off-the-shelf / industrial-strength temperature and humidity sensors. The boards can be connected to the switches directly thanks to their Ethernet ports. These devices run full-fledged Linux; you can access them remotely and run administrative scripts and commands. The boards also have USB ports where you can connect the administrative/serial ports of deployed devices; no need to send someone with a laptop to physically connect in emergencies.


  • Pictures taken from the internet for reference
Posted by khurram | 0 Comments

Swarm and Prometheus II

Prometheus Series

Time Series Databases

Docker Swarm Series

In the last post we had our application running and being monitored in the Swarm cluster, but we have a few issues; in this post we will try to solve them.

The first issue is that our containers are not deployed in a balanced way; most of our containers are made to run on the Manager node because these containers need the “configuration files” that we are providing to them using “volumes” bindings. We could remove the placement constraints from our stack deploy compose file, but then these containers would not work, as the files would not be found on the worker nodes. This can be fixed either by using an absolute path as the source of these configuration files and copying them to that particular path on each node (manually or using some script/Git trigger), or by placing these files at some network location that is mounted at the specified source location on each node. Alternatively, we can make our own Docker images that already contain these configuration files; we would have to write Dockerfiles, build them and push them to the registry, similar to our application container, so each Swarm node can get the image when and where required.

Docker 17.06 onwards we have something called “Docker Configs”, with which we can keep these configuration files outside the container image. We create these configs using the Docker CLI or Compose files; the config files get uploaded to the Swarm Manager, which encodes and keeps them in its “store” and provides an HTTP API through which Swarm nodes can download/retrieve them. The Compose file format supports these configs, and we can remove the volume entries and replace them with config entries.
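A stack-file fragment using a config instead of a volume bind might look like this (the service and config names are placeholders); Docker Configs need compose file format 3.3 or later:

```yaml
version: "3.3"
services:
  prometheus:
    image: prom/prometheus
    configs:
      - source: prometheus_conf
        target: /etc/prometheus/prometheus.yml
configs:
  prometheus_conf:
    file: ./prometheus.yml   # read from the machine running `docker stack deploy`
```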

The second issue is that all the containers we need to expose so we can access them, like Prometheus and Grafana, need to be at some known location so we can reach them using a known node IP/DNS name. This restriction will continue to give us a Manager-heavy Swarm cluster; something that we don't want. Instead we want to keep the Manager lightweight and use the Workers to do the heavy lifting.

Given our containers are exposing HTTP services, we can set up a reverse proxy at a known location; usually the Manager node.

Let's rewrite the stack deploy compose file, replacing the volume entries with config entries and adding NGINX, a popular reverse proxy. We will have to write another config file for Nginx that we can pass using Docker Configs. Another benefit of using Docker Configs is that we no longer have to copy the YML and config files to the Swarm Manager; using docker-machine and pointing our environment at the appropriate Swarm Manager node, we can deploy our stack from a remote machine (a development box or some CI/CD system).

  • Docker Configs can be string or binary data up to 500 KB in size; for larger files it's better to create a custom image having the required configuration or content. Say you have a lot of files for Grafana dashboards; it's better to create a custom image, push it to the registry and use its image path
  • Docker Configs are not kept or transmitted in encrypted form; there is a similar feature called Docker Secrets that should be used for sensitive configuration, for things like database connection strings or the Grafana admin password in our project


Our docker-stack compose file will look like this; and we will end up with a balanced Swarm


For the Nginx reverse proxying we don't need anything special, given that Swarm does DNS-based service lookup; our service containers can be anywhere and the reverse proxy will be able to discover and proxy them straight away.
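A sketch of the Nginx config, passed in as a Docker Config; the upstream service names and ports are assumptions matching a typical stack:

```nginx
# Swarm's internal DNS resolves the service names below
server {
    listen 80;

    location /prometheus/ {
        proxy_pass http://prometheus:9090/;
    }

    location /grafana/ {
        proxy_pass http://grafana:3000/;
    }
}
```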


The third issue is that we are monitoring the nodes, but not the Docker Engine running on them; how many containers are running, etc. The Docker daemon exposes rich information and many solutions exist online; I linked one in the last post. There is also Google's cAdvisor, which is very popular in the Kubernetes community. cAdvisor exposes a web interface and exposes Prometheus metrics out of the box at /metrics.

Given that Prometheus is widely adopted and now officially a Cloud Native Computing Foundation (CNCF) project, the Docker daemon since v1.13 exposes its metrics for Prometheus. However, it's wrapped under the experimental flag, and sadly my Swarm setup doesn't allow setting the required flags for the Docker daemon. You can try it out with a standard Docker daemon by setting the experimental flag and setting the metrics endpoint with metrics-addr.
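On a standalone daemon this is done in /etc/docker/daemon.json; the listen address below is an example (9323 is the port used in Docker's documentation):

```json
{
  "experimental": true,
  "metrics-addr": "0.0.0.0:9323"
}
```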


You can then give the address of the machine running the Docker daemon and set up the Prometheus job accordingly. Grafana dashboards for the Docker daemon are also available that you can use and customize.


Posted by khurram | 1 Comments

Swarm and Prometheus

Prometheus Series

Time Series Databases

Docker Swarm Series

To monitor the Docker Swarm, and our application running in the cluster, with Prometheus, we can craft a v3 Compose file. Continuing from the previous post, for the Swarm we would like to have our Node Exporter running on all nodes. Unfortunately, host networking and the host process namespace are not supported when we do docker stack deploy onto the swarm. The Node Exporter (and the Docker image we are using) supports command-line arguments through which we can tell it that the required directories are someplace else; using Docker volume support we can map the host directories into the container and pass the mapped paths to Node Exporter accordingly.


  • We will have to expose the Node Exporter port
  • Note that we are deploying the container globally so each participating node gets its own container instance; we will end up having the node metrics at the http://swarm-node:9100 URL of each participating node
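The Node Exporter service in the stack file can be sketched like this; the flag names follow recent node_exporter releases (`--path.procfs`/`--path.sysfs`), so adjust for the image version you use:

```yaml
services:
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro          # map host dirs into the container...
      - /sys:/host/sys:ro
    command:
      - '--path.procfs=/host/proc'   # ...and point the exporter at them
      - '--path.sysfs=/host/sys'
    ports:
      - "9100:9100"
    deploy:
      mode: global                   # one instance per participating node
```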

If we have three nodes and we want friendly host names for them to use in the prometheus.yml job definition, we can use the extra_hosts section in our compose file. Further, if we want to access Prometheus, we need its container to get scheduled on some known node, so we have a http://some-known-name:port URL to reach Prometheus. We can do this using the constraints section in the Prometheus service section of the compose file.
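Both ideas can be sketched in the Prometheus service entry; the host names, IPs and the manager constraint are placeholders:

```yaml
services:
  prometheus:
    image: prom/prometheus
    extra_hosts:                  # friendly names usable in prometheus.yml
      - "swarm-node1:192.168.10.11"
      - "swarm-node2:192.168.10.12"
      - "swarm-node3:192.168.10.13"
    ports:
      - "9090:9090"
    deploy:
      placement:
        constraints:
          - node.role == manager  # pin to a known node
```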


In the previous Prometheus post we made a .NET 4 console app and ran it on the development box. For Docker Swarm we need something that can run in the cluster. We can code a .NET Core console app and use the prometheus-net client library to expose the metrics. Our app will be deployed in the cluster so it will have multiple instances running; each container instance will get its own IP address, and Prometheus labels the metrics with this information; but it's a good idea that we also include our own labels to identify which metric is coming from where.


  • I am using the machine, custom_gauge and custom_counter metrics from the previous post; however for the Swarm I have added name and os labels using the prometheus-net client library. These labels will be given the machine name and its OS version values

One important thing for deploying the custom Docker image on the Swarm is that it needs to be available or accessible on all Swarm nodes; this can easily be done by having a Docker registry. Docker now has a multi-stage build option, and with it we can avoid giving Docker registry access to the development box where we are building the Docker image, or having Jenkins (or a similar build environment). The Swarm node can compile and build the .NET application on its own using this multi-stage build; there exist separate SDK and runtime official Docker images.

  • For this to work, we need to upload all the required files to the Swarm node where we will initiate the multi-stage build. We can either SCP them or use Git (or something like it) for this
  • You can still go the traditional way, building the image on the development box and pushing from there to your Docker registry; or tar-zip it on the development box and import it on each node individually. Whichever way you choose, all the Swarm nodes need to have, or know a way to access, the image they are required to run
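A multi-stage Dockerfile for the console app might look like this; the image tags and project name are assumptions based on the era's official .NET Core images:

```dockerfile
# Build stage: has the full SDK
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet restore && dotnet publish -c Release -o out

# Final stage: runtime only, much smaller
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "NetCoreConsole.dll"]
```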

Now if we bring our stack up using docker stack deploy and deploy the Host Stats dashboard (see the previous Prometheus post), we can view and monitor the Node Exporter metrics of each Swarm node.


However, our custom application's dashboard is not behaving how we want. Our application is deployed and running on all nodes (we can verify this either from the Docker CLI or using Visualizer; see the first Docker Swarm post), but Prometheus is not retrieving metrics from all these nodes simultaneously; instead it is getting them from one node first, then from a second, and so on. This is happening because we had “netcoreconsole:8000” as the target entry in the job, and Docker Swarm is doing DNS round-robin load balancing. Note that the instance label that Prometheus is adding stays the same, but our custom label name is different for the three nodes.


Prometheus has service discovery options, and we can use Docker Swarm's service discovery support: the Swarm exposes a special tasks.service DNS entry that resolves to the IP addresses of all the associated containers of the service. Instead of a static_configs entry we can use a dns_sd_configs entry with this special DNS entry, and Prometheus will discover all the nodes and start retrieving the metrics.
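The job entry changes along these lines; the service name and port follow the netcoreconsole:8000 target mentioned above:

```yaml
scrape_configs:
  - job_name: netcoreconsole
    dns_sd_configs:
      - names: ['tasks.netcoreconsole']  # resolves to every task's IP
        type: A
        port: 8000
```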


We can confirm that it has discovered all the containers from the Service Discovery interface; graphing our metrics, we should now see values coming from all the participating nodes in parallel.


This should automatically get reflected in Grafana as well


We now don't need to expose Prometheus; we can deploy it anywhere on the Swarm. Grafana will discover it in the Swarm using the service DNS entry that Swarm makes available; we only need Grafana to be running at a known location in the Swarm.

I remained focused on our custom application and its monitoring when running in Swarm. If you want to monitor your Swarm itself, take a look at https://stefanprodan.com/2017/docker-swarm-instrumentation-with-prometheus; excellent work by Stefan Prodan, and he has made it all available on GitHub as well.

Posted by khurram | 0 Comments


Prometheus Series

Time Series Databases

A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. A time series database (TSDB) is a software system that is optimized for handling time series data, arrays of numbers indexed by time (a datetime or a datetime range). In some fields these time series are called profiles, curves, or traces. A time series of stock prices might be called a price curve. A time series of energy consumption might be called a load profile. A log of temperature values over time might be called a temperature trace. – Wikipedia


Prometheus is an open-source systems monitoring and alerting toolkit; most of its components are written in Go, making them easy to build and deploy as static binaries. Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes. It provides a multi-dimensional data model with time series data identified by metric name and key/value pairs. It collects time series over HTTP pulls; HTTP targets can be discovered via service discovery or configuration files. It also features a query language, and it comes with a web interface where you can explore the data using that query language, executing and plotting queries for casual exploration purposes.

Picture Credit: https://prometheus.io/docs/introduction/overview

There are client libraries that we can use to add instrumentation support and integration with the Prometheus server; https://prometheus.io/docs/instrumenting/clientlibs has the list of these libraries for different languages and platforms. There is also a push gateway for scenarios where adding an HTTP endpoint to the application/device/node is not possible. There are standalone “exporters” that can retrieve metrics from popular services like HAProxy, StatsD, Graphite etc.; these exporters have an HTTP endpoint where they make the retrieved data available, from where the Prometheus server can poll. The Prometheus server also exposes its own metrics and can monitor itself. It stores the retrieved metrics in local files in a custom format, but can optionally integrate with remote storage systems.

Node exporter is a Prometheus exporter for hardware and OS metrics exposed by *nix kernels; it's written in Go. WMI exporter is the recommended exporter for Windows machines; it uses WMI for retrieving metrics. There are many exporters available; https://prometheus.io/docs/instrumenting/exporters has the list.

Alertmanager is a separate component that exposes its API over HTTP; the Prometheus server sends alerts to it. This component supports different alerting channels like Email, Slack etc. and takes care of alerting concerns like grouping, silencing, dispatching and retrying.

Grafana is usually used on top of Prometheus; it's an open source tool that provides beautiful monitoring and metric analytics and dashboard features. Grafana has a notion of data sources from which it collects data, and Prometheus is supported out of the box.

For this post, let's say we have a .NET 4 app running on an old Windows 2003 box; maybe because it's integrated with some hardware whose drivers are not available for the latest versions of Windows, restricting us to this legacy .NET Framework version. We want to modernize our app by adding monitoring support and maybe some sort of Web API, so we can make new components elsewhere and integrate with it. In .NET 4 applications, if we want to have an HTTP endpoint we can either use the WebServiceHost class from the System.ServiceModel.Web library, intended for WCF endpoints (though we can do text/html with it), or there exists an older version of the Microsoft.AspNet.WebApi.SelfHost package on NuGet. For our scenario this NuGet package suits better, as it will also enable us to expose a Web API from our application.


In our application's Main/Startup code we need to configure and run the HttpSelfHostServer. To have /status and /metrics pages along with an /api HTTP endpoint, our configuration will look something like what's shown in the picture; then we can simply add a StatusController inheriting from ApiController and write the required action methods.


Don't forget to allow the required port in the Windows firewall, as later Prometheus will be accessing this HTTP endpoint from a remote machine (from the Docker host/VM).

For Prometheus, we need to expose our application's metrics data with the text/plain content type. The format is documented at https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md; I am just exposing three test metrics. Once we have it running, we can set up the Prometheus components; Docker containers are a great way to try out new things, and official images exist for the Prometheus components. I am going to use the following Docker Compose file, and using Docker for Windows (or Linux in some VM etc.) we can bring all these components online with just docker-compose up. For details read http://weblogs.com.pk/khurram/archive/2016/07/11/docker-compose.aspx
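For reference, the text/plain exposition of the three test metrics would look roughly like this; the HELP strings and values are made up:

```text
# HELP machine A test metric
# TYPE machine gauge
machine 1
# HELP custom_gauge A test gauge
# TYPE custom_gauge gauge
custom_gauge 42
# HELP custom_counter A test counter
# TYPE custom_counter counter
custom_counter 1027
```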


In my setup I have a Node exporter that exposes the Docker VM/host metrics, an Alertmanager, a Prometheus server and Grafana. All the configuration files for the Prometheus components are YML files, similar to docker-compose, and are included in the repository. These files, along with the .NET 4 console project, are available at https://github.com/khurram-aziz/HelloDocker under the Prometheus folder.

  • Note that Node exporter container is run with host networking and host process namespace so that it can get the metrics of the host and bind its http endpoint on the host ip address.
  • Prometheus is configured according to my Docker for Windows networking settings; if it's different for you, or you are using a Linux host to run Docker, change it accordingly. You will need to change the target addresses of the Node exporter and Custom jobs in prometheus.yml


The Alertmanager is configured through its config file such that if some service that Prometheus is polling goes down for two minutes, or the Node exporter reports CPU usage of 10% or more for two minutes, an alert is sent on Slack. You need to specify the web hook URL in the config file. You can change or add more rules as per your requirements. If you are not using Slack and want good old email alerts, documentation and tutorials are available online.
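The service-down condition, for example, would be a Prometheus alerting rule roughly like this (Prometheus 2.x rule-file format; rule and label names are placeholders), which Prometheus then fires to Alertmanager for routing to Slack:

```yaml
groups:
  - name: basic
    rules:
      - alert: ServiceDown
        expr: up == 0          # target failed scraping...
        for: 2m                # ...for two minutes
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} is down"
```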

  • An important thing to note: you can have Alertmanager set up to send alerts to some database as well, in case you want to log “incidents” for SLA purposes or similar

If we have everything set up properly and running, we can check our custom app's metrics URL and explore Prometheus through its web interface.


Grafana will also be up, with a “First Dashboard” showing our metrics in a time graph; feel free to play with the dashboard and see that Grafana has rich Prometheus support, along with query auto-completion etc.


Grafana dashboards can be imported and exported as JSON files, and there are many official and community-built dashboards available that we can easily import and start using. For our Node exporter we can easily find a nice ready-made dashboard there. We just need the dashboard ID (or its JSON file) and we can easily import it into our Grafana setup.


For importing from Grafana.com, we just need the ID of the dashboard, then update/configure the data source.


With a few clicks, we will have a beautiful dashboard showing the Node exporter metrics.


We know that containers are immutable and our customizations will be lost unless we mount a volume and keep the data there. After editing/tweaking a dashboard, or creating a new dashboard for our custom metrics, we can export its JSON file from Dashboard Settings. This JSON file can then be imported into our Docker container when it is built, and these dashboards will get provisioned automatically. The Grafana container in our setup is configured accordingly; you can check the Grafana YML and dashboards folder. Simply create a dashboard to your liking, export the JSON file, remove its “id”, and place the JSON file in the dashboards folder. This Grafana provisioning feature was introduced in v5.

In the repository I have included just one dashboard, for our custom metrics; as an exercise, try importing some dashboard from Grafana.com first, export its JSON file, place it in the dashboards folder and rebuild the container!


Happy Monitoring!

Repository: https://github.com/khurram-aziz/HelloDocker

Browserifying AngularJS Material

Frontend Development Series

AngularJS Series

In the previous AngularJS post; we used the NUGET package to get AngularJS; and in the Frontend Development Series we have explored Node/NPM, Bower, Gulp and Browserify and how they can all be used and supported in Visual Studio 2017. We can use AngularJS with the modern Javascript approach (ES6/ES2015); and can use Gulp + Browserify to bundle our front-end application. Instead of writing some sample application from scratch; let's use the very famous AngularJS Material; that is a User Interface Component framework and a reference implementation of Google’s Material Design Specification. Its Getting Started guide has a step-by-step tutorial creating a demo application that’s available on Github. There are different branches having different versions of the guide, like ES6 and Typescript

If we use its ES6 version and bring it into our existing ASP.NET Application; our project will look like this


  • I have added the required NPM packages that the tutorial details; and the packages required for our Gulp task
  • The steps of setting up Gulp and binding its task with Visual Studio Build is shown/discussed in the previous post of Frontend Development Series

We are bundling our AngularJS application and copying the required angular.js and angular-material.css files using Gulp. We will need to refactor the JS files in users/components, updating the template URLs etc. The HTML from the tutorial goes into Material.aspx with the appropriate CSS links and script references according to the Gulp task


AngularJS features Components that we can use to develop a component-based application structure. An AngularJS component is a special kind of directive that uses a simple configuration, and creating one is very straightforward. For instance, in this sample application the users-list component is created and used like an html tag; the component has a template that can be HTML composed using the other available components. Angular Material comes with many such components that render themselves according to Google Material Design, and the usersList component is using them. This is similar to how we have been using ASP.NET Server/User Controls on the server side in the past; but this is now all being done at the client side.


Now if we want more than one AngularJS application in the site; we would like angular.js not to be bundled into each front-end application. We would probably like to use Angular Material in multiple applications as well. The pattern is to create a vendor-js file; that’s basically a bundle having all the libraries that we are using in different applications, with a separate bundle for each of our front-end applications. This can be easily done using Browserify and Gulp; we will use one instance of browserify to create the vendor-js bundle using .require() and another instance(s) to bundle our front-end applications using .external() against the same array of vendor modules that we want bundled separately.
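A minimal sketch of that .require()/.external() pattern (module names, entry file and output paths here are assumptions, not the exact ones from the repository):

```javascript
var gulp = require("gulp");
var browserify = require("browserify");
var source = require("vinyl-source-stream");

// Modules to keep out of the app bundles and ship once in vendor.js
var vendorLibs = ["angular-material", "angular-animate", "angular-aria"];

gulp.task("vendor-js", function () {
  var b = browserify();
  b.require(vendorLibs);                 // bundle these, exposed under their names
  return b.bundle().pipe(source("vendor.js")).pipe(gulp.dest("Scripts"));
});

gulp.task("app-js", function () {
  var b = browserify("users/app.js");
  b.external(vendorLibs);                // assume vendor.js provides these at runtime
  return b.bundle().pipe(source("app.js")).pipe(gulp.dest("Scripts"));
});
```

The page then loads vendor.js before app.js, so the .external() references resolve.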


  • Note that I have not included angular.js in the vendor-js bundle; if we are using plain angular elsewhere we would prefer not to include it in the vendor bundle, so the same angular.js file (which the browser might already have cached) is used everywhere. I am using the same browserify-shim technique that we talked about in the previous Frontend Development Series post

In our app.js we no longer need to import the Angular Material libraries the way we did earlier; importing them made them part of the application bundle


  • angular-material is still needed; or else the ngMaterial use in angular.module() will give an error at runtime. The angular-material.js will not get included in the bundle; it is just there so the JS transpiler emits the proper ngMaterial reference in the generated JS code

The code is available in angularjs branch at https://github.com/khurram-aziz/WebApplication47



Also check react branch at same repository and notice how babelify react preset is used to compile JSX templates

Posted by khurram | 0 Comments
Filed under:

Frontend Development–Browserify

Frontend Development Series

Vue.js Series

As front-end code grows; we need a way to encapsulate our Javascript work, and we need Javascript modules that expose only the required API. With this; we also need “Module Loaders”, as Javascript had no notion of a module, and tracking module dependencies and loading their Javascript files in the right order (loading jQuery before jQuery UI etc.) can quickly become messy. Before EcmaScript 6, aka EcmaScript 2015; there was no standard way of defining modules; there were patterns and their respective loaders; Asynchronous Module Definition (AMD), CommonJS, Universal Module Definition (UMD) and System.register were popular. Node uses the CommonJS format (the require-module style). RequireJS and SystemJS are popular module loaders; RequireJS uses AMD and SystemJS supports AMD, CommonJS and UMD
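Before any of these formats, encapsulation was commonly done with the “revealing module” pattern: an immediately-invoked function whose return value is the only exposed API. A tiny illustration of what module formats later standardized:

```javascript
// Revealing-module IIFE: only the returned object is visible from outside;
// the count variable stays private inside the closure.
var counter = (function () {
  var count = 0;                                       // private state
  function increment() { count += 1; return count; }
  function current() { return count; }
  return { increment: increment, current: current };   // the exposed API
})();
```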

ES6/ES2015 now has its own native module format and can load modules accordingly, but sadly loading these modules is not supported in the browser. This is where Module bundlers come in; they work as a Module Loader but bundle the app and its modules into a single Javascript file that we can then load into the browser like any traditional Javascript file. Browserify and Webpack are popular Module bundlers.

For this post; lets make a simple Vuejs component; but first lets try to use Vuejs as an EcmaScript module; for this we will need something like this


  • Note that for Vuejs we need to include it from the dist folder, as we are not pre-compiling the “View” and the default NPM Vue package only includes the “Runtime” component and not the (View) Compiler

To compile our EcmaScript module and “bundle” Vue and our Javascript app into a single file we can load into the browser; we will need Browserify for bundling; Babelify (which uses Babel) to transpile the ES6/ES2015 code; and an associated Babel preset; so we can use the app in our dear good old Internet Explorer as well. We can add the required NPM packages as development dependencies (--save-dev while installing) and have the following gulp task that browserifies, transforms using babelify (which will use babel to transpile with the es2015 preset), bundles and saves everything as a single Javascript file in the Scripts folder from where we are loading it into the browser
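Such a gulp task looks roughly like this (entry file and output names are assumptions; the screenshot in the post shows the actual ones):

```javascript
var gulp = require("gulp");
var browserify = require("browserify");
var babelify = require("babelify");
var source = require("vinyl-source-stream");

// Bundle the ES2015 entry file into a single script the browser can load
gulp.task("bundle", function () {
  return browserify("app.js")
    .transform(babelify, { presets: ["es2015"] })  // transpile ES6 -> ES5
    .bundle()
    .pipe(source("bundle.js"))                     // name of the output file
    .pipe(gulp.dest("Scripts"));                   // folder the page loads it from
});
```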


Above, our own code was not a module; we were just using Vue.js as an EcmaScript module. There are other techniques, like transpiling and loading the Javascript modules in the browser; but for us, people coming from a .NET background; gulp suits more as it fits perfectly with building the ASP.NET project and using the Task Runner’s option of mapping Gulp tasks onto the .NET project’s Build step that we saw in the previous post.

Vue.js has an interesting Single File Component option; in which we can have CSS, our View HTML and the required Vue.js based Javascript code in a single .vue file that we can “consume” with the Webpack and Browserify bundlers (build tools). The .vue file has three sections: a <style></style> section having the CSS for the component, a <template></template> section having the HTML for the view and a <script></script> section having the Javascript code. We can use pre-processors to write the CSS and View code in their required format. Here is one such simple .vue single file component
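For illustration, a minimal hypothetical .vue file with the three sections (not the one from the repository):

```html
<style>
  .greeting { color: green; }
</style>

<template>
  <p class="greeting">{{ message }}</p>
</template>

<script>
  module.exports = {
    data: function () {
      return { message: "Hello from a single file component" };
    }
  };
</script>
```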


  • Note that we still need a Javascript file that loads Vue and our Single File Component (as an ES module) and glues them into the HTML/ASPX page using the global Vue instance like before; such a Javascript module/code is generally called a boot-loader: it establishes the environment that all subsequent code requires to run and it is the entry code that gets executed first.

For .vue to work with our Browserify / Gulp setup; we need Vueify; that is a Browserify transform for Vue.js components. NPM-install it and use it in the browserify() call in the gulp task


  • The order of transforms is important; we first need to babelify and then vueify while browserify-ing :)
  • We also need babel-plugin-transform-runtime; it is the Babel Transform Runtime plugin that Vueify uses

We can now re-write our Validation work from the previous Vue.js post as a single file component

We need to load the vee-validate as ES module and use Vue.use() in our bootloader


  • VeeValidate uses ES2015 promises; and even though we have transpiled the code to ES5; we need the ES6 Promise polyfill so our app can work in IE. A polyfill is a browser fallback, made in JavaScript, that allows functionality you expect to work in modern browsers to work in older browsers, e.g., supporting canvas (an HTML5 feature) in older browsers.

If we have multiple Javascript applications in a single “web project” needing common Javascript modules and we are bundling each front-end application; we end up having these common modules repeated across the bundles, increasing their size. For instance if we have two Vue apps; or .vue single file components; Vue.js (and other common modules like VeeValidate) will get bundled repeatedly. To solve this; let's include the Vue.js file (and other such common modules) through Bower and add it into the HTML/ASPX page at the very beginning, so we have a single Vue.js file


Next we need to instruct Browserify not to bundle the Vue module when it comes across it; this is done using Browserify-Shim. It shims a non-CommonJS module using an identifier under which the module has attached itself to the global window object. For example jQuery attaches itself as $ and Vue.js as Vue. Browserify-Shim is a Browserify transform, and instead of configuring it in the Gulpfile like we have been doing till now; I am configuring it using package.json. This is a popular technique: instead of configuring the module when it is called; we set its configuration in the NPM package.json file, from where the module picks up its configuration when invoked anywhere in the application.
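The relevant package.json fragment follows browserify-shim's documented “global” form; the idea is that vue is not bundled and is instead read from the window.Vue global that our Bower-delivered script provides:

```json
{
  "browserify": {
    "transform": ["browserify-shim"]
  },
  "browserify-shim": {
    "vue": "global:Vue"
  }
}
```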


  • Note that I am Browserify-ing our two .vue single file components individually in the Gulp task; we could keep the .vue files and their boot-loaders in a single folder and write a gulp task that processes each file in that folder instead of writing gulp task code for each component. Refer to https://stackoverflow.com/questions/40273966/gulp-bundle-browserify-on-multiple-files for such a solution
  • Note that in the Gulp task I am copying the Vue.js file from bower_components to the Scripts folder where the ASPX/HTML expects it

But before we do this; let's refactor our two browserify() calls; we are doing a babelify transform that runs Babel with the es2015 preset; we can refactor this and use Babel’s .babelrc configuration file. These dot files are another popular configuration technique for Node modules. Bower also supports a dot configuration file; .bowerrc


We learned about Javascript modules, ES6’s native module support, why we need to transpile it using Babel and how Babel uses presets to target its transpilation; why we need Browserify and how we apply its transforms, Babelify and Vueify; and how we can use Browserify-Shim to exclude common modules from multiple application bundles, including them ourselves in the ASPX/HTML to keep our application bundles small. We also saw two popular approaches to Node module configuration; package.json and dot configuration files.

The code is available in vue branch @ https://github.com/khurram-aziz/WebApplication47

Happy Javascripting!

Posted by khurram | 0 Comments
Filed under: ,

Modern Frontend Development

Thanks to Node and Javascript frameworks; the front-end development landscape has changed a lot and things are in continuous flux. For .NET developers; adapting to this might be hard to digest because we have been “addicted” to a single stack that we were used to dictating; but if you have been a Linux / Open Source enthusiast; this trend is very welcoming.

If you are not using Visual Studio 2017; switching to the command prompt is an option; things are much improved in Visual Studio 2017 and its frequent updates. Visual Studio 2017 now supports Node.js and you can create an npm configuration file (package.json), a Bower configuration file (bower.json) and a Gulp configuration file (gulpfile.js) from the Add New Item dialog. Bower is similar to NPM; the Node community uses it to get front-end artifacts like CSS and Javascript files. You can get them through NPM of course; but NPM is used to get Node modules, like Angular CLI


Visual Studio 2017’s Bower support even goes beyond; you can right click the project and choose Manage Bower Packages..similar to Manage NuGet packages and it will open up NuGet like package manager graphical user interface


From there you can browse and install the required Bower packages. Bower downloads the packages into the “bower_components” folder and from there we can copy the required files over to our project and add them; but these are manual steps and this approach is not “future proof” (what if we forget to copy the files after updating a package?)


This is where a “Task Runner” comes in; Gulp and Grunt are two popular task runners in the Node community and both are supported in Visual Studio 2017. We get gulp using NPM; add the NPM configuration file and open it up; add gulp in the “dependencies” section. Visual Studio 2017 will give you intellisense. Next add the Gulp configuration file from Add New Item


In gulpfile.js; we can add a copy-files task, and using gulp.src piped to gulp.dest; copy the files over from node_modules or bower_components. Once the task is in place; we can bind it to the project’s build; so this gulp task will run whenever you build your project in Visual Studio
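Such a copy-files task looks roughly like this (the package and destination paths are assumptions for illustration):

```javascript
var gulp = require("gulp");

// Copy front-end artifacts out of bower_components into a folder
// that is part of the project, so the pages can reference them
gulp.task("copy-files", function () {
  return gulp.src("bower_components/bootstrap/dist/css/*.css")
    .pipe(gulp.dest("Content"));
});
```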


Visual Studio ships with its own Node.js; and we can access it from the Package Manager Console as well; so we don't have to leave Visual Studio. Sadly there is no built-in terminal like Visual Studio Code has; but we can install a third-party add-in to get this functionality


Next we will “try” to use Visual Studio 2017 with ASP.NET backend and Angular

“Just Angular” means Angular 2/2+; it doesn't mean Angular 1 / AngularJS

Angular needs no introduction; from v2 onwards it has its own Command Line Interface (CLI), similar to what we saw with Ember CLI. Node.js is a prerequisite and once it is installed and configured; simply follow the Quick Start to get Angular through NPM and set up an Angular project through its CLI (ng new yourapp). Once you have the app generated; switch to Visual Studio and your ASP.NET project as routine. Before doing anything; I recommend that you set the path of your “global” Node.js in External Web Tools and prioritize it; as Angular CLI needs a more recent version of Node.js and Visual Studio’s bundled Node might not work; especially when calling ng build through gulp


Next copy the Angular-generated files over to your project root; leaving behind any git-related files and the node_modules folder. Note that it has its own package.json, which will overwrite any existing package.json you might already have in the ASP.NET project. After copying the files; re-add any required packages to package.json; for instance we need to re-add gulp.


Restore the Node.js packages that Angular CLI and Angular need by right clicking package.json (NPM Configuration) and choosing the Restore Packages option; it will take a few moments. Make sure you can use Angular CLI (ng) from the Package Manager Console by issuing ng --version; if you can, issue ng build and it will compile the Angular app (Typescript) and generate the dist folder; having index.html and the Javascript files for our Angular app. We can now copy the HTML and script references from dist\index.html to the ASPX page where we want the Angular app


We can set our ASPX page as startup and write a Gulp task to run the ng build and copy over the javascript files to the project’s Scripts folder from where ASPX page is referencing them


We end up with an ASP.NET project that uses the NuGet package manager for the tools and libraries needed to build and run the backend, the NPM package manager to build the Angular-based front-end application and run the necessary tasks, and the Bower package manager to keep track of our client side libraries and artifacts.

  • We can now continue to use the Angular CLI; either from the Package Manager Console or from some terminal (third-party add-in) and can build our front-end and back-end sides in the same project; however if we want to use Visual Studio as an editor for the Angular CLI generated artifacts; we will have to manually add these artifacts to the project

The code is available in "angular" branch at https://github.com/khurram-aziz/WebApplication47

Posted by khurram | 0 Comments

Data Validation-Vue.js

Data Validation Series

In the previous post we learned how we can do validation on the client side, on the client model or view model, and also on the server using System.ComponentModel.DataAnnotations based attributes, and bubble any errors back to the client side through the web api. We used Knockout.Validation for that work and noted that though it is a duplicated effort, having client side validation brings more interactivity to forms. Vue.js is another Javascript framework that has gained quite a lot of popularity. To make this post interesting; I decided to use NPM to get Vue.js and its validation plugin, and chose a plugin that presents an alternative approach to client side validation from what we saw in the Knockout post.

I have selected the VeeValidate plugin for Vue.js; which, similar to jQuery.Validate, does input field validation and doesn't do Knockout.Validation-style model validation, and is very true to the Vue.js approach of focusing more on the template / html than the Knockout approach of focusing more on the Javascript code. It is not available through NUGET; instead we need to get it from the Node Package Manager (NPM). So we need to have Node + NPM installed and configured.

Even though Vue.js is available on NUGET; and we could download VeeValidate from their website or CDN and place it manually in our project’s Scripts folder just like we have done in the past; similar to NUGET, we should embrace Node/NPM. It has a first class experience in Visual Studio Code and it also plays nicely with Visual Studio 2017. If you are using an older Visual Studio; we can still use it; I recommend installing “Power Commands”, which gives you an option to open the command prompt.


Once you are at the command prompt; you can simply start giving NPM commands, like npm init to generate the package.json file, followed by npm install vue --save to install Vue.js and npm install vee-validate --save for VeeValidate

You will end up with the required Javascript libraries in the dist folders of their respective libraries under node_modules. Showing all files in the project; you can find them and copy them to the “Scripts” folder manually (for the time being). Optionally you can also include in the project the package.json file that NPM uses.

Given we have copied the files and included them in the project; we don't “yet” need npm init as a prerequisite of compiling/building the project. It was just a lengthier but official way to get the files, and we can stay updated this way in the future.

Once we have the required files in place; we can code the web page using Vue.js and VeeValidate comparable to what we did in the Knockout. It will be something like this

  • Notice how VeeValidate uses the v-validate attribute on the input fields to define the rules; these are VeeValidate specific and you can learn about them on the VeeValidate website
  • We can have two rules in v-validate; the rules are separated by |
  • VeeValidate uses EcmaScript 6 Promises, which will not work in IE; so we have to polyfill them
  • For the captcha; we need to write a custom VeeValidate rule according to its spec; my custom validation rule code is a little different from what their documentation says; because I wanted my page / script to work in IE11 and had to work around the Promises so IE11 doesn't complain

As you have seen; the VeeValidate approach is different from Knockout.Validation and it might suit better when the web designer writing the html can understand and live with the additional attributes. The drawback is that the model is left unvalidated: if it is hydrated, passed around or submitted to a web api without UI binding, the fields can go unchecked and only server side validation will work. Further; we cannot change validation rules at runtime so easily; for instance in the Knockout example the location was made required at runtime using a button, because all the validations are added through Javascript and we can add/remove rules easily. So which approach do you prefer?

The code is available at https://github.com/khurram-aziz/WebApplication46

Posted by khurram | 0 Comments
Filed under: ,

Angular JS

Angular JS is the famous Javascript library for Single Page Applications from Google. The v1.x versions are now called Angular JS and v2 onwards are called Angular or Angular 2/2+. It is also the front-end part of the “MEAN” stack; where MongoDB is used as the data store, Node as the server plus toolchain, Express JS as the web framework and Angular as the front-end framework. In this post however we will stick with our good old ASP.NET for the backend! Angular 2 has breaking changes and code written for Angular JS will not be compatible with Angular 2+; so use Angular JS in new projects with caution. For small projects and enterprise scenarios; I personally still prefer Angular JS; because it is much simpler to get started with, there is no heavy stack requirement or need to change any build pipeline and one can start using it in existing projects / code bases just like other lightweight Javascript frameworks/libraries like Knockout. Angular 2+ is a totally different beast on the other hand

In the ASP.NET project; we can get Angular JS using NUGET; the framework comes as a couple of libraries, available on NUGET as the respective AngularJS.* packages. For the first experiment; we just need the AngularJS.Core and AngularJS.Route packages. For an Angular JS application; we add Angular JS specific attributes in html tags, usually starting with ng; for a simple Angular JS application we need to add the ng-app attribute giving it some name; this name becomes the application name that we can refer to when declaring the variable in Javascript representing our application using angular.module(). AngularJS.Route helps us make our application a single page application (SPA); it can load and navigate around without page reloads. In Angular JS terms; the module is called ngRoute and we mention it as a requirement when declaring the variable of our application. Afterwards; we can call ourVariable.config and it will give us the $routeProvider variable. Angular does this with injection; so we have to name the variable accordingly to get our hands on the router. Once we have its reference we can call its when() api and tell the router which HTML to load for which URL. The URLs are referred to in the HTML with #!something and in the when() method they get referenced as /something. Here is one simple AngularJS.Route based Javascript application; instead of loading external html pages, I am using inline templates, giving them ids matching the htm page urls, so instead of Angular hitting the server looking for the external resources it displays the content from the templates; giving ASP.NET MultiView-like functionality but all at the client side. These templates need to be inside the ng-app html tag to work
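The shape of such an application is roughly this (app name, routes and template ids are illustrative assumptions):

```javascript
// Name matches the ng-app="demoApp" attribute in the HTML;
// ngRoute is declared as a requirement of the module
var app = angular.module("demoApp", ["ngRoute"]);

app.config(["$routeProvider", function ($routeProvider) {
  $routeProvider
    .when("/home", { templateUrl: "home.htm" })    // inline template with id="home.htm"
    .when("/about", { templateUrl: "about.htm" })
    .otherwise({ redirectTo: "/home" });           // default route
}]);
```

In the page, links then point at #!home and #!about, and Angular swaps the templates in without a page reload.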

If you are working in Visual Studio 2013; you can improve the Javascript intellisense experience by installing the AngularJS.Intellisense NUGET package; but sadly it is no longer being updated and only supports Angular JS 1.3. What you can do is install the package first and extract the angular.intellisense.js file out of the package, then uninstall it; install the AngularJS NUGET package as routine and manually place the angular.intellisense.js file alongside your angular.js file, and Visual Studio 2013 will pick it up.


For the next example; we will be integrating our Angular JS app with the ASP.NET Web Api backend. An Angular JS app can have different sub-sections with their own controllers; in this next example we will define a div with ng-controller set and then later create an associated Angular controller using module.controller(). An ng-init attribute can be defined along with ng-controller, naming the method that will get called on initialization; similar to Page_Init. We can use this option to load the initial required data from the server through Web Apis. We will also need the $scope and $http services; for which we don't need to include any additional library file (unlike the router earlier); they get injected automatically for us and all we need to do is declare properly named variables in the respective controller() call. We can use $scope to declare the variables and methods that we need in the application and then “consume” them accordingly in the html. The $http service lets us communicate with http web apis. Before Angular JS 1.4; $http.get had success and error methods; now we have then(), which takes two delegates; the first for success and the second for error. The response variables are passed to these delegates, from which we can access the data, status and headers objects. We can also use any other existing client code for communicating with the web api; for instance jQuery.ajax(). Let's first add a few methods to our Web Api Controller
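A sketch of such a controller using $scope and $http with the two-delegate then() (controller name and api URL are assumptions, not the exact ones from the repository):

```javascript
// Bound in the HTML via ng-controller="statesController" ng-init="init()"
app.controller("statesController", ["$scope", "$http", function ($scope, $http) {
  $scope.states = [];

  $scope.init = function () {
    $http.get("/api/values/states").then(
      function (response) { $scope.states = response.data; },          // success
      function (response) { alert("Failed: " + response.status); }     // error
    );
  };
}]);
```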


Here’s the code of our simple master-detail application

  • Note that loadStates() returns a string array and how it is bound to the drop down, while loadCities() returns an array of objects from which we are binding the city (the string part) for the drop down options. Note how data-ng-options and data-ng-model are used for binding the lists and getting back the selected value
  • Note how the selected state is passed to loadCities() using the params
  • Note that button clicks would cause post backs; that's why we are passing the special variable $event from the view back to our functions, where we simply call preventDefault() so post backs don't happen
  • Note how ng-show is used to bind the visibility of the “detail” portion of the form

The code is available at https://github.com/khurram-aziz/WebApplication46

Posted by khurram | 0 Comments
Filed under: ,

Data Validation–Knockout

Data Validation Series

In the previous posts we learned how we can do client side validation using jQuery.Validation against a server side .NET model having System.ComponentModel.DataAnnotations attributes. There are two issues: first, a lot of spaghetti html code is generated “automagically”, which web developers don't like; they want more control over the html, or better, they want to hand craft the html and Javascript so their tools give them complete visualization and intellisense and they can further tweak it to their liking. Second, the validation is at the UI level; the html input controls are being validated, and if you want a client side “model” (in Javascript) that you can use in some client side framework (MVC/MVVM etc) and validate before submitting to Web Apis; you need “something else”; “something more” and “something modern” (Javascripty)

For this post; I am going to use Knockout and will try to implement client validation rules on the Knockout view model object; here’s our model that we will be working with.

First we need a Web Api Controller; and as an example we are just considering “Create” case from CRUD and I am not going to write any data layer code for sake of simplicity.


  • I have used CamelCasePropertyNamesContractResolver for Newtonsoft Json serialization; so we can have our “view model” in client side Javascript code in camel case (Javascript naming) while the server side class continues to be Pascal Case (C# naming)
  • I am returning BadRequest along with the ModelState object when the model state is invalid; we can handle this “exception” / error at the client side to display the model validation errors from the server

We will be using the Knockout.Validation NUGET package; a validation plugin for Knockout; installing its NUGET package will also install the knockoutjs package, which you should afterwards update to the latest version. For the visualization, we are going to use Knockout template binding so along with each input control the errors can be displayed associated with that particular control. Further, a variety of validation rules are defined; similar to our server side validation rules but varied enough to cover good ground for a feel of what the Knockout.Validation plugin has to offer. To display the server side model state errors; I have added an array and am populating it accordingly
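The general shape of such a view model is sketched below; the property names, rules and the server-errors array are illustrative assumptions, not the exact code from the repository:

```javascript
function UserViewModel() {
  var self = this;

  // Knockout.Validation rules are attached to observables via extend()
  self.email = ko.observable().extend({ required: true, email: true });
  self.age = ko.observable().extend({ number: true, min: 18 });

  // Populated from a BadRequest/ModelState response of the Web Api
  self.serverErrors = ko.observableArray([]);

  self.save = function () {
    var errors = ko.validation.group(self);
    if (errors().length > 0) {
      errors.showAllMessages();   // show client side validation messages
      return;
    }
    // POST to the Web Api here; on a 400 response, push the ModelState
    // messages into self.serverErrors for display
  };
}
```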


Once we have the model errors in the array; we can display them through Knockout. For the rest implementation details I am attaching the complete html and javascript code below

  • The captcha and password verification rules show how you can use Javascript based validation logic
  • First name, if not entered, will only be checked at the server side, so we can confirm that displaying server side model state errors is working
  • Note the age is marked required using HTML5 and not in Knockout.Validation; not all browsers will support it but it will get caught at the server

The good thing about Knockout.Validation is that all the “validation logic” is “defined” by hand in Javascript; so if the web development team has web designers and Javascript developers; they can work seamlessly together with a good separation of concerns. However; this is duplicated effort; we now have to maintain the validation logic not only at the backend but also at the frontend; but then this is something that we have to do anyway. If we don't want to duplicate this effort; given that the Knockout.Validation plugin is not polluting the HTML; we can either have an HtmlHelper style helper in MVC, or Web Form controls, that can generate the required Javascript code extending the Knockout view model with validation rules; a topic for a separate post perhaps. Another approach can be template based code generation that generates the HTML + Knockout Javascript using the server side model, which web designers and developers can further tweak or enhance to their liking.

The code is available at https://github.com/khurram-aziz/WebApplication46

Posted by khurram | 0 Comments
Filed under: ,

Data Validation–Web Forms

Data Validation Series

In the previous posts we learned about how we can do client side validation using System.ComponentModel.DataAnnotations based attributes on the server side .NET classes. These techniques emerged in ASP.NET MVC stack; but these attributes were made part of .NET framework and with .NET 4.5 we can use these in the web forms as well. Lets take this UserModel server side class with data annotations as an example


Since .NET 4.5; web forms data controls also support MVC style model binding and we no longer require “data source” controls; instead we can specify CRUD methods directly. For testing; let's take a FormView and use its “Insert View” with Select and Insert methods defined in its code behind. We can still use the designer; creating a “temporary” object data source with the code behind class’ CRUD methods and binding the formview to it to generate the templates, then simply removing the object data source and using the new .NET 4.5 functionality to specify the model and CRUD methods on the formview directly. All we need to add is the ValidationSummary control at the top and our data annotation based validation will work; we can use this.ModelState.IsValid after calling this.TryUpdateModel in the code behind to check whether the model conforms to the validation rules. We will end up with something like this


This is only server side validation; the ValidationSummary control will list the model validation issues when the form gets submitted to the server. What if we want client side validation? We can use the jQuery based unobtrusive validation techniques just like MVC. First we need to add the Microsoft.jQuery.Unobtrusive.Validation NUGET package, which has a dependency on jQuery.Validation; this client side JavaScript library from Microsoft lets us define additional attributes on the HTML input controls and uses that metadata to validate the input controls using jQuery.Validation and jQuery. However, our FormView template also needs an update: we need to add the required metadata using data-val* attributes for this client side validation to work. We can either learn about these additional attributes and manually add them according to our server side model rules, or we can simply use the "Dynamic Data" control that does this automatically; all we need is to add the DynamicDataTemplatesCS NUGET package and use the DynamicEntity control in the FormView template. Also add the required JavaScript libraries


  • Update the jQuery.Validation NUGET package after Microsoft.jQuery.Unobtrusive.Validation to fix the JavaScript errors
  • We can also use the ASP.NET bundling feature, create a bundle of these JavaScript libraries and add that instead; this MVC feature is also available in Web Forms from 4.5 onwards

If we check the generated code, we can learn about the data-val* attributes that the Dynamic Data control has automatically generated for us from the server side model attributes. It also attaches a warning span to each input control and makes it visible when the associated input control fails a validation rule. That is quite a bit of monkey HTML code that we didn't have to write!



AVL Tree

In the last project I needed an AVL Tree inspired data structure. The base class library doesn't expose anything that could be extended, so I set up a Test Project to establish some building blocks. The AVL Tree is a self balancing binary search tree, the first of its kind. The basic idea is that after adding a node into the Binary Search Tree, each parent node is checked recursively up to the root to see whether its left and right sub-trees have similar heights; if not, the required rotation is done to balance them. The height of a node is the number of generations of children beneath it. Here is the first test case for an AVL Tree that we need to satisfy, and using Visual Studio's auto class/method generation we end up with such an AVL Tree class.


Given that an AVL Tree is a specialized Binary Search Tree and a Binary Search Tree is a specialized Binary Tree, let's set up this class hierarchy and make these classes Generics aware
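The hierarchy can be sketched like this; the member names are my assumptions of the minimal shape, and the bodies are stubbed (the GitHub project has the real implementations):

```csharp
using System;

public class BinaryTreeNode<T>
{
    public T Data { get; set; }
    public BinaryTreeNode<T> Left { get; set; }
    public BinaryTreeNode<T> Right { get; set; }
}

public abstract class BinaryTree<T>
{
    public BinaryTreeNode<T> Root { get; protected set; }
    public abstract bool Contains(T data);
    public abstract void Remove(T data);
}

// T must be comparable so the BST can decide left vs right
public class BinarySearchTree<T> : BinaryTree<T> where T : IComparable<T>
{
    public virtual BinaryTreeNode<T> Add(T data) { throw new NotImplementedException(); }
    public override bool Contains(T data) { throw new NotImplementedException(); }
    public override void Remove(T data) { throw new NotImplementedException(); }
}

public class AvlTree<T> : BinarySearchTree<T> where T : IComparable<T>
{
    // base insert + rebalancing, added later in the post
    public override BinaryTreeNode<T> Add(T data) { return base.Add(data); }
}
```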


Now that we have the signatures of the required classes, let's start adding some code to them. First of all, for BinaryTreeNode<T>, let's add a Parent property that will be helpful for us next. Let's also add a Height property that tells the number of generations that exist as children and grandchildren of that specific node. It can be easily coded using recursion.
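A minimal node sketch with the recursive Height property; a leaf has height 0, and each level of children adds one (the property names match the description above, the rest is assumed):

```csharp
using System;

public class BinaryTreeNode<T>
{
    public T Data { get; set; }
    public BinaryTreeNode<T> Parent { get; set; }
    public BinaryTreeNode<T> Left { get; set; }
    public BinaryTreeNode<T> Right { get; set; }

    // Height = generations below this node; recursion bottoms out at the leaves
    public int Height
    {
        get
        {
            int left = this.Left != null ? this.Left.Height + 1 : 0;
            int right = this.Right != null ? this.Right.Height + 1 : 0;
            return Math.Max(left, right);
        }
    }
}
```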


For the BinaryTree<T> abstract class, we can add BreadthFirst and DepthFirst methods for traversing the binary tree. I am using C#'s Action<T> delegate, so we can reuse them in the same class for Clear() and in inherited classes.
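The two traversals can be sketched as below; the post implements them as instance methods on BinaryTree<T>, but a tiny stand-in node type keeps this sample self contained:

```csharp
using System;
using System.Collections.Generic;

public class Node<T>
{
    public T Data;
    public Node<T> Left, Right;
}

public static class Traversals
{
    // Breadth-first: visit level by level using a queue
    public static void BreadthFirst<T>(Node<T> root, Action<T> visit)
    {
        if (root == null) return;
        var queue = new Queue<Node<T>>();
        queue.Enqueue(root);
        while (queue.Count > 0)
        {
            var node = queue.Dequeue();
            visit(node.Data);
            if (node.Left != null) queue.Enqueue(node.Left);
            if (node.Right != null) queue.Enqueue(node.Right);
        }
    }

    // Depth-first (in-order) using recursion
    public static void DepthFirst<T>(Node<T> root, Action<T> visit)
    {
        if (root == null) return;
        DepthFirst(root.Left, visit);
        visit(node: root.Data, action: visit);
    }

    private static void visit<T>(T node, Action<T> action) { action(node); }
}
```

Passing the visitor as Action<T> is what lets Clear() and the AvlTree's BreadthFirst() reuse the same walk with different delegates.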


For BinarySearchTree<T>, we need to override Contains and Remove from our abstract BinaryTree<T> class. Also note that we have constrained "T" to implement IComparable so that we can add items to our Binary Search Tree (BST) according to the BST algorithm. The Add(T) method is kept virtual so we can override it later in the subclass, AvlTree. This method returns the newly added node. The rest is implementation detail that one can learn from Data Structure books or Wikipedia; I am also sharing the whole project through GitHub so you can look it up there as well
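The virtual Add itself can be sketched like this; Bst<T> and its Node<T> are simplified stand-ins for the real classes so the sample stays self contained:

```csharp
using System;

public class Node<T> where T : IComparable<T>
{
    public T Data;
    public Node<T> Parent, Left, Right;
}

public class Bst<T> where T : IComparable<T>
{
    public Node<T> Root;

    // Standard BST insert; virtual so AvlTree can override it and rebalance afterwards.
    // Returns the newly added node, which the AVL override will balance from.
    public virtual Node<T> Add(T data)
    {
        var node = new Node<T> { Data = data };
        if (Root == null) { Root = node; return node; }
        var current = Root;
        while (true)
        {
            if (data.CompareTo(current.Data) < 0)
            {
                if (current.Left == null) { current.Left = node; node.Parent = current; return node; }
                current = current.Left;
            }
            else
            {
                if (current.Right == null) { current.Right = node; node.Parent = current; return node; }
                current = current.Right;
            }
        }
    }
}
```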

For the AvlTree, BreadthFirst() now just calls BreadthFirst from the BinaryTree<T> base class with a delegate that populates a List and later returns it as an array. It's not a production ready approach as it duplicates the data; we can / should implement the Iterator Pattern instead. For Add, we simply call the base class Add that inserts the data into the tree and returns the node. The parents of the newly added node now need to be checked and balanced up to the root node for the AVL Tree to work as expected; that we will do next.


For the AvlTree, we need a balanceFactor(Node) that tells us whether the given node is "left heavy" or "right heavy"; it can do this easily using the "Height" property that we have already established. If balanceFactor() returns >1 then the node is left heavy, and if it returns <-1 then it is right heavy and we need to "balance" it using rotateLeft, rotateRight, rotateLeftRight or rotateRightLeft. If the balanceFactor is -1, 0 or 1, then the difference in height between left and right is at most one generation and we can ignore it. These are the AVL tree technical specifications that one can read about on Wikipedia etc.
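balanceFactor follows directly from Height, with the convention described above: positive means left heavy, negative means right heavy (the condensed Node type here is a stand-in to keep the sample self contained):

```csharp
using System;

public class Node
{
    public Node Left, Right;
    public int Height =>
        Math.Max(Left != null ? Left.Height + 1 : 0,
                 Right != null ? Right.Height + 1 : 0);
}

public static class Avl
{
    // leftHeight - rightHeight: >1 means left heavy, <-1 means right heavy,
    // and -1/0/1 is within AVL tolerance so no rotation is needed
    public static int BalanceFactor(Node node)
    {
        int left = node.Left != null ? node.Left.Height + 1 : 0;
        int right = node.Right != null ? node.Right.Height + 1 : 0;
        return left - right;
    }
}
```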


  • Single rotations (rotateLeft and rotateRight) are done on the given node.
  • Double rotations are done on a child node and then the given node; for rotateLeftRight, we rotateLeft the left child first and then rotateRight the given node; and for rotateRightLeft, we rotateRight the right child and then rotateLeft the given node

For the single rotation; we select the pivot node and then swap the parents and children according to the rotation type. Double rotations can be implemented calling single rotation methods accordingly.
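As a sketch, the single rotations pick the pivot and re-link parents and children, and the double rotations compose them; as in the project, the caller is responsible for re-pointing the old parent's child link at the returned pivot (Node here is a condensed stand-in type):

```csharp
public class Node
{
    public int Data;
    public Node Parent, Left, Right;
}

public static class Rotations
{
    // Right rotation around `node`: its left child becomes the pivot (new subtree root)
    public static Node RotateRight(Node node)
    {
        var pivot = node.Left;
        node.Left = pivot.Right; // pivot's right subtree moves under node
        if (pivot.Right != null) pivot.Right.Parent = node;
        pivot.Right = node;      // node becomes pivot's right child
        pivot.Parent = node.Parent;
        node.Parent = pivot;
        return pivot;
    }

    // Mirror image: left rotation around `node`
    public static Node RotateLeft(Node node)
    {
        var pivot = node.Right;
        node.Right = pivot.Left;
        if (pivot.Left != null) pivot.Left.Parent = node;
        pivot.Left = node;
        pivot.Parent = node.Parent;
        node.Parent = pivot;
        return pivot;
    }

    // Double rotations: rotate the child first, then the given node
    public static Node RotateLeftRight(Node node)
    {
        node.Left = RotateLeft(node.Left);
        return RotateRight(node);
    }

    public static Node RotateRightLeft(Node node)
    {
        node.Right = RotateRight(node.Right);
        return RotateLeft(node);
    }
}
```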


  • The parent of the node being rotated still has a child link to it that becomes stale after the rotation. We are not fixing this link in the rotation methods; instead it is left as the responsibility of the caller, who "knows" whether to change the left or right child of the parent and can easily refer to the parent node before calling these rotation methods

Now for the balance() we need to call balanceFactor() to check if the node is left or right heavy.

  • If it's left heavy and the balanceFactor of its left node is also left heavy, then we simply do rotateRight; else we need to do rotateLeftRight
  • If it's right heavy and the balanceFactor of its right node is left heavy, then we do rotateRightLeft; else we simply do rotateLeft

We also need to set the appropriate child of the parent after the rotation and keep balancing parent nodes recursively till we reach the root node. The root node also needs balancing if required
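Putting it together, here is a self contained sketch of that balancing walk; the condensed Node type and private helpers stand in for the project's real classes, and the loop fixes the parent's child link after each rotation as described:

```csharp
using System;

public class Node
{
    public int Data;
    public Node Parent, Left, Right;
    public int Height => Math.Max(Left != null ? Left.Height + 1 : 0,
                                  Right != null ? Right.Height + 1 : 0);
}

public static class AvlBalance
{
    static int Bf(Node n) => (n.Left != null ? n.Left.Height + 1 : 0)
                           - (n.Right != null ? n.Right.Height + 1 : 0);

    static Node RotateRight(Node n)
    {
        var p = n.Left;
        n.Left = p.Right; if (p.Right != null) p.Right.Parent = n;
        p.Right = n; p.Parent = n.Parent; n.Parent = p;
        return p;
    }

    static Node RotateLeft(Node n)
    {
        var p = n.Right;
        n.Right = p.Left; if (p.Left != null) p.Left.Parent = n;
        p.Left = n; p.Parent = n.Parent; n.Parent = p;
        return p;
    }

    // Rebalance `node`, fix the parent's child link, then continue up to the root.
    // Returns the (possibly new) root of the whole tree.
    public static Node Balance(Node node, Node root)
    {
        while (node != null)
        {
            var parent = node.Parent;
            var replacement = node;
            int bf = Bf(node);
            if (bf > 1) // left heavy
            {
                if (Bf(node.Left) < 0) node.Left = RotateLeft(node.Left); // left-right case
                replacement = RotateRight(node);
            }
            else if (bf < -1) // right heavy
            {
                if (Bf(node.Right) > 0) node.Right = RotateRight(node.Right); // right-left case
                replacement = RotateLeft(node);
            }
            if (parent == null) root = replacement; // we rotated the root itself
            else if (parent.Left == node) parent.Left = replacement;
            else parent.Right = replacement;
            node = parent;
        }
        return root;
    }
}
```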


As you can see, using modern languages and their framework libraries, we can implement such classic data structures and build upon them to create more exciting data structures. The project is available at https://github.com/khurram-aziz/HelloDataStructures


MVC5: Minimal Provider for ASP.NET Identity

ASP.NET once came with Membership and Role Providers that we used and abused in the past; then came the Simple Membership Provider with Razor and WebMatrix that exposed a simpler API but still used the same providers behind the scenes. Finally, with Visual Studio 2013 and ASP.NET 4.5.1, ASP.NET Identity was introduced, offering a modern replacement for these old providers. ASP.NET Identity has a modern API and exposes pluggability at different levels. It uses Entity Framework, and if you want to store your users in the database of your choice, you just need to replace the SQL Entity Framework provider that gets configured out of the box with the provider for your database. You can also go deeper, implement the required interfaces, and use your own custom classes that don't use Entity Framework; in this post we will do exactly that and explore what minimal classes we need to get things going. We have a pet MVC project that always becomes the test bed for anything MVC related. It has already gone through MVC3 and MVC4 upgrades, and it was time to replace its Membership and Role providers with ASP.NET Identity and upgrade it to MVC5

For the MVC5 upgrade, I would recommend that you create a new separate project, as the new template comes with Bootstrap goodness, and then simply copy over your models, views and controllers. We availed this opportunity to replace the old ASPX views with newer Razor based CSHTML views, cleaning up some old mess.

If you have created a new MVC5 project, then you will need to delete ApplicationUser and ApplicationDbContext from the Models namespace (if you don't intend to use the Code First Entity Framework that gets configured by default out of the box). You also need to delete the ApplicationUserManager class from App_Start\IdentityConfig.cs. For our implementation, we need to implement IUser<T> and IUserStore<T> and set up UserManager and SignInManager classes. This is very well documented in Overview of Custom Storage Providers for ASP.NET Identity. The minimal required code will look something like this
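As a sketch of the minimal shape, assuming hypothetical AppUser / AppUserStore / AppUserManager names (your real store would persist to your own backing database instead of the stub bodies here):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;

public class AppUser : IUser<string>
{
    public string Id { get; set; }
    public string UserName { get; set; }
}

public class AppUserStore : IUserStore<AppUser>
{
    // Persist to the backing store of your choice; no Entity Framework required
    public Task CreateAsync(AppUser user) { return Task.FromResult(0); }
    public Task UpdateAsync(AppUser user) { return Task.FromResult(0); }
    public Task DeleteAsync(AppUser user) { return Task.FromResult(0); }
    public Task<AppUser> FindByIdAsync(string userId) { return Task.FromResult<AppUser>(null); }
    public Task<AppUser> FindByNameAsync(string userName) { return Task.FromResult<AppUser>(null); }
    public void Dispose() { }
}

public class AppUserManager : UserManager<AppUser>
{
    public AppUserManager(IUserStore<AppUser> store) : base(store) { }
    // a static Create() here gets wired via app.CreatePerOwinContext in Startup.Auth.cs
}
```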

Once these files are in place, we will need to use their Create() methods in App_Start\Startup.Auth.cs for the app.CreatePerOwinContext calls instead of the original User and SignIn Managers. We will also need to rename the Manager classes in Controllers\AccountController and Controllers\ManageController

Email Confirmation

If you want to have the Email Confirmation option, you should read Account Confirmation and Password Recovery with ASP.NET Identity (C#) where it's documented. For this to work, your store needs to implement IUserEmailStore<T> and IUserTokenProvider<T, U>. You will also need a helper class implementing IIdentityMessageService for Email and SMS, and add this code into the UserManager's static Create() method

manager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<YourAppUser> { MessageFormat = "Your security code is {0}" });
manager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<YourAppUser> { Subject = "Security Code", BodyFormat = "Your security code is {0}" });
manager.EmailService = new YourEmailService();
manager.SmsService = new YourSmsService();
var dataProtectionProvider = options.DataProtectionProvider;
if (dataProtectionProvider != null)
  manager.UserTokenProvider = userStore; //userStore instance thats implementing the IUserTokenProvider

You will also need to change the code in AccountController accordingly for the Register and Login scenarios, add the required new views, and change the existing Register and Profile views for Email / SMS

  • ASP.NET Identity was open sourced and the code is available at https://aspnetidentity.codeplex.com; being open source, you can always look at the code and fix your provider accordingly
  • Having your own UserManager and UserStore, you have a choice: either override the UserManager methods that are virtual and hook them directly to your additional code in the UserStore, or implement the required interfaces. For instance, when creating a user, the base UserManager class demands that the Store also implement IUserPasswordStore<T>; if you want to avoid this, you can simply override the Create() methods of the base UserManager and call your UserStore methods directly. Similarly, you can avoid the Token Providers and override the Email/SMS methods. The point is, you have the choice to either override the required UserManager methods or implement the required interfaces; having the code from CodePlex helps to peek into what's happening in the base classes
  • Don't confuse it with ASP.NET Identity Core for ASP.NET Core that's available at https://github.com/aspnet/identity

Dotnet Core :: PostgreSQL

postgresql Dotnet Core Series

Given that Dotnet Core can run on Linux, in this series we have been exploring different aspects of having a Microservice based, containerized Dotnet Core application running on Linux in Docker Containers. SQL Server has been the de facto database for .NET applications; there even exists SQL Server for Linux (in Public Preview form at the time of this post) and there is even an official SQL Server for Linux image for Docker Engine that we can use and connect our existing beloved SQL tools to; but it needs 3.25GB of memory and it's an evaluation version.


PostgreSQL is an ACID compliant, transactional, object-relational database available for free on Windows, Linux and Mac. It's not as popular as MySQL, but it provides several indexing functions, asynchronous commit, an optimizer, and synchronous and asynchronous replication, which make it a technically more solid choice. Given that it's available for Windows, we can install it on our Windows development machines along with pgAdmin, the SQL Management Studio like client tool.

Entity Framework Core

Entity Framework Core is a cross platform data access technology for Dotnet Core. It's not EF7 or EF6.x compatible; it's developed from scratch and supports many database engines through Database Providers. Npgsql is an excellent .NET data provider for PostgreSQL (see their GitHub repositories) and it supports EF Core. All you need to do is install the Npgsql.EntityFrameworkCore.PostgreSQL NUGET package using the dotnet CLI (dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL); it will bring along the EF Core and Npgsql libraries into your project.

We can now write our entity classes and a DbContext class. For Npgsql, in the OnConfiguring override, we will call UseNpgsql instead of UseSqlServer on the DbContextOptionsBuilder, passing the required PostgreSQL connection string. Here's one entity class and context file I made for testing!
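A sketch of what that can look like; Product, ShopContext and the connection string are illustrative assumptions, not the actual test project's code:

```csharp
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // UseNpgsql instead of UseSqlServer; connection string values are placeholders
        optionsBuilder.UseNpgsql("Host=localhost;Database=shop;Username=postgres;Password=secret");
    }
}
```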

We can use the Context class with all the LINQ goodness; similar to SQL Server; for instance here’s the controller class

And here’s displaying the products in the View

  • If the entities are in a different namespace, either import the namespace in the Views\_ViewImports.cshtml file, or add the namespace in the particular view by adding a using directive at the top: @using ECommMvc.Models

Entity Framework Core Command Line Tools

The EF Core .NET Command Line Tools extend the dotnet CLI, adding ef commands to it. We need to add the Microsoft.EntityFrameworkCore.Tools.Dotnet and Microsoft.EntityFrameworkCore.Design NUGET packages (dotnet add package), restore them (dotnet restore), and then reference Design as a PackageReference and Tools.Dotnet as a DotNetCliToolReference; you should then end up having the dotnet ef commands in the project


Using the ef commands, we can add a migration (dotnet ef migrations add Migration-Name), remove it, update the database (dotnet ef database update) and more. Once we have the Migrations in place, we can continue to evolve our Entities and Database accordingly.


Using the Migrations, we can seed our database as well; we can create a migration named Seed and add the required seeding code in the migration's CS file

When deploying into Docker Containers, we often need a "side kick" container that "seeds" the cache or database (for details see Dockerizing PHP + MySQL Application Part 2), as when a container is started we get a clean slate. Given that the Migrations code becomes part of the MVC project, and in .NET Core there is a Program.cs entry point where Kestrel / MVC is initialized, we can add more code there as well. We can use the Context and Migrations that are in place to update the database (which will initialize and seed it)
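A sketch of that Program.cs approach; ShopContext, the connection string and the Startup class are stand-ins for your app's own classes:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.EntityFrameworkCore;

// Stand-in for the app's real DbContext with its Migrations
public class ShopContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder builder)
        => builder.UseNpgsql("Host=db;Database=shop;Username=postgres;Password=secret");
}

public class Program
{
    public static void Main(string[] args)
    {
        // Apply any pending migrations (including the Seed migration) before the
        // web host starts; a fresh container thus gets its schema and data
        // without a separate side kick container
        using (var db = new ShopContext())
        {
            db.Database.Migrate();
        }

        // Kestrel / MVC startup as generated by the template; Startup is your MVC Startup class
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseStartup<Startup>()
            .Build();
        host.Run();
    }
}
```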


Now with the database work in place and the Docker building techniques shown in previous posts (Redis Clients -- ASP.NET Core and Jenkins), we can have a v2 Compose file or v3 Compose file (for Docker Swarm) and deploy our .NET Core MVC application, using Redis for caching and PostgreSQL as the database, into Docker Containers running on Linux node(s)

Project is available at https://github.com/khurram-aziz/HelloDotnetCore