
Modern Frontend Development

WebApps - Yesterday - Today

Thanks to Node and JavaScript frameworks, the frontend development landscape has changed a lot and things are in continuous flux. For .NET developers, adapting to this can be hard to digest because we have been "addicted" to the single toolchain that one vendor dictated; but if you have been a Linux / open source enthusiast, this trend is very welcome.

If you are not using Visual Studio 2017, switching to the Command Prompt is the option; things are much improved in Visual Studio 2017 and its frequent updates. Visual Studio 2017 now supports Node.js, and you can create an npm configuration file (package.json), a Bower configuration file (bower.json) and a Gulp configuration file (gulpfile.js) from the Add New Item dialog. Bower is similar to NPM, and the Node community uses it to get frontend artifacts like CSS and JavaScript files. You can get them through NPM of course, but NPM is generally used to get Node modules, like the Angular CLI

package.json bower.json

Visual Studio 2017's Bower support goes even further: you can right click the project and choose Manage Bower Packages…, similar to Manage NuGet Packages, and it will open up a NuGet-like package manager graphical user interface

manage-bowerpackages

From here you can browse and install the required Bower packages. Bower downloads the packages into the "bower_components" folder, and from there we can copy the required files over to our project and add them; but these are manual steps and this approach is not "future proof" (what if we forget to copy the files after updating a package?)

bower-packages

This is where a "Task Runner" comes in; Gulp and Grunt are two popular task runners in the Node community and both are supported in Visual Studio 2017. We get gulp using NPM: add the NPM configuration file, open it up, and add gulp in the "dependencies" section; Visual Studio 2017 will give you IntelliSense. Next, add the Gulp configuration file from Add New Item

npm-intellisense gulpfile.js

In gulpfile.js we can add a task, copy-files, that uses gulp.src piped to gulp.dest to copy the files over from node_modules or bower_components (sketched below). Once the task is in place, we can bind it to the project's build, so the gulp task runs whenever you build your project in Visual Studio

gulp.js
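
A minimal gulpfile.js for such a task might look like this; the source and destination paths are illustrative:

var gulp = require('gulp');

gulp.task('copy-files', function () {
    // copy the Bower-delivered artifacts into the project's Scripts folder
    return gulp.src('bower_components/jquery/dist/jquery*.js')
               .pipe(gulp.dest('Scripts'));
});

With the task in place, Task Runner Explorer can bind it to Before Build so it runs on every build.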

Visual Studio ships with its own Node.js, and we can access it from the Package Manager Console as well, so we don't have to leave Visual Studio. Sadly there is no built-in terminal like Visual Studio Code has, but we can install a third party add-in to get this functionality

npm-install-gulp

Next we will "try" to use Visual Studio 2017 with an ASP.NET backend and Angular

Just "Angular" means Angular 2/2+; it does not mean Angular 1 / AngularJS

Angular needs no introduction; from v2 onwards it has its own Command Line Interface (CLI), similar to what we saw for Ember CLI. Node.js is a prerequisite; once it is installed and configured, simply follow the Angular Quick Start to get Angular through NPM and set up an Angular project through its CLI (ng new yourapp). Once you have the app generated, switch to Visual Studio and your ASP.NET project as routine. Before doing anything, I recommend that you set up the path of your "global" Node.js in External Web Tools and prioritize it, as the Angular CLI needs a more recent version of Node.js and Visual Studio's bundled Node might not work, especially when calling ng build through gulp

external-web-tools

Next, copy the Angular generated files over to your project root, leaving behind any git related files and the node_modules folder. Note that Angular has its own package.json and it will overwrite any existing package.json you might already have in the ASP.NET project. After copying the files, re-add the required packages in package.json; for instance we need to re-add gulp.

angular

Restore the Node.js packages that the Angular CLI and Angular need by right clicking package.json (NPM Configuration) and choosing the Package Restore option; it will take a few moments. Make sure you can use the Angular CLI (ng) from the Package Manager Console by issuing ng --version; if you can, issue ng build and it will compile the Angular app (TypeScript) and generate the dist folder, containing index.html and the JavaScript files of our Angular app. We can now copy the HTML and script references from dist\index.html to the ASPX page where we want the Angular app

ng-build

We can set our ASPX page as the startup page and write a Gulp task that runs ng build and copies the JavaScript files over to the project's Scripts folder from where the ASPX page references them; a sketch of such a task follows

hello-angular
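
A sketch of such a gulp task, assuming the dist and Scripts folder names:

var gulp = require('gulp');
var exec = require('child_process').exec;

gulp.task('ng-build', function (done) {
    // shell out to the Angular CLI; needs the "global" Node/npm on PATH
    exec('ng build', function (err, stdout, stderr) {
        console.log(stdout);
        if (err) return done(err);
        // copy the generated bundles to where the ASPX page expects them
        gulp.src('dist/*.js')
            .pipe(gulp.dest('Scripts'))
            .on('end', done);
    });
});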

We end up having an ASP.NET project that uses the NuGet package manager for the tools and libraries needed to build and run the backend, the NPM package manager to build the Angular based frontend application and run the necessary tasks, and the Bower package manager to keep track of our client side libraries and artifacts.

  • We can now continue to use the Angular CLI, either from the Package Manager Console or from some terminal (third party add-in), and can build the frontend and backend sides in the same project; however, if we want to use Visual Studio as an editor for the Angular CLI generated artifacts, we will have to manually add those artifacts into the project

The code is available in "angular" branch at https://github.com/khurram-aziz/WebApplication47

Data Validation–Vue.js

Data Validation Series

In the previous post we learned how we can do validation on the client side, on the client model or view model, and also on the server using System.ComponentModel.DataAnnotations based attributes, bubbling any errors back to the client side through the web api. We used Knockout.Validation for that work and noted that though it is a duplicated effort, having client side validation brings more interactivity to the forms. Vue.js is another JavaScript framework that has gained quite a lot of popularity. To make this post interesting, I decided to use NPM to get Vue.js and its validation plugin, and chose a plugin that presents an alternative approach to client side validation from what we saw in the Knockout post.

I have selected the VeeValidate plugin for Vue.js; similar to jQuery.Validate it does input field validation and doesn't do Knockout.Validation style model validation, and it is very true to Vue.js' approach of focusing on the template / html where Knockout focuses more on the JavaScript code. It is not available through NuGet; instead we need to get it from the Node Package Manager (NPM), so we need to have Node + NPM installed and configured.

Even though Vue.js is available on NuGet, and we could download VeeValidate from its website or CDN and place it manually in our project's Scripts folder just like we have been doing in the past, similar to NuGet we should embrace Node/NPM; it has a first class experience in Visual Studio Code and it also plays nicely with Visual Studio 2017. If you are using an older Visual Studio we can still use it; I recommend installing "Power Commands", which gives you an Open Command Prompt option.

power-commands

Once you are at the command prompt, you can simply start giving NPM commands: npm init to generate the package.json file, followed by npm install vue --save to install Vue.js and npm install vee-validate --save for VeeValidate

You will end up with the required JavaScript libraries in the dist folders of their respective packages under node_modules. Showing all files in the project, you can find them and copy them over to the "Scripts" folder manually (for the time being). Optionally you can also include the package.json file that NPM uses in the project.

Given we have copied the files and included them in the project, we don't "yet" need npm as a prerequisite for compiling/building the project; it was just a lengthier but official way to get the files, and we can stay updated this way in the future.

Once we have the required files in place, we can code the web page using Vue.js and VeeValidate comparable to what we did with Knockout. It will be something like this (a condensed sketch follows the notes below)

  • Notice how VeeValidate uses the v-validate attribute on the input fields to define the rules; the rules are VeeValidate specific and you can learn them from the VeeValidate website
  • We can have two rules in v-validate; the rules are separated by |
  • VeeValidate uses EcmaScript 6 Promises, which do not work in IE, so we have to polyfill them
  • For the captcha, we need to write a custom VeeValidate rule according to its spec; my custom validation rule code is a little different from what their documentation says, because I wanted my page / script to work in IE11 and had to work around the Promises so IE11 doesn't complain
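
A condensed sketch of the approach (field names and rules are illustrative, not the exact page from the repository):

<div id="app">
    <input type="text" name="email" v-model="email" v-validate="'required|email'" />
    <span>{{ errors.first('email') }}</span>
    <button v-on:click="submit">Submit</button>
</div>
<script>
    Vue.use(VeeValidate);
    new Vue({
        el: '#app',
        data: { email: '' },
        methods: {
            submit: function () {
                // validateAll returns a Promise; polyfill it for IE
                this.$validator.validateAll().then(function (valid) {
                    if (valid) { /* post the view model to the web api */ }
                });
            }
        }
    });
</script>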

As you have seen, VeeValidate's approach is different from Knockout.Validation's, and it might suit better when the web designer writing the html can understand and live with the additional attributes. The drawback is that the model is left unvalidated; if it is hydrated, passed around or submitted to a web api without UI binding, the fields can go unchecked and only the server side validation will work. Further, we cannot change validation rules at runtime so easily; in the Knockout example, for instance, the location was made required at runtime using a button, because all the validations are added through JavaScript and we can add/remove rules easily. So which approach do you prefer?

The code is available at https://github.com/khurram-aziz/WebApplication46


Angular JS

AngularJS is the famous JavaScript framework for Single Page Applications from Google. The v1.x versions are now called AngularJS and v2 onwards is called Angular or Angular 2/2+. It is also the frontend part of the "MEAN" stack, where MongoDB is used as the data store, Node as the server plus toolchain, Express as the web framework and Angular as the frontend framework. In this post, however, we will stick with our good old ASP.NET for the backend! Angular 2 has breaking changes and code written for AngularJS will not be compatible with Angular 2+, so use AngularJS in new projects with caution. For small projects and enterprise scenarios I personally still prefer AngularJS, because it is much simpler to get started with: there is no heavy stack requirement or need to change any build pipeline, and one can start using it in existing projects / code bases just like other lightweight JavaScript frameworks/libraries such as Knockout. Angular 2+ is a totally different beast, on the other hand

In an ASP.NET project we can get AngularJS using NuGet; the framework comes as a couple of libraries, available on NuGet as the respective AngularJS.* packages. For the first experiment we just need the AngularJS.Core and AngularJS.Route packages. For an AngularJS application we add AngularJS specific attributes to html tags, usually starting with ng; for a simple application we need to add the ng-app attribute giving it some name; this name becomes the application name that we refer to when declaring the variable representing our application in JavaScript using angular.module(). AngularJS.Route helps us make our application a single page application (SPA); it can load and navigate around without page reloads. In AngularJS terms the module is called ngRoute, and we mention it as a requirement when declaring the variable of our application. Afterwards we can call ourVariable.config and it will give us the $routeProvider variable; Angular does this with injection, so we have to name the variable accordingly to get our hands on the router. Once we have its reference we can call its when() api and tell the router which HTML to load for which URL. The URLs are referred to in the HTML as #!/something and in the when() method as /something. Here is one simple AngularJS.Route based JavaScript application (in sketch form below); instead of loading external html pages I am using inline templates, giving them ids matching the htm page urls, so instead of Angular hitting the server looking for external resources it displays the content from the templates, giving ASP.NET MultiView like functionality but all on the client side. These templates need to be inside the ng-app html tag to work
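
In sketch form, with illustrative names:

<div ng-app="helloApp">
    <a href="#!/home">Home</a> <a href="#!/about">About</a>
    <div ng-view></div>
    <!-- inline templates; ids match the templateUrl values so no server trip is made -->
    <script type="text/ng-template" id="home.htm"><h2>Home</h2></script>
    <script type="text/ng-template" id="about.htm"><h2>About</h2></script>
</div>
<script>
    var app = angular.module('helloApp', ['ngRoute']);
    app.config(function ($routeProvider) {
        $routeProvider
            .when('/home', { templateUrl: 'home.htm' })
            .when('/about', { templateUrl: 'about.htm' })
            .otherwise({ redirectTo: '/home' });
    });
</script>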

If you are working in Visual Studio 2013, you can improve the JavaScript IntelliSense experience by installing the AngularJS.Intellisense NuGet package; sadly it is no longer being updated and only supports AngularJS 1.3. What you can do is install the package first, extract the angular.intellisense.js file out of it, and then uninstall it; install the AngularJS NuGet package in routine and manually place angular.intellisense.js alongside your angular.js file, and Visual Studio 2013 will pick it up.

angular-intellisense

For the next example we will integrate our AngularJS app with the ASP.NET Web Api backend. An AngularJS app can have different sub sections with their own controllers; in this example we will define a div with ng-controller set and then create the associated Angular controller using module.controller(). An ng-init attribute can be defined along with ng-controller, giving the method name that gets called on initialization, similar to Page_Init; we can use this to load the initial required data from the server through Web Apis. We will also need $scope and $http, for which we don't need to include any additional library file (like the router earlier); these get injected automatically and all we need to do is declare properly named parameters in the respective controller() call. We use $scope to declare the variables and methods that we need in the application and "consume" them accordingly in the html. The $http module lets us communicate with the http web apis. Before AngularJS 1.4, $http.get had success and error methods; now we have then(), which takes two delegates, the first for success and the second for error. The response variable is passed to these delegates, and from it we can access the data, status and headers objects. We can also use any other existing client code for communicating with the web api, for instance jQuery.ajax(). Lets first add a few methods to our Web Api Controller

webapi

Here's the code of our simple master-detail application (sketched below after the notes)

  • Note that loadStates() returns a string array and see how it is bound to the drop down, while loadCities() returns an array of objects from which we bind the city (the string part) for the drop down options. Note how data-ng-options and data-ng-model are used for binding the lists and getting back the selected value
  • Note how the selected state is passed to loadCities() using params
  • Note that button clicks would cause post backs; that is why we pass the special variable $event from the view back to our functions, where we simply call preventDefault() so postbacks don't happen
  • Note how ng-show is used to bind the visibility of the "detail" portion of the form
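
A condensed sketch (url and method names are assumptions):

app.controller('masterDetail', function ($scope, $http) {
    $scope.states = [];
    $scope.cities = [];

    $scope.init = function () {                    // wired up via ng-init
        $http.get('api/geo/states').then(
            function (response) { $scope.states = response.data; },
            function (response) { alert('Failed: ' + response.status); });
    };

    $scope.loadCities = function ($event) {
        $event.preventDefault();                   // no postback please
        $http.get('api/geo/cities', { params: { state: $scope.selectedState } })
             .then(function (response) { $scope.cities = response.data; });
    };
});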

The code is available at https://github.com/khurram-aziz/WebApplication46


Data Validation–Knockout

Data Validation Series

In the previous posts we learned how we can do client side validation using jQuery.Validation against a server side .NET model having System.ComponentModel.DataAnnotations attributes. There are two issues. First, a lot of spaghetti html code is generated "automagically", which web developers don't like; they want more control over the html, or better, they want to hand craft the html and JavaScript so their tools give them complete visualization and IntelliSense and they can tweak things to their liking. Second, the validation is at the UI level: the html input controls are being validated, and if you want a client side "model" (in JavaScript) that you can use with some client side framework (MVC/MVVM etc) and validate before calling some Web Apis, you need "something else", "something more" and "something modern" (JavaScripty)

For this post I am going to use Knockout and will try to implement the client validation rules on the Knockout view model object; here's the model that we will be working with.

First we need a Web Api Controller; as an example we are just considering the "Create" case from CRUD, and I am not going to write any data layer code for the sake of simplicity.

webapi

  • I have used CamelCasePropertyNamesContractResolver for the Newtonsoft Json serialization, so we can have our "view model" in client side JavaScript code in camel case (JavaScript naming) while the server side class continues to be Pascal Case (C# naming)
  • I am returning BadRequest along with the ModelState object when the model state is invalid; we can handle this "exception" / error at the client side to display the model validation errors from the server

We will be using the Knockout.Validation NuGet package, a validation plugin for Knockout; installing its NuGet package will also install the knockoutjs package, which you should update afterwards to the latest version. For the visualization we are going to use Knockout Template Binding, so the errors can be displayed alongside the particular input control. Further, a variety of validation rules are defined; similar to our server side validation rules, but varied enough to cover good ground and give a feel of what the Knockout.Validation plugin has to offer. To display the server side model state errors, I have added an array and am populating it accordingly

ko-displaying-serverside-model-errors

Once we have the model errors in the array, we can display them through Knockout. For the rest of the implementation details I am attaching the complete html and JavaScript code below (a condensed sketch follows the notes)

  • The captcha and password verification rules show how you can use JavaScript based validation logic
  • First name, if not entered, will only be checked at the server side, so we can confirm that the model state errors from the server are working
  • Note that age is marked required using HTML5 and not in Knockout.Validation; not all browsers will support it, but it will get caught at the server
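
A condensed sketch of the view model side (fields and rules are illustrative):

function UserViewModel() {
    var self = this;
    self.firstName = ko.observable();            // checked at the server only
    self.email = ko.observable().extend({ required: true, email: true });
    self.password = ko.observable().extend({ required: true, minLength: 6 });
    self.serverErrors = ko.observableArray([]);  // ModelState errors land here

    self.save = function () {
        var errors = ko.validation.group(self);
        if (errors().length > 0) { errors.showAllMessages(); return; }
        // post to the web api; on a 400 response push the returned
        // ModelState messages into self.serverErrors for display
    };
}
ko.applyBindings(new UserViewModel());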

The good thing about Knockout.Validation is that all the "validation logic" is "defined" by hand in the JavaScript; so if the web development team has web designers and JavaScript developers, they can work seamlessly together with a good separation of concerns. However, this is duplicated effort; we now have to maintain the validation logic not only at the backend but also at the frontend; but then this is something we have to do anyway. If we don't want to duplicate this effort, given the Knockout.Validation plugin is not polluting the HTML, we can either have an HtmlHelper style helper in MVC or Web Form Controls that generate the required JavaScript code extending the Knockout view model with validation rules; a topic for a separate post perhaps. Another approach can be template based code generation that generates the HTML + Knockout JavaScript from the server side model, which web designers and developers can further tweak or enhance to their liking.

The code is available at https://github.com/khurram-aziz/WebApplication46


Data Validation–Web Forms

Data Validation Series

In the previous posts we learned how we can do client side validation using System.ComponentModel.DataAnnotations based attributes on server side .NET classes. These techniques emerged in the ASP.NET MVC stack, but the attributes were made part of the .NET framework, and with .NET 4.5 we can use them in web forms as well. Lets take this UserModel server side class with data annotations as an example

usermodel

Since .NET 4.5, web forms data controls also support MVC style model binding; we no longer require "data source" controls and can instead specify CRUD methods directly. For testing, lets take a FormView and use its "Insert View" with Select and Insert methods defined in its code behind. We can still use the designer: create a "temporary" object data source with the code behind class' CRUD methods and bind the formview to it to generate the templates, then simply remove the object data source and use the new .NET 4.5 functionality, specifying the model and CRUD methods on the formview directly. All we need to add is the ValidationSummary control at the top and our data annotation based validation will work; we can use this.ModelState.IsValid after calling this.TryUpdateModel in the code behind to check whether the model conforms to the validation rules. We will end up with something like this

formview
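
The code behind methods, in sketch form (names are illustrative):

public UserModel GetUser()
{
    return null; // insert view; nothing to select
}

public void InsertUser()
{
    var model = new UserModel();
    this.TryUpdateModel(model);     // applies the data annotation rules
    if (this.ModelState.IsValid)
    {
        // persist the model; otherwise ValidationSummary lists the errors
    }
}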

This is only server side validation, and the ValidationSummary control will list the model validation issues when the form gets submitted to the server. What if we want client side validation? We can use the jQuery based unobtrusive validation techniques just like MVC. First we need to add the Microsoft.jQuery.Unobtrusive.Validation NuGet package, which has a dependency on jQuery.Validation; this client side JavaScript library from Microsoft enables defining additional attributes on the html input controls and uses this metadata to validate the input controls using jQuery.Validation and jQuery. However, our formview template also needs updating: we need to add the required metadata using data-val* attributes for this client side validation to work. We can either learn about these additional attributes and manually add them according to our server side model rules, or we can simply use a "Dynamic Data" control that does this automatically; all we need is to add the DynamicDataTemplatesCS NuGet package and use the DynamicEntity control in the formview template. Also add the required JavaScript libraries

DynamicDataTemplateCS

  • Update the jQuery.Validation NuGet package after Microsoft.jQuery.Unobtrusive.Validation to fix the JavaScript errors
  • We can also use the ASP.NET bundling feature, creating a bundle of these JavaScript libraries and adding it instead; this MVC feature is also available in Web Forms post 4.5

If we check the generated code, we can learn about the data-val* attributes that the Dynamic Data control has automatically generated for us from the server side model attributes. It also attaches a warning span to each input control, made visible when its associated input control violates a validation rule. That is quite a lot of monkey html code that we didn't have to write!

data-val


AVL Tree

For a recent project I needed an AVL Tree inspired data structure. The base class library doesn't expose anything that could be extended, so I set up a Test Project to establish some building blocks. The AVL Tree is a self balancing binary search tree, the first of its kind. The basic idea is that after adding a node into the Binary Search Tree, the parent node is checked recursively up to the root node to see whether its left and right sub-trees have similar heights, and if not, the required rotation is done to balance them. The height of a node is its distance, in terms of child generations, from its deepest descendant. Here is the first test case that our AVL Tree needs to satisfy; using Visual Studio's auto class/method generation we end up with such an AvlTree class.

testclass

Given an AVL Tree is a specialized Binary Search Tree, and a Binary Search Tree is a specialized Binary Tree, lets set up this class hierarchy and make these classes Generics aware

classes-1

Now that we have the signatures of the required classes, lets start adding some code into them. First of all, for BinaryTreeNode<T>, lets add a Parent property that will be helpful later. Lets also add a Height property that tells the number of generations that exist as children and grand children of that specific node. It can be easily coded using recursion.

binarytreenode
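
In sketch form, the recursive Height (0 for a leaf) looks like this:

public int Height
{
    get
    {
        // a leaf is 0; otherwise one more than the taller child
        int left = Left != null ? Left.Height + 1 : 0;
        int right = Right != null ? Right.Height + 1 : 0;
        return Math.Max(left, right);
    }
}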

For the BinaryTree<T> abstract class we can add BreadthFirst and DepthFirst methods for traversing the binary tree. I am using C#'s Action<T> delegate so we can reuse them in the same class for Clear() and in inherited classes.

binarytree

For BinarySearchTree<T> we need to override Contains and Remove from our abstract BinaryTree<T> class. Also note that we have constrained "T" to implement IComparable, so that we can add items into our Binary Search Tree (BST) according to the BST algorithm. The Add(T) method is kept virtual so we can override it in a subclass, AvlTree, later; this method returns the newly added node. The rest is implementation detail that one can learn from Data Structure books or Wikipedia; I am also sharing the whole project through GitHub so you can look it up there as well

For the AvlTree, BreadthFirst() for now just calls BreadthFirst from the BinaryTree<T> base class with a delegate that populates a List and returns it as an array. This is not a production ready approach, as it duplicates the data; we can / should implement the Iterator pattern instead. For Add, we simply call the base class Add that inserts the data into the tree and returns the node. The parent of the newly added node now needs to be checked and balanced up to the root node for the AVL Tree to work as expected; we will do that next.

bst

For the AvlTree we need a balanceFactor(node) that tells us whether the given node is "left heavy" or "right heavy"; it can do this easily using the "Height" property we have already established. If balanceFactor() returns > 1 the node is left heavy, and if it returns < -1 it is right heavy and we need to "balance" it using rotateLeft, rotateRight, rotateLeftRight or rotateRightLeft. If the balance factor is -1, 0 or 1, the difference in height between left and right is at most one generation and we can ignore it. These are AVL tree technical specifications that one can read about on Wikipedia etc.

avl-1
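
A sketch of balanceFactor using the Height property; the sign convention follows the post (positive means left heavy):

int balanceFactor(BinaryTreeNode<T> node)
{
    // left height minus right height; > 1 left heavy, < -1 right heavy
    int left = node.Left != null ? node.Left.Height + 1 : 0;
    int right = node.Right != null ? node.Right.Height + 1 : 0;
    return left - right;
}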

  • Single rotations (rotateLeft and rotateRight) are done to the given node.
  • Double rotations are done to a child node and then the given node: for rotateLeftRight we rotateLeft the left child first and then rotateRight the given node, and for rotateRightLeft we rotateRight the right child and then rotateLeft the given node

For a single rotation we select the pivot node and then swap the parents and children according to the rotation type. Double rotations can be implemented by calling the single rotation methods accordingly.

avl-2

  • The parent of the node being rotated has a child link to it that becomes dormant after the rotation. We are not setting this link in the rotation methods; instead it is left as the responsibility of the caller of these methods, as they "know" whether to change the left or right child of the parent and can reference the parent node easily before calling the rotation methods

Now for balance(); we need to call balanceFactor() to check whether the node is left or right heavy; a sketch follows the notes below.

  • If it is left heavy and the balanceFactor of its left node is also left heavy, we simply rotateRight; else we need to rotateLeftRight
  • If it is right heavy and the balanceFactor of its right node is left heavy, we rotateRightLeft; else we simply rotateLeft

We also need to set the appropriate child of the parent after the rotation, and we need to keep balancing the parent nodes recursively until we reach the root node. The root node also needs to be balanced if required

avl-3
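
Here is a sketch of balance() following the rules above; it assumes the rotate* methods return the new sub-tree root and that the tree keeps a Root reference:

void balance(BinaryTreeNode<T> node)
{
    if (node == null) return;
    var parent = node.Parent;
    bool wasLeft = parent != null && parent.Left == node;

    var newRoot = node;
    int factor = balanceFactor(node);
    if (factor > 1)        // left heavy
        newRoot = balanceFactor(node.Left) > 0 ? rotateRight(node) : rotateLeftRight(node);
    else if (factor < -1)  // right heavy
        newRoot = balanceFactor(node.Right) > 0 ? rotateRightLeft(node) : rotateLeft(node);

    if (newRoot != node)   // fix the dormant child link; the caller's responsibility
    {
        if (parent == null) Root = newRoot;
        else if (wasLeft) parent.Left = newRoot;
        else parent.Right = newRoot;
    }
    balance(parent);       // keep balancing till, and including, the root
}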

As you can see, using modern languages and their framework libraries we can implement such classic data structures and build upon them toward more exciting ones. The project is available at https://github.com/khurram-aziz/HelloDataStructures


MVC5: Minimal Provider for ASP.NET Identity

ASP.NET once came with Membership and Role Providers that we used and abused in the past; then came the Simple Membership Provider with Razor and WebMatrix that exposed a simpler API but still used the same providers behind the scenes. Finally, with Visual Studio 2013 and ASP.NET 4.5.1, ASP.NET Identity was introduced, offering a modern replacement for these old providers. ASP.NET Identity has a modern API and exposes pluggability at different levels. It uses Entity Framework, and if you want to store your users in the database of your choice, you just need to replace the SQL Entity Framework provider that gets configured out of the box with the provider for your database. You can also go deeper, implement the required interfaces, and use your own classes that don't use Entity Framework; in this post we will do exactly this and explore what minimal classes we need to get things going. We have a pet MVC project that always becomes the test bed for anything MVC related. It has already gone through MVC3 and MVC4 upgrades, and it was time to replace its Membership and Role providers with ASP.NET Identity and upgrade it to MVC5

For the MVC5 upgrade I recommend creating a new separate project, as the new template comes with Bootstrap goodness, and then simply copying over your models, views and controllers. We availed this opportunity to replace the old ASPX views with newer Razor based CSHTML views, cleaning up some old mess.

If you have created a new MVC5 project, you will need to delete ApplicationUser and ApplicationDbContext from the Models namespace (if you intend not to use the Code First Entity Framework that gets configured by default out of the box). You also need to delete the ApplicationUserManager class from App_Start\IdentityConfig.cs. For our implementation we need to implement IUser<T> and IUserStore<T> and set up UserManager and SignInManager classes. This is very well documented in Overview of Custom Storage Providers for ASP.NET Identity. The minimal required code will look something like this
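
In sketch form (type names are illustrative and persistence is stubbed out):

public class AppUser : IUser<string>
{
    public string Id { get; set; }
    public string UserName { get; set; }
}

public class AppUserStore : IUserStore<AppUser>
{
    public Task CreateAsync(AppUser user) { /* save the user */ return Task.FromResult(0); }
    public Task UpdateAsync(AppUser user) { return Task.FromResult(0); }
    public Task DeleteAsync(AppUser user) { return Task.FromResult(0); }
    public Task<AppUser> FindByIdAsync(string userId) { /* look it up */ return Task.FromResult<AppUser>(null); }
    public Task<AppUser> FindByNameAsync(string userName) { return Task.FromResult<AppUser>(null); }
    public void Dispose() { }
}

public class AppUserManager : UserManager<AppUser>
{
    public AppUserManager(IUserStore<AppUser> store) : base(store) { }

    // signature expected by app.CreatePerOwinContext in Startup.Auth.cs
    public static AppUserManager Create(IdentityFactoryOptions<AppUserManager> options, IOwinContext context)
    {
        return new AppUserManager(new AppUserStore());
    }
}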

Once these files are in place, we need to use their Create() methods in App_Start\Startup.Auth.cs for the app.CreatePerOwinContext calls instead of the original User and SignIn Managers. We will also need to rename the Manager classes in Controllers\AccountController and Controllers\ManageController

Email Confirmation

If you want to have the Email Confirmation option, you should read Account Confirmation and Password Recovery with ASP.NET Identity (C#), where it is documented. For this to work, your store needs to implement IUserEmailStore<T> and IUserTokenProvider<T, U>. You will also need a helper class implementing IIdentityMessageService for Email and SMS, and to add this code into the UserManager's static Create() method

manager.RegisterTwoFactorProvider("Phone Code", new PhoneNumberTokenProvider<YourAppUser> { MessageFormat = "Your security code is {0}" });
manager.RegisterTwoFactorProvider("Email Code", new EmailTokenProvider<YourAppUser> { Subject = "Security Code", BodyFormat = "Your security code is {0}" });
manager.EmailService = new YourEmailService();
manager.SmsService = new YourSmsService();
var dataProtectionProvider = options.DataProtectionProvider;
if (dataProtectionProvider != null)
{
  manager.UserTokenProvider = userStore; //userStore instance thats implementing the IUserTokenProvider
}

You will also need to change the code in AccountController accordingly for the Register and Login scenarios, add the required new views, and change the existing Register and Profile views for Email / SMS

  • ASP.NET Identity was open sourced and the code is available at https://aspnetidentity.codeplex.com; being open source, you can always look at the code and fix your provider accordingly
  • Having your own UserManager and UserStore, you have a choice: either override UserManager's methods (they are virtual) and hook them directly to your additional code in the UserStore, or implement the required interfaces. For instance, when creating a user, the UserManager base class demands that the Store also implement IUserPasswordStore<T>; if you want to avoid this, you can simply override the Create() methods of the base UserManager and call your UserStore methods directly. Similarly you can avoid Token Providers and override the Email/Sms methods. The point is, you have the choice to either override the required UserManager methods or implement the required interfaces; having the code from CodePlex helps to peek into what's happening in the base classes
  • Don't confuse it with ASP.NET Identity Core for ASP.NET Core, which is available at https://github.com/aspnet/identity

Dotnet Core :: PostgreSQL

Dotnet Core Series

Given Dotnet Core can run on Linux, in this series we have been exploring different aspects of having a microservice based, containerized Dotnet Core application running on Linux in Docker containers. SQL Server has been the de facto database for .NET applications; there even exists SQL Server for Linux (in public preview form at the time of this post) and there is even an official SQL Server for Linux image for the Docker Engine that we can use, connecting our existing beloved SQL tools to it; but it needs 3.25GB of memory and it is an evaluation version.

db-sqlserver-docker

db-pg-pgadmin

PostgreSQL is an ACID compliant, transactional object-relational database available for free on Windows, Linux and Mac. It is not as popular as MySQL, but it provides several indexing functions, asynchronous commit, an optimizer, and synchronous and asynchronous replication that make it a technically more solid choice. Given it is available for Windows, we can install it on our Windows development machines along with pgAdmin, the SQL Server Management Studio like client tool.

Entity Framework Core

Entity Framework Core is a cross platform data access technology for Dotnet Core. It is not EF7 and is not EF6.x compatible; it was developed from scratch and supports many database engines through Database Providers. Npgsql is an excellent .NET data provider for PostgreSQL (see their GitHub repositories) and it supports EF Core. All you need to do is install the Npgsql.EntityFrameworkCore.PostgreSQL NuGet package using the dotnet CLI (dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL); it will bring the EF Core and Npgsql libraries along into your project.

We can now write our entity classes and a DbContext class. For Npgsql, in the OnConfiguring override we use UseNpgsql instead of UseSqlServer with the DbContextOptionsBuilder, passing the required PostgreSQL connection string. Here's one entity class and context file I made for testing!
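
In sketch form (names and the connection string are illustrative):

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ECommContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // UseNpgsql comes from the Npgsql.EntityFrameworkCore.PostgreSQL package
        optionsBuilder.UseNpgsql("Host=localhost;Database=ecomm;Username=postgres;Password=secret");
    }
}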

We can use the context class with all the LINQ goodness, similar to SQL Server; for instance, here's the controller class

And here’s displaying the products in the View

  • If the entities are in a different namespace, either import the namespace in the web.config file in the Views folder, or add the namespace in the particular View by adding a using at the top: @using ECommMvc.Models;

Entity Framework Core Command Line Tools

The EF Core .NET Command Line Tools extend the Dotnet CLI, adding ef commands to dotnet. We need to add the Microsoft.EntityFrameworkCore.Tools.DotNet and Microsoft.EntityFrameworkCore.Design NuGet packages using dotnet add package, dotnet restore them, and then reference Design as a PackageReference and Tools.DotNet as a DotNetCliToolReference; you should end up having the dotnet ef commands in the project

db-eftools

Using the ef commands we can add a Migration (dotnet ef migrations add Migration-Name), remove it, update the database (dotnet ef database update) and more. Once we have Migrations in place, we can continue to evolve our entities and database accordingly.

Seeding

Using Migrations we can seed our database as well: we can create a Migration named Seed and add the required seeding code in the migration's CS file

When deploying into Docker containers we often need a "side kick" container that "seeds" the cache or database (for details see Dockerizing PHP + MySQL Application Part 2), as when a container is started we get a clean slate. Given the Migrations code becomes part of the MVC project, and in .NET Core there is a Program.cs entry point where Kestrel / MVC is initialized, we can add more code there as well: we can use the Context that is in place along with the Migrations and update the database (which will initialize and seed it)
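
A sketch of the idea; Migrate() applies any pending migrations, including our Seed migration, before the web host starts:

public static void Main(string[] args)
{
    using (var db = new ECommContext())
    {
        db.Database.Migrate();   // creates / updates / seeds on a clean slate
    }
    // ...then build and run the Kestrel web host as the template generated it
}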

Docker

Now, with the database work in place and the Docker building techniques shown in previous posts (Redis Clients -- ASP.NET Core and Jenkins), we can write a v2 Compose file or a v3 Compose file (for Docker Swarm) and deploy our .NET Core MVC application, which uses Redis for caching and PostgreSQL as the database, into Docker containers running on Linux node(s)

Project is available at https://github.com/khurram-aziz/HelloDotnetCore

Jenkins

jenkins-logo

Docker Swarm Series

Dotnet Core Series

Jenkins is an open source "automation server" developed in Java. It is more than a typical build system / tool; its additional features and plugins can help automate the non-human parts of the software development process. It is a server based system that needs a servlet container such as Tomcat. There are a number of plugins available for integration with different version control systems and databases, for setting up automated tests using the framework of your choice (including MSTest and NUnit), and for doing a lot more during the build than compiling the code. If Java and servlet containers are not your "thing", running Jenkins in Docker provides enough of a black box around them that getting started with it couldn't be simpler. There is an official image on Docker Hub and we just need to map its two exposed ports: one for the web interface and the other that additional agents use to connect to the server. All we need to run an instance is docker run -p 8080:8080 -p 50000:50000 jenkins; we can optionally map the container's /var/jenkins_home to some local folder as well. To learn about more options that can be set using environment variables, like JVM options, visit https://hub.docker.com/_/jenkins

We can use Jenkins to build Dotnet Core projects; all we need is to install the Dotnet Core SDK on the system running Jenkins. For the container, we can extend the official image and write a Dockerfile to install the required things; but first we need to check which Linux distribution the jenkins image is based on, and for that, do this

jenkins-image-linux

Knowing that it is Debian 8 and that it runs things under the jenkins user, lets make a Dockerfile for our custom Jenkins docker image

  • Note that we used the information from https://www.microsoft.com/net/core#linuxdebian on how to install the latest Dotnet Core SDK on Debian
  • Note that we added the git plugin as per the guidelines at https://github.com/jenkinsci/docker where this official image is maintained; they provide the install-plugins.sh script, so we can install plugins while making the image and will not have to reinstall them when running Jenkins

If we build this Dockerfile and tag the image as dotjenkins, we can run it using docker run --rm -p 8080:8080 -p 50000:50000 dotjenkins; running on a console is required so we can get the initial admin password from the info that gets emitted; it is required by the first run setup wizard when we open http://localhost:8080

  • Visit Dockerfile if you need a heads up on how to build a Docker image from such a file
  • During the setup you can choose Custom Plugins and then select none, and we will have a minimalist Jenkins ready to build a Dotnet Core project from a Git repository

You can set up a test project, giving https://github.com/khurram-aziz/HelloDocker as the Git path; it has a Dotnet Core MVC application

jenkins-git

jenkins-build-shell jenkins-build-shell-step

We can then use the Execute shell build step type and enter the familiar dotnet restore and dotnet build commands to build the Dotnet Core application with Jenkins

Once the test job is set up, we can build it; it will download the code from Git and build it as per the steps we have defined. We can see the console output from the web interface as well!

jenkins-build-consoleoutput

If you are following the Dotnet Core Series: in the last post, Docker Registry, we also needed to build the mvcapp Docker container after publishing the Mvc application. In that post the developer had to have Docker for Windows installed, as to build a Docker image we need the Docker daemon; the developer also needed access to the Docker Registry so the Mvc application could be pushed as a Docker container image, from where the Swarm nodes pick it up when the system administrator deploys the "Stack" on the Docker Swarm. We can solve this using Jenkins: it can not only automate this manual work, but we will also neither need Docker for Windows on the developer machine nor need to give the developer access to the Docker Registry.

To build a Docker container image from Jenkins running in a Docker container, we first need to assess technically how it can be done.

  • We need to study the Dockerfile of Jenkins at its GitHub repo, as it creates the jenkins user with uid 1000 and runs things under this user
  • We will need the Docker CLI tools in the container
  • We will need access to the Docker daemon from within the container so that it can build the Docker images using that daemon

Lets make a separate Dockerfile first to assess this technically, without the Jenkins overhead.

If we build and tag the above Dockerfile as dockcli, we can run it as docker run -v /var/run/docker.sock:/var/run/docker.sock -it dockcli

  • Note that we exposed the /var/run/docker.sock file as a VOLUME in the Dockerfile, so we can map in the docker.sock file of the Docker daemon where the container is launched; this way we don't need to run a Docker daemon inside the container and can "reuse" the daemon where our image runs. There exists a "Docker in Docker" image with which we can run a Docker daemon inside a container, but we don't need it here
  • We created a jenkins user similar to how it is made and configured in the official Jenkins image
  • We need to add jenkins to sudoers (and for that we also need to install sudo first) so we can access docker.sock using sudo; otherwise it will not have enough permissions

With these arrangements we can access the Docker daemon from within the Docker container!

jenkins-sudo

Now lets add this work back to our dotjenkins Dockerfile, so we can create the Docker image after the Dotnet Core build in Jenkins. Here's the final Dockerfile

Lets run our customized Jenkins image using docker run --name jenk --rm -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dotjenkins

We can now add a Docker image creation step to our project's build steps

jenkins-build-dockerstep

  • Note the usage of a Jenkins variable as the tag for the image we are creating

If we do a Jenkins build, we will see the Docker step output in the Console Output, and shortly afterwards we will have the required image in the Docker daemon where the Jenkins container is running

jenkins-build-dockeroutput

We can use docker-compose to define all the related switches needed to run the docker image, so it becomes a simple docker-compose up command. Along the same lines, we can now add an additional step that pushes the generated Docker image to the Docker Registry!


Docker Registry

Docker Swarm Series

Dotnet Core Series

In the "Redis Clients :: ASP.NET Core" post we made a minimalist ASP.NET Core MVC application that uses Redis, along with a v2 Docker Compose file that we used to deploy the application on Docker as two containers: one running the Redis server and the other running the ASP.NET Core application. Even though we used Redis and distributed caching, our application was still deployed on a single host. Given it is based on a microservice, Docker friendly architecture, we can write a v3 Compose file and, using the Stack Deploy command, deploy it on a multi-host Docker Swarm setup. Here's one such v3 compose file (a sketch follows the note below)

  • If you are following the Dotnet Core Series, you might have noticed that the build and restart options are gone from the v2 compose file we made in the previous post; this is because they are not supported when deploying to a Docker Swarm
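
A sketch of such a file (service and image names follow the post):

version: "3"
services:
  redis:
    image: redis
  mvcapp:
    image: mvcapp
    ports:
      - "80:80"
    deploy:
      mode: global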

Before doing the stack deploy, given we need a "custom" image, we need to ensure this image exists with the Docker daemon; for this we obviously need to first publish our Dotnet Core app, and then we can build the Docker image from the Dockerfile against the Docker daemon.

registry-docker-build

  • I used docker-compose to build the image above, as I have the Docker client and Docker Machine in the Linux Subsystem configured against the Swarm; given the Linux Subsystem is still "beta", due to https://github.com/Microsoft/BashOnWindows/issues/1123 we cannot build the image against a remote Docker daemon (a Swarm node in this case); however docker-compose works fine, and we can use it to build the image

Once our custom image is made, we can do the Stack Deploy using the v3 Compose file we made earlier

The mvcapp container needs to be globally deployed, but it gets deployed only on the one node against which we built it:

registry-stack-deploy

We need to make the mvcapp image available to the remaining participating nodes as well; we either build the image against each Docker daemon of the Swarm nodes, or we set up a "Docker Registry" where we make our image available, then update the v3 compose file and redeploy the stack, and all the swarm nodes will get the image from the "Registry". There exists an open source Docker Registry as an official Docker container image, and running it is just a docker run away

registry-docker-run

There is an excellent TL;DR at https://docs.docker.com/registry; you basically tag the existing custom image prefixing the registry url (in our case we tag mvcapp as 192.168.10.14:5000/mvcapp) and then push it, and the Docker daemon will upload the image to the Registry, similar to Docker Hub! But it would not work

registry-https-error

The problem is that our Registry is not "secured", and the Docker daemon by default only likes secured remote registries; but we can add such unsecured remote registries as "trusted" in the docker daemon configuration. How depends on the setup you are using; for instance I am using RancherOS to run the Docker engines, and to add the remote registry I have to do this:

Once the registry is added, we can push our registry tagged image, from where the other nodes can download and consume it when required

registry-insecure-registry

All we need to do now is update our v3 Compose file for the Docker Stack Deploy to use the our-registry/mvcapp image instead of just mvcapp, so that all nodes can get it from the "our-registry" address

  • Notice the mvcapp image is changed accordingly
  • Notice the newly added healthcheck section; it is there because the MVC app will break if the Redis container is not available. This can happen when the node running the Redis container goes down: the Swarm will reschedule the Redis container somewhere else, and thanks to this healthcheck the MVC containers will also get rescheduled, picking up the new Redis IP. This is needed because the MVC app talks to Redis using an IP and not a host name; details are in the Redis Clients -- ASP.NET Core post.

Lets redeploy the stack, and we will have our mvcapp containers running on all the participating nodes, each node downloading the required image from the Registry automatically

registry-stack-mvcapp

Now, to try failure recovery, lets turn off the swarm3 node; Docker Swarm should be able to recover automatically

registry-swarm3-failure

Check https://docs.docker.com/registry/recipes for other use cases of Docker Registry

If we are using Docker for Windows, we can add our insecure Registry into its daemon; once added, we can build a docker container image tagged directly for the registry and push it from the development machine/environment, from where it can be picked up by the Swarm nodes accordingly, giving us a seamless deployment-to-cluster experience!

registry-docker-for-windows


Redis Clients :: ASP.NET Core

Redis Series

Dotnet Core Series

In the "Redis Clients" post we explored what it takes to use Redis and how it can be helpful in our applications; we also utilized Redis datatypes and abstractions for the page / visitor counters required in web applications. In the "Dotnet Core" post we saw that, given the open source version of Dotnet now works on Linux, we can deploy Dotnet Core applications into Docker containers. We even made a simple ASP.NET Core application and connected it to Redis, experiencing how Redis can serve as the distributed cache backend using the Microsoft.Extensions.Caching.Redis.Core NuGet package, which uses the StackExchange.Redis client library.

They also open sourced the MVC framework, and we can set up an ASP.NET Core MVC project using the Dotnet Core CLI: dotnet new mvc. We can have middlewares that handle the requests and responses; you can learn more about middlewares at https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware, and there is a StartTimeHeader middleware at https://docs.microsoft.com/en-us/aspnet/core/performance/caching/distributed that uses the distributed caching feature of ASP.NET Core with Redis as the backend. Along the same lines, for the visitor counter using Redis that we did in "Redis Clients", we can have a RedisVisitorMiddleware that does the counting, so this cross cutting concern is handled separately in its own class that gets glued into the MVC application in Startup.cs; a sketch follows
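
A sketch of such a middleware (the key name is illustrative and the connection handling is simplified):

public class RedisVisitorMiddleware
{
    // reuse one multiplexer, per StackExchange.Redis guidance
    static readonly ConnectionMultiplexer redis =
        ConnectionMultiplexer.Connect(Startup.RedisConnection);

    readonly RequestDelegate _next;
    public RedisVisitorMiddleware(RequestDelegate next) { _next = next; }

    public async Task Invoke(HttpContext context)
    {
        redis.GetDatabase().StringIncrement("hits"); // the page hit counter
        await _next(context);
    }
}

// glued into the pipeline in Startup.Configure:
// app.UseMiddleware<RedisVisitorMiddleware>();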

If we are using a microservices architecture and the final application gets deployed in containers, the Redis server will be running on a remote node; unfortunately we can't use the host name for the Redis cache server, as it throws a PlatformNotSupported exception. We will have to resolve the host name ourselves and give its IP in the Redis configuration in Startup.cs; we can have a static RedisConnection property in Startup.cs, something like this
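
A sketch, assuming the Redis service is named redis in the compose file:

public static string RedisConnection { get; private set; }

static Startup()
{
    // resolve the container host name to an IPv4 address up front
    var addresses = Dns.GetHostEntryAsync("redis").Result.AddressList;
    var ip = addresses.First(a => a.AddressFamily == AddressFamily.InterNetwork);
    RedisConnection = ip + ":6379";
}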

The hit counter code can be moved to the relevant Controller / Action, which can use the Startup.RedisConnection static property to access Redis. We can use the same property to set up the Redis distributed caching provider

We can use distributed caching for page caching: if any page takes considerable time to "generate" and its content is not dynamic or changes rarely, we can use Redis to store the generated page and reuse it from there. ASP.NET Core MVC has the concepts of Views and Partial Views, and we can use distributed caching to cache them; for testing, lets set up a Partial View in ~/Views/Shared/, and for proof of concept use Thread.Sleep to emulate the time it takes to generate.

Tag Helpers in ASP.NET Core MVC enable server side code to participate in creating and rendering HTML elements in Razor files. You can learn about them at https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/intro, and there is a DistributedCacheTagHelper (code at https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.TagHelpers/DistributedCacheTagHelper.cs) that we can use for Partial View caching.

Now, to deploy our application into Docker: if we are doing it on a single host with two containers, we will have the following v2 Compose file that we can use with the Docker Compose tool

The Dockerfile for our MVC application will be something like this:

  • Before building the Docker image using the above Dockerfile, we need to have the published application in the "output" folder, by running dotnet restore (if required) and dotnet publish -c Release -o output

Installing Visual Studio 2017

You can create offline installation files for Visual Studio 2017; the steps are documented at https://docs.microsoft.com/en-us/visualstudio/install/create-an-offline-installation-of-visual-studio. The good thing this time is that we can run vs_sku.exe --layout subsequently to update the installation files.

If you see that installing or modifying Visual Studio 2017 from the offline folder is still downloading content from the internet, there can be two reasons: either the certificates are not installed as documented in the URL above, or you installed it in "online" mode the first time; in the latter case uninstall it first, and then reinstall from the offline folder after importing the certificates. Once done this way, modifying the installation later (adding more workloads etc) will continue to use the contents of the offline folder

One tip: delete everything in the %TEMP% folder before installation, then do the --layout folder thing (with --lang en-US); it will update the offline folder and create a log file in the %TEMP% folder; check the log file to ensure everything went smoothly. Install the JDK and Android SDK / NDK yourself if you want them in folders of your choice (in case you are using Android Studio and don't want multiple JDKs/SDKs) and unselect the JDK, Android SDK etc during installation (in the workload details)

Secondly, if you are planning to uninstall 2015, uninstall the Dotnet Core Preview tools first and then uninstall Visual Studio 2015


Redis Clients

Redis Series

For the Redis clients, imagine we have an e-commerce platform with a Python based component that does some analysis to decide which product or campaign / deal to show on the main page; these results are posted / updated into the Redis server, from where the ASP.NET Core application picks them up.

For the Python client we need pip, the PyPA recommended tool for installing Python packages; on Ubuntu this can be done using the following command

sudo apt-get update && sudo apt-get install python-dev python-pip

Once pip is available, give this command

sudo pip install redis

Now, to connect to the Redis server, we will have code like this
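
A sketch of the Python side (key names and ids are illustrative):

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
r.delete('featured:products')
r.rpush('featured:products', 101, 102, 103)  # product ids to feature
r.set('featured:offer', 7)                   # campaign / deal id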

  • You can see that we are simply adding the product ids and offer ids into the cache; the web interface will retrieve the data from the database and render it accordingly. If we want, the web application can cache the rendered HTML as well and reuse it, saving database trips for a performance improvement

We can install Python on a Windows development machine and use Visual Studio Code; there is a nice Python extension available at https://marketplace.visualstudio.com/items?itemName=donjayamanne.python that provides linting, IntelliSense and what not

redisclient-python

For .NET Core we can use the https://www.nuget.org/packages/StackExchange.Redis NuGet package, which is .NETStandard compatible. This is the very famous Redis client library from the StackOverflow guys, and its code is available at https://github.com/StackExchange/StackExchange.Redis

If our application is ASP.NET Core, we can instead use the https://www.nuget.org/packages/Microsoft.Extensions.Caching.Redis.Core package, a distributed cache implementation of Microsoft.Extensions.Caching.Distributed.IDistributedCache using Redis; IDistributedCache is the interface for the distributed cache mechanism baked into ASP.NET Core to improve the performance and scalability of applications. This package uses the strong named version of StackExchange.Redis; to add it into an ASP.NET Core application, use the dotnet add package Microsoft.Extensions.Caching.Redis.Core command

For our simple proof of concept, give the command dotnet new web in your project folder

It creates a very minimalist Hello World web application. To use static files, give the dotnet add package Microsoft.AspNetCore.StaticFiles command; for using Session, give dotnet add package Microsoft.AspNetCore.Session; and finally give the dotnet add package Microsoft.Extensions.Caching.Redis.Core command. Restore the packages using dotnet restore and change Startup.cs to this (a condensed sketch follows the notes below)

redisclient-dotnet

  • As per StackExchange.Redis recommendation, we can reuse the ConnectionMultiplexer instance; therefore it is defined as a static variable
  • It is initialized in the static constructor with ConfigurationOptions, through which we define the Redis server and its password information
  • In ConfigureServices(IServiceCollection) the Redis caching extension is added; the Redis server and password information is specified again while adding it
  • The Session service is also added in ConfigureServices according to its requirements
  • In the Configure(IApplicationBuilder, IHostingEnvironment, ILoggerFactory) method, which gets called by the .NET Core runtime for the HTTP request pipeline, we attach the StaticFiles and Session extensions as per ASP.NET Core's app.Use* conventions
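
A condensed sketch of such a Startup.cs (the server address and password are illustrative):

public class Startup
{
    // reused ConnectionMultiplexer, per StackExchange.Redis recommendation
    static readonly ConnectionMultiplexer redis;

    static Startup()
    {
        var options = ConfigurationOptions.Parse("redis-server:6379");
        options.Password = "redis-password";
        redis = ConnectionMultiplexer.Connect(options);
    }

    public void ConfigureServices(IServiceCollection services)
    {
        // the Redis caching extension; server and password specified again
        services.AddDistributedRedisCache(o =>
        {
            o.Configuration = "redis-server:6379,password=redis-password";
        });
        services.AddSession();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseSession();
        app.Run(async context =>
        {
            var db = redis.GetDatabase();
            var hits = db.StringIncrement("hits");        // page hit counter: INCR
            var id = context.Request.Cookies["visitor"] ?? Guid.NewGuid().ToString();
            context.Response.Cookies.Append("visitor", id);
            db.SetAdd("visitors", id);                    // SADD; duplicates ignored
            var visitors = db.SetLength("visitors");      // SCARD
            await context.Response.WriteAsync($"Hits: {hits} Visitors: {visitors}");
        });
    }
}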

We are using Redis in the code above for two purposes: a unique visitor counter and a page hit counter. The page hit counter is the simple INCR Redis command. For unique visitors we are using a cookie and the Sets support of Redis: SADD to add the visitor and SCARD to determine the length of the set. The StackExchange.Redis APIs are StringIncrement, SetAdd and SetLength respectively. Using Sets we don't have to worry about duplicates; Redis automatically takes care of that, and we can keep adding the same id into the set and it will not allow duplicates.

Dotnet Core

Dotnet Core Series

This post is a quick lap around Dotnet Core; especially on Linux and in containers. Dotnet Core is an open source .NET implementation and is available for different flavors of Linux. We know how cool .NET is and how great it is now to use C# to develop and deploy applications on the black screen OSes :) As long as you are using a fairly recent Linux distribution; you will be able to install Dotnet Core. Installation information and downloads are available at https://www.microsoft.com/net/core; there are currently a 1.0 LTS version and a 1.1 CURRENT version available. At the time of writing; 1.0.4 and 1.1.1 are the most recent versions available at https://www.microsoft.com/net/download/linux

If you want to create, build and package code; you need the SDK; if you already have a compiled application to run; the RUNTIME alone is sufficient. The SDK installs the Runtime as well. They have recently released v1 of the SDK; if you installed the SDK earlier; you might have a “preview” SDK; you can check using the dotnet binary with --version

dotnet-preview

They initially opted for a JSON based project file (similar to NPM's package.json); which gets created when dotnet new is used to create a Hello World Dotnet Core console application

dotnet-preview-structure

  • The lock file gets created on dotnet restore

We do dotnet restore; which restores the dependencies defined in project.json from Nuget; an online library distribution service. Then we can do dotnet build and dotnet run to build and run our application. If we want a minimalist Hello World web application in Dotnet Core; we can use the Microsoft.AspNetCore.Server.Kestrel package from Nuget; an HTTP server based on libuv; we define this package dependency in project.json and then change the Program.cs file to this

using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main()
    {
        new WebHostBuilder()
                .UseKestrel()                     // the libuv based Kestrel HTTP server
                .UseUrls("http://127.0.0.1:3000") // listen on localhost port 3000
                .Configure(a => a.Run(c => c.Response.WriteAsync("Hello World!")))
                .Build()
                .Run();                           // blocks until the host is shut down
    }
}
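
For reference; under the preview SDK; the project.json for the above would have looked something like this (version numbers are indicative)

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}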

Finding and adding a Nuget package reference in the JSON file was manual work; if we are using Visual Studio Code (which is also an open source editor); there is an extension that we used in the Zookeeper post to find / add Nuget package dependencies into project.json; like Kestrel above. None of this is required anymore with the brand new; non-preview (now released) SDK.

The SDK version is 1.0; and there are two runtimes; 1.0 LTS and 1.1 CURRENT; there is no separate 1.1 SDK; the Dotnet Core 1.1 SDK is the same 1.0 SDK :)

dotnet-install-sdk

    dotnet-new

Installing the SDK installs the Runtimes as well

With the released SDK; when we do dotnet new to create a project; it now creates a CSPROJ file that's XML and is very clean / minimal; similar to the JSON one; given you didn't specify F# as the language (a sketch follows the notes below)

dotnet-structure

  • The dotnet binary can now create different types of projects; including web; so we don't have to do anything special for a web project
  • We also don't need any special Visual Studio Code extension to add Nuget references; we can use the dotnet binary to add Nuget packages using dotnet add package Nuget-Package-Name; this means that even if we are not using any editor; we can do this easily using the SDK alone; very useful in Linux server environments where there is usually no GUI!
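
For comparison; a freshly generated web project's csproj is just a handful of lines; and dotnet add package simply appends a PackageReference element to it; a sketch (the target framework and version numbers are indicative)

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="StackExchange.Redis" Version="1.2.4" />
  </ItemGroup>
</Project>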

Now let's switch gears and try to build a simple Docker container for a Dotnet Core web application. We will use dotnet new web similar to the screenshot; this web application will be connecting to the Redis Server; and for this we need a .NET library that's compatible with Dotnet Core; StackExchange.Redis is one such library; to add this package into our Dotnet Core web project; we will issue

dotnet add package StackExchange.Redis

  • Don't forget to restore the packages (dotnet restore) after adding them

We will not do anything further for this post; we will simply publish the Release build of our application into the “output” folder using dotnet publish -c Release -o output

And then create a Dockerfile with following content

FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY output .
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "Redis.dll"]
  • Before building the container; the application should be published into the output folder; which gets copied into the container's /app directory (the COPY output . line)
  • Dotnet Core uses the ASPNETCORE_URLS environment variable to set up Kestrel accordingly; here we are running our web application at http://+:80; meaning port 80; the default HTTP port; on all the IPs of the container
  • We need to expose the container's port 80 as well

We can build this Docker image using docker build -t some-tag .

Once the image is created; we can run it using docker run; mapping its port 80 to a host port; something like

docker run --rm -p 5000:80 some-tag

And we can access our Hello World Dotnet Core web application at http://localhost:5000

dotnet-docker

Posted by khurram

Redis

Redis Series

redis-logo

REmote DIctionary; or Redis; is an open source data structure server; it's a key-value database and can be used as a NoSQL database, cache and message broker.

redis-cli

Its distinguishing feature is that we can store data structures such as strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs and geospatial indexes. It also offers functions around these data structures; for instance range queries for sorted sets and radius queries for geospatial indexes. It has replication support built in and we can have a master-slave based; tree like Redis cluster. It has a Least Recently Used based eviction / cache expiration mechanism along with transaction support; and there is Lua scripting support as well. Redis typically keeps all the data in memory but it also persists it to disk for durability; it journals its activity so in case of a failure only a few seconds of data get lost; it can write data to file in the background using the journal and we can also snapshot the in-memory data.

We can get Windows optimized Redis releases from https://github.com/MSOpenTech/redis/releases; maintained by Microsoft Open Technologies (https://msopentech.com); a Microsoft subsidiary. They had the AppFabric product that had a Redis like caching component; it seems they don't have any plans to continue it any further; given they are now an open source friendly company and instead are offering Windows optimized Redis through GitHub; and that's great!

I simply ran the installer and it did everything the “Windows way”; the binaries are in Program Files; and there is also a Redis service defined; we can configure it as desired and run it from an Administrative command prompt. Similar to ZooKeeper; it comes with redis-cli that we can use to connect to the local Redis server. There is a plethora of commands that we can play with using the CLI; some of them are shown in the screenshot.

We can use the KEYS command to query the keys and DEL to delete them. The SET command has an NX parameter; if specified; it will only set the key value if the key is not already defined. There is also an XX parameter; if specified; it will only set the key value if the key already exists. These are useful when multiple clients want to set the same key. SET also has EX and PX parameters through which we define the expiration time of the key in seconds or milliseconds respectively

  • GETSET is an interesting command; it sets the new value and retrieves the old value in a single go; useful for resetting counters!

redis-keys-set  redis-keys-expiration

  • We can give multiple key names to DEL while deleting

Keys and values can be a maximum of 512 MB in size; keys can be any binary data; a string, an integer or even file content; but it's recommended to use appropriately sized keys following a type:value convention; for example user:khurram

Using MGET and MSET we can retrieve and set multiple keys at once; useful for reducing latency. We can use EXPIRE existing-key seconds to set the cache expiry of an existing key; and TTL key to know the remaining time before expiry.
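
For instance; a quick redis-cli session exercising these string commands could look like this; the key names are made up for illustration

SET user:khurram Khurram NX
SET user:khurram K XX
SET otp:1234 987654 EX 30
TTL otp:1234
GETSET counter 0
MSET product:1 Soap product:2 Shampoo
MGET product:1 product:2
EXPIRE product:2 3600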

For lists; there are LPUSH (Left / Head) and RPUSH (Right / Tail); using which we can push values against a single key (the list). We can use LPUSH/RPUSH key val1 val2 … to push multiple values at once. LRANGE is used to retrieve the values and takes start and end index parameters. We can give -1 as the parameter for the last index and -2 for the second last; so to retrieve the whole list we use LRANGE list 0 -1

  • Lists can be used for Producer / Consumer scenarios; RPOP exists especially for Consumers; when the list is empty; it returns null
  • There is also LPOP; but in a Producer / Consumer setup the Producer should use LPUSH and the Consumer RPOP; so the list behaves as a FIFO queue
  • BRPOP and BLPOP are blocking versions of RPOP and LPOP; instead of polling; consumers can use BRPOP

LTRIM is similar to LRANGE; but instead of returning the range it trims the list to it; discarding the rest; we can use it along with pushes so the list only keeps the defined number of elements (a short session follows)
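
Putting the list commands together; a made up jobs list can be exercised like this; the Producer LPUSHes; LTRIM caps the list; and the Consumer RPOPs (or BRPOPs; where the trailing 5 is the blocking timeout in seconds)

LPUSH jobs job1 job2 job3
LRANGE jobs 0 -1
LTRIM jobs 0 99
RPOP jobs
BRPOP jobs 5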

Given Redis is a network server; we should secure it; we should use iptables / a firewall so that only clients from known locations can connect to it; there's also a security section in the conf file; on Windows; the conf file is passed as a parameter to the service binary and lives in Program Files\redis; we can open it up and enable authentication

  • Additionally you can run the service under a specific login; giving it just the required permissions: to run as a service, to listen on the network and the needed NTFS permissions. It's always a good idea to run services (and especially network services) under a login with just enough permissions. Take a look at http://antirez.com/news/96 to see how one can compromise an unsecured Redis in a few seconds
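
Enabling authentication in the conf file is a one liner; a sketch (the password here is a placeholder; and you can optionally also restrict the interfaces Redis listens on)

# redis.windows-service.conf (redis.conf on Linux)
requirepass Some$trongPassword
# bind 127.0.0.1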

redis-conf

Redis will not let clients read / write data unless they authenticate themselves first
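
With requirepass set; a redis-cli session goes something like this (using the placeholder password from above; as the screenshot below shows)

127.0.0.1:6379> GET foo
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH Some$trongPassword
OK
127.0.0.1:6379> GET foo
(nil)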

redis-auth

You can see that; similar to ZooKeeper; Redis can be used as a foundational service in modern distributed applications. Like ZooKeeper; application workers connect to the Redis server over the network and there are client libraries for many languages; from C/C++ to Java/C#, Perl to Python, ActionScript to NodeJS and Go. In the next post; we will build some client applications
