Monday, 12 January 2015

Handling Validation Errors with AngularJS and ASP.NET MVC

The Problem


One of the first things you’ll notice doesn’t work very well when you integrate AngularJS and ASP.NET MVC is forms validation.
We used to have it so good with unobtrusive validation; things just worked. Now we need to get used to a new validation system, as well as tie our existing server-side validation in.
Typically we'd have two types of validation messages to show. The first is a global validation message for things we don't expect or that aren't related to any one part of the page, e.g. "An unexpected error occurred" or "Cannot reach the database".
The other type is of course forms validation, where each field has its own validation message.
In an effort to reuse as much existing code as I could, I’ve come up with the following solution.

The Solution

Serialize some of the ModelState to capture validation errors, and always use my error handling NuGet package: SSW.ErrorHandler
First up, let's look at an example of this in action. Here is a basic controller action for saving a dog:
public ActionResult Save(EditVM model)
{
    if (!_dogLogic.AcceptableName(model.Name))
    {
        ModelState.AddModelError(string.Empty, "That is not a suitable name for a dog, please choose a new one.");
    }

    if (!ModelState.IsValid)
    {
        return JsonFormResponse();
    }

    // Perform Success Actions
    var dog = _dogLogic.Save(model);

    return JsonFormResponse();
    
    // You can also do the following if you want to return actual data
    // return Json(dog.Id);
}
Here we do three things. First we run the model through any custom validation we have. You should always run all validation over your model before returning to the user to ensure they can fix all errors in one go.
Next we check if there are any validation errors using ModelState.IsValid. This will catch any errors from attributes on our model, as well as the custom validation we just performed.
Lastly, if there were no errors, we perform the success logic.
What’s this JsonFormResponse you ask? Let’s dig in:
protected ActionResult JsonFormResponse(JsonRequestBehavior jsonRequestBehaviour = JsonRequestBehavior.DenyGet)
{
    if (ModelState.IsValid)
    {
        return new HttpStatusCodeResult(200);
    }

    var errorList = new List<JsonValidationError>();
    foreach (var key in ModelState.Keys)
    {
        ModelState modelState = null;
        if (ModelState.TryGetValue(key, out modelState))
        {
            foreach (var error in modelState.Errors)
            {
                errorList.Add(new JsonValidationError()
                {
                    Key = key,
                    Message = error.ErrorMessage
                });
            }
        }
    }

    var response = new JsonResponse()
    {
        Type = "Validation",
        Message = "",
        Errors = errorList
    };
        
    Response.StatusCode = 400;
    return Json(response, jsonRequestBehaviour);
}
The classes referenced above:
public class JsonResponse
{
    public string Type { get; set; }
    public string Message { get; set; }
    public IEnumerable<JsonValidationError> Errors { get; set; }

    public JsonResponse()
    {
        Errors = new List<JsonValidationError>();
    }
}

public class JsonValidationError
{
    public string Key { get; set; }
    public string Message { get; set; }
}
The JsonFormResponse method is added to my BaseController, the class from which all my other controllers inherit. It returns a nice 200 OK response if there are no errors. If there are errors, it breaks down the ModelState and serialises the errors into a standardised response. For example, if the Name property of our EditVM model above were missing, we could expect to see the following response:
{
    "Type": "Validation",
    "Message": "",
    "Errors": [
        {
            "Key": "Name",
            "Message": "The field Name is required."
        },
        {
            "Key": "",
            "Message": "That is not a suitable name for a dog, please choose a new one."
        }
    ]
}
Remember that we always return all errors, so when the name is missing, it also fails the AcceptableName logic test.
So what do you do with this JSON response? Well, let's take a look at the error handler in my Angular controller:
$http.post("/Dog/Save/" + dog.DogId, postData).success(function() {
    // Add your success stuff here
}).error(function(data, status, headers, config) {
    $scope.handleErrors(data);
});

// Initialise the error containers used by the view bindings
$scope.errors = { formErrors: {}, pageError: "" };

function updateErrors(errors) {
    $scope.errors.formErrors = {};
    $scope.errors.pageError = "";

    if (errors) {
        for (var i = 0; i < errors.length; i++) {
            $scope.errors.formErrors[errors[i].Key] = errors[i].Message;
        }
    }
}

$scope.handleErrors = function (data) {
    if (data.Errors) {
        updateErrors(data.Errors);
    } else if (data.message) {
        $scope.errors.pageError = data.message;
    } else if (data) {
        $scope.errors.pageError = data;
    } else {
        $scope.errors.pageError = "An unexpected error has occurred, please try again later.";
    }
};
At the top is the $http call to the server. Next is the updateErrors function, which spins through the JSON data from our JsonFormResponse and assigns the errors to the appropriate properties. Finally we have the handleErrors method. This method determines which error system the response came from, starting with our JsonFormResponse, followed by the SSW.ErrorHandler package. After that it checks whether the response contains anything at all and binds it to our message, and lastly, if there is no data in the response, it falls back to a generic error message.
Last but not least we turn to the client side and put up our two types of validation messages. Firstly the field validation:
<input type="text" ng-model="dog.Name" />
<span class="help-block" ng-if="errors.formErrors.Name">{{errors.formErrors.Name}}</span>
and at the bottom the global validation:
<div class="alert alert-danger" ng-if="errors.pageError">
    <p>{{errors.pageError}}</p>
</div>
Right, all done. I know it's a fair bit of work at the moment, but I'm sure the great minds at Microsoft are already looking at how to facilitate a new Angular unobtrusive validation. Until then we'll make do with our own custom Angular + MVC validation combo!

Saturday, 10 January 2015

[How-to] Creating Highly Available Message Queues using RabbitMQ


High availability has become a key requirement for every layer of a modern technology stack, and message broker software is now a significant component of most stacks. In this article, we will discuss how to create highly available message queues using RabbitMQ.
RabbitMQ is an open source message broker (also called message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). The RabbitMQ server is written in the Erlang programming language.

The RabbitMQ Cluster

Clustering connects multiple nodes to form a single logical broker. Virtual hosts, exchanges, users and permissions are mirrored across all nodes in a cluster. A client connecting to any node can see all the queues in a cluster.
RabbitMQ tolerates the failure of individual nodes; nodes can be stopped and started within a cluster.
Clustering enables high availability of queues and increases throughput.
A node can be a disc node or a RAM node. A RAM node keeps the message state in memory, with the exception of queue contents, which can reside on disc if the queue is persistent or too big to fit into memory.
RAM nodes perform better than disc nodes because they don't have to write to disc as much. However, it is always recommended to have disc nodes for persistent queues.
We'll discuss how to create and convert RAM and disc nodes later in the post.

Prerequisites:

  1. Network connection between nodes must be reliable.
  2. All nodes must run the same version of Erlang and RabbitMQ.
  3. All TCP ports should be open between nodes.
We have used CentOS for the demo; installation steps may vary for Ubuntu and openSUSE. In this demo, we have launched two m1.small servers in AWS for the master and slave nodes.

1. Install RabbitMQ

Install RabbitMQ on the master and slave nodes.
$ yum install rabbitmq-server.noarch
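Once the package is installed on both nodes, it is worth verifying that they ended up with the same Erlang and RabbitMQ versions (prerequisite 2). A quick check on CentOS, assuming the stock yum/EPEL package names, might look like:
$ rpm -q erlang rabbitmq-server
Run it on the master and each slave and compare the output.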

2. Start RabbitMQ

$ /etc/init.d/rabbitmq-server start
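Optionally, on CentOS you can also enable the service at boot so the broker comes back automatically after a reboot:
$ chkconfig rabbitmq-server on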

3. Create the Cluster

Stop RabbitMQ on the master and slave nodes, and make sure the service has stopped properly.
$ /etc/init.d/rabbitmq-server stop
The Erlang cookie file shown below must be copied from the master to all other nodes; it needs to be identical on every node.
$ sudo cat /var/lib/rabbitmq/.erlang.cookie
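The cat command above only displays the cookie; it still needs to be copied to each slave. As a rough sketch (assuming root SSH access and a slave host named slave1, both of which are just examples), the copy could be done like this:
$ sudo scp /var/lib/rabbitmq/.erlang.cookie root@slave1:/var/lib/rabbitmq/.erlang.cookie
After copying, check that the file on the slave is still owned by the rabbitmq user and readable only by its owner, as Erlang insists on restrictive cookie permissions.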
Make sure you start all nodes after copying the cookie file from the master.
Start RabbitMQ on the master and all slave nodes.
$ /etc/init.d/rabbitmq-server start
Next, join each slave to the cluster. On every node except the master (the master's app keeps running), run the following commands, replacing rabbit@master with your master node's name:
$ rabbitmqctl stop_app
$ rabbitmqctl reset
$ rabbitmqctl join_cluster rabbit@master
$ rabbitmqctl start_app
Repeat this on slave1, slave2 and so on; you can add as many slave nodes as needed to the cluster.
Check the cluster status from any node in the cluster:
$ rabbitmqctl cluster_status
By default, nodes store messages on disc. You can also choose to keep queues in memory by using RAM nodes.
A node can be joined to the cluster as a RAM node by adding the --ram flag:
$ rabbitmqctl stop_app
$ rabbitmqctl join_cluster --ram rabbit@master
$ rabbitmqctl start_app
It is recommended to have at least one disc node in the cluster so that messages are persisted to disc and you avoid losing messages in case of a disaster.
RAM nodes perform a little better than disc nodes and give you better throughput.
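As promised earlier, an existing node can also be converted between RAM and disc after it has joined the cluster. A minimal sketch, assuming a RabbitMQ 3.x broker (which provides the change_cluster_node_type command), run on the node being converted:
$ rabbitmqctl stop_app
$ rabbitmqctl change_cluster_node_type disc
$ rabbitmqctl start_app
Use ram instead of disc to convert in the other direction.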

4. Set the HA Policy

The following command creates a policy that mirrors every queue to all nodes and synchronises them automatically:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
For example, a policy that mirrors only queues whose names begin with "ha." to all nodes in the cluster would look like this:
$ rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'
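To confirm the policy has been applied, assuming a 3.x broker where these info items are available, you can list the policies and see which nodes hold mirrors of each queue:
$ rabbitmqctl list_policies
$ rabbitmqctl list_queues name policy slave_pids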

5. Test the Queue mirror

We are going to run a sample Python program to create a sample queue. You need the packages below installed on the machine from which you want to run the program.
Install python-pip
$ yum install python-pip.noarch
Install Pika
$ sudo pip install pika==0.9.8
Create a send.py file and copy in the content below. Update 'localhost' with the name/IP of the master or a slave node.
#!/usr/bin/env python
import pika

# Connect to the cluster (replace 'localhost' with a node or load balancer address)
connection = pika.BlockingConnection(pika.ConnectionParameters(
              'localhost'))
channel = connection.channel()

# Declare the queue and publish a test message to it
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()
Run the python script using the command:
$ python send.py
This will create a queue named hello with one message on the RabbitMQ cluster.
Check that the message is visible from all nodes:
$ sudo rabbitmqctl list_queues
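If you also want to see how many messages each queue holds, you can ask list_queues for explicit columns:
$ sudo rabbitmqctl list_queues name messages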
Now, create a file named receive.py and copy the content below.
#!/usr/bin/env python
import pika

# Connect to the cluster (replace 'localhost' with a node or load balancer address)
connection = pika.BlockingConnection(pika.ConnectionParameters(
              'localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

# Print each message received from the hello queue
def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print ' [*] Waiting for messages. To exit press CTRL+C'
channel.start_consuming()
Run the script, then check the queue from either the slave or the master:
$ sudo rabbitmqctl list_queues

6. Set Up the Load Balancer

Now we have multiple MQ nodes running in a cluster, all in sync and with the same queues.
How do we point our application to the cluster? We can't point it at a single node: if that node fails, we need a mechanism to fail over automatically to the other nodes in the cluster. There are multiple ways to achieve this, but we prefer to use a load balancer.
There are two advantages to using a load balancer:
  1. High availability
  2. Better network throughput because the load is evenly distributed across nodes.
Create a load balancer in front of the cluster and map the backend MQ instances to it. You can choose HAProxy, Apache, Nginx or any hardware load balancer your organization already uses.
If the servers are running in AWS inside a VPC, choose an internal load balancer. Update the application to point to the load balancer endpoint.
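Once the load balancer is in place, a simple sanity check is to re-run the script from step 5 against it: replace 'localhost' in send.py with the load balancer's address (the endpoint name will depend on your setup) and confirm the queue still shows up on the nodes:
$ python send.py
$ sudo rabbitmqctl list_queues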
