Friday 18 March 2016

Asynchronous WebAPI

Introduction

In this article I want to show how you can improve server performance dramatically by using Microsoft's asynchronous WebAPI. The text shows how a Microsoft WebAPI service backend can be changed to handle requests asynchronously and thus increase the number of concurrent clients the server can handle. Asynchronous WebAPI services also let client applications stay more reactive and fluent in the user interface, but this document will not describe the client side in depth; the focus is on the performance of the server side. In the following sections, I write a very simple web service using Microsoft WebAPI, measure the performance of concurrent requests to that service, and compare the results to a second scenario in which the service is made asynchronous.

Environment

My test machine was a Windows 8.1 PC with an Intel Core™ i7 CPU (6 cores at 3.24 GHz) and 12 GB RAM. The WebAPI is hosted in the local IIS 8. The simulated clients also run on this machine. Admittedly, this is not a very realistic setup for simulating concurrent users and measuring server performance, but it is enough to demonstrate some problems and how they can be solved with asynchronous WebAPI services.

The Server

Service Implementation

The server code is minimalistic. The code snippet below shows the implementation we need. There is only a WebAPI service that supports the HTTP GET method, in a WebAPI controller named LongOperationController. This service simulates a long-running operation by calling Thread.Sleep(). The service waits for 2 seconds before it returns and a response message is generated. So a client waits at least 2 seconds when calling the service.
Notice: the actual service response time is a little longer than 2 seconds. Serialization/deserialization and transport of HTTP messages over the wire take time, so the service response time also depends on network latency.
public class LongOperationController : ApiController
{
    // GET api/<controller>
    [HttpGet]
    public void DoLongRunningOperation()
    {
        Thread.Sleep(2000);
    }
}

Thread Pool Configuration

There is another variable in this scenario that has to be considered: the thread pool size. An ASP.NET WebAPI application hosted in IIS has its own thread pool with a limited number of available threads. The default number of threads in the pool is calculated based on the number of CPU cores to find an optimal value. But it can be changed! There are two types of threads in a thread pool:
Worker threads are used for active work, e.g. when pushing a work item into the thread pool. Client requests are handled this way: each request is handled by its own thread from the thread pool (as long as enough worker threads are available in the pool).
Completion port threads are used to wait for asynchronous I/O operations to finish (e.g. when accessing storage/disc, receiving data from the network, or when using the await keyword in a WebAPI service implementation). Completion port threads are not doing a lot. They simply wait and block until they receive a signal.
The number of available worker threads in the thread pool affects the performance of our WebAPI services. If there are too many concurrent client requests, all worker threads become busy and new client requests must be queued. As soon as a worker thread is available again, the next request is taken from the queue and processed. Queueing is expensive, so performance problems can occur.
The maximum number of threads in the thread pool could be increased to avoid those performance issues. The problem with this solution is that each thread takes about 1 MB of RAM, so you may have to scale the machine's memory: 1000 extra threads mean 1 GB of extra memory! That is not the way to go here.
For our test scenario, we instead set the maximum number of threads in the pool to a small value: 200. This lets us simulate massive concurrent client requests and provoke performance problems at the server (due to the small number of worker threads, not all client requests can be handled). While simulating, we can observe what happens to the server performance and response times.
To query or set the number of threads, the code below can be used. The ThreadPool class provides static methods to query and set the minimum, maximum, and available number of worker and completion port threads.
// Variables used to store the available worker and completion port thread counts. 
int workerThreads, completionPortThreads;  
// Variables used to store the min. and max. worker thread counts. 
int minWorkerThreads, maxWorkerThreads;  
// Variables used to store the min. and max. completion port thread counts. 
int minCompletionPortThreads, maxCompletionPortThreads;    

// Query the number of currently available threads in the pool. 
ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);  
// Query the minimum number of threads in the pool. 
ThreadPool.GetMinThreads(out minWorkerThreads, out minCompletionPortThreads);  
// Query the maximum number of threads that can be in the pool. 
ThreadPool.GetMaxThreads(out maxWorkerThreads, out maxCompletionPortThreads);    

// Set the maximum number of worker and completion port threads in the pool. 
ThreadPool.SetMaxThreads(maxWorkerThreads, maxCompletionPortThreads); 
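Applied to our test scenario, the worker thread cap of 200 could be set like this. This is only a sketch; a typical place for the call would be application startup (e.g. Application_Start in Global.asax). Note that SetMaxThreads returns false and ignores the request if the value is below the number of CPU cores or the configured minimum thread counts.

```csharp
using System;
using System.Threading;

// Limit the pool to 200 worker threads for the test scenario.
// The completion port thread maximum is left unchanged.
int maxWorker, maxCompletionPort;
ThreadPool.GetMaxThreads(out maxWorker, out maxCompletionPort);

if (!ThreadPool.SetMaxThreads(200, maxCompletionPort))
{
    // SetMaxThreads rejects values below the processor count
    // or below the current minimum thread counts.
    Console.WriteLine("Could not apply the worker thread limit.");
}
```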
With the first service implementation and the thread pool configuration in place, we are done with the server side and can implement the client code.

The Client

The code below shows the implementation of the client. The method creates a number of tasks given by the parameter requests in order to simulate different numbers of concurrent client requests. Each task calls the WebAPI service shown in the previous chapter and waits for the response. After all tasks have been created in the for loop, they start at (more or less) the same time, and the method waits until all of them have completed, i.e. until every task has called the WebAPI service and received a response (successful or not).
private static void RunConcurrentRequest(int requests) 
{ 
    var tasks = new List<Task>(); 

    for (var index = 0; index < requests; index++) 
    { 
        tasks.Add(Task.Factory.StartNew(() => 
        { 
            var url = "http://localhost:8080/AsyncWebApi/api/LongOperation"; 
            using (var client = new WebClient()) 
            { 
                client.DownloadString(url); 
            } 
        })); 
    } 

    // Wait until all simulated clients have received a response. 
    Task.WaitAll(tasks.ToArray()); 
} 
The first scenario should show the problems with synchronous services. Therefore, the method above is called with different arguments: 1, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000 and 3000. That means, e.g. for the value 1000, that 1000 requests are sent to the DoLongRunningOperation WebAPI service at the same time.
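This test sequence can be driven by a small loop; timing each run with a Stopwatch makes the durations discussed in the performance chapters easy to reproduce. This is a sketch that assumes the RunConcurrentRequest method shown above is in scope:

```csharp
using System;
using System.Diagnostics;

private static void Main()
{
    var counts = new[] { 1, 10, 20, 30, 40, 50, 100, 200, 300,
                         400, 500, 1000, 2000, 3000 };

    foreach (var requests in counts)
    {
        var watch = Stopwatch.StartNew();
        RunConcurrentRequest(requests);
        watch.Stop();

        // Log how long each batch of concurrent requests took.
        Console.WriteLine("{0} concurrent requests took {1}", requests, watch.Elapsed);
    }
}
```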
While running this test scenario, the performance of the ASP.NET WebAPI service was measured with the Performance Monitor (perfmon.exe). Please read my previous blog post to learn how to measure performance with the Microsoft Performance Monitor (http://robinsedlaczek.wordpress.com/2014/04/07/measure-performance-of-asp-net-web-applications-with-microsoft-performance-monitor-2/). The results will be explained in the next chapter.

Performance

To measure the number of requests per second, the number of failed requests per second, and the total number of failed requests handled by the WebAPI service, the appropriate performance counters were configured in the Performance Monitor tool. The image below shows the measurement results while the client code was running. The red box marks the timeframe in which the client code from above was executed. The green line shows how many requests are handled per second. The server handles the requests at a nearly constant rate for 1 – 500 concurrent requests.
Problems occur with 1000, 2000 and 3000 concurrent requests. The blue line shows the number of failed requests per second. 1000 and more requests cannot be handled by the server in this scenario. As described above, there are only 200 worker threads at the server, and the DoLongRunningOperation service takes 2 seconds to execute. So, if 200 or more request operations are running at the same time, the worker thread limit is exceeded and requests are queued. Requests that cannot be queued fail. The blue line shows that a lot of requests failed in the scenarios with 1000 and more concurrent requests. The red line shows the increasing total number of unhandled requests.
The next chapter shows how this situation can be improved by using asynchronous WebAPI services.

Figure 1: Performance of synchronous WebAPI services

The asynchronous Service Implementation

To improve the performance of the server in this test scenario, the WebAPI service DoLongRunningOperation is made asynchronous. The code below shows how to change the service implementation. First, “Async” is appended to the method name, following the naming conventions of the .NET Framework.
The operation is marked with the async keyword and the return type is changed to Task. That is all it takes to make the service asynchronous. But there is a little more to do in this case: since no asynchronous work is done in the method, the code won’t compile. A task has to be returned or an awaitable operation has to be called. So the Thread.Sleep() method cannot be used here anymore. Instead, the awaitable Task.Delay() method is used.
Awaiting Task.Delay() means that DoLongRunningOperationAsync returns immediately when Task.Delay() is called. So the thread that handles the current request becomes available in the thread pool again and can handle other requests. The Task.Delay() operation is timer-based and continues without blocking a thread; when it finishes, the response is created and returned to the client.
An asynchronous WebAPI service does not behave asynchronously from a client's point of view (e.g. via a callback). The client still has to wait until the operation has finished, i.e., in this scenario, until the 2-second delay is over and the response is returned. The client thread is blocked until this operation completes. Asynchronous WebAPI services simply do not block the worker threads in the server's thread pool, so the server can handle more requests.
// GET api/<controller> 
[HttpGet]  
public async Task DoLongRunningOperationAsync()  
{   
    await Task.Delay(2000);  
} 
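On the client side, the blocking DownloadString() call could likewise be replaced with an awaitable call, so the calling thread is not blocked while waiting for the response. Here is a sketch using HttpClient, assuming the same local test URL as in the client code above:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

private static async Task CallLongRunningOperationAsync()
{
    using (var client = new HttpClient())
    {
        var url = "http://localhost:8080/AsyncWebApi/api/LongOperation";

        // The awaiting thread is released here until the response arrives.
        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
    }
}
```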
Changing the server code as described results in better performance of the server. The next chapter explains those results for the same test scenario (sequence of concurrent requests) from above.

Performance

The image below shows the performance measurement results when sending 1, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000 and 3000 concurrent requests to the WebAPI service. Compared to the graph of the first measurements above, the plot below looks a lot smoother. Several points have been improved by the asynchronous WebAPI service:
  1. There are no failed requests. As mentioned above, the waiting in the asynchronous WebAPI service is done off the worker threads. So worker threads are released earlier and are available in the thread pool again to handle other requests. 200 worker threads in the thread pool are enough to handle all requests in the test sequence from 1 to 3000 concurrent requests. In the first scenario, requests began to fail beyond 500 concurrent requests.
  2. More requests per second can be handled by the WebAPI service. The image below shows that roughly 50 requests per second are handled during the test period. The results of the first test scenario show that only 4-5 requests per second were handled (until 1000 concurrent requests were sent, at which point requests began to fail).
  3. The test sequence is handled faster. In the scenario above, it took about 40 minutes to run the test sequence from 1 to 500 concurrent requests. The graph below shows that the sequence from 1 to 400 concurrent requests is handled within 4 minutes; after 8 minutes the server had handled the sequence from 1 to 1000.

Figure 2: Performance of asynchronous WebAPI services

Conclusion

The test scenarios show that server performance can be improved dramatically by making WebAPI services asynchronous using async and await. Requests are handled faster, more requests can be handled, and fewer requests fail. Making WebAPI services asynchronous is straightforward with the async and await language features of C#.
If you have any questions or need assistance with your code, do not hesitate to contact me! I would be happy to hear your comments, critiques, or opinions on this post. Please let me know!
