Saturday, August 2, 2014

Queuing the data/request

Non-blocking... hey, just call me whenever the result is ready. I don't want to wait here.

Whenever we talk about blocking (synchronous) or non-blocking (asynchronous) designs, they are closely related to multi-threading and queues.

A thread is created to execute some code without blocking the main thread (using a time slice of a CPU, or running the thread on a separate CPU core). This improves the responsiveness of the program (i.e., the main thread).
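To make this concrete, here is a minimal sketch. The post's examples are in C#/WCF, but the idea is language-agnostic, so I'll use Java here; all names are illustrative. The main thread keeps going immediately while the worker handles the slow task:

```java
// Illustrative sketch (Java here, though the post targets C#): a worker
// thread runs a slow task while the main thread continues immediately.
public class BackgroundWork {
    public static void main(String[] args) throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread worker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { } // slow task
            synchronized (log) { log.append("worker done"); }
        });
        worker.start();                       // the main thread is NOT blocked here
        synchronized (log) { log.append("main keeps going; "); }
        worker.join();                        // wait only at the very end
        System.out.println(log); // main keeps going; worker done
    }
}
```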

A server program needs to serve many clients concurrently. So, it has to queue the requests and let the clients go (i.e., without asking them to wait for the result). A queue is always first in, first out (FIFO).
  • The worker thread always processes the request that came first. The result is sent to the client through a callback (please refer to the earlier Push Design article).
  • On the other hand, a new client request is appended at the end of the queue, and the client waits for the result through a callback.
With this processing order, all requests are served based on their "request time".
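The FIFO rule above can be sketched like this (again in Java as a stand-in for the post's C#; the class names and the "handled:" callback stand-in are illustrative). One worker thread drains the queue, so results come out strictly in request order:

```java
import java.util.concurrent.*;
import java.util.*;

// Sketch of the FIFO rule: one worker thread drains a request queue,
// so requests are served in arrival order. Names are illustrative.
public class FifoServer {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        List<String> served = Collections.synchronizedList(new ArrayList<>());

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String req = requests.take();     // blocks until a request arrives
                    if (req.equals("STOP")) break;    // sentinel to end the demo
                    served.add("handled:" + req);     // "callback" stand-in
                }
            } catch (InterruptedException e) { }
        });
        worker.start();

        requests.put("A");  // clients append at the tail and walk away
        requests.put("B");
        requests.put("C");
        requests.put("STOP");
        worker.join();
        System.out.println(served); // [handled:A, handled:B, handled:C]
    }
}
```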

Things will become more complicated with the following design:
  • Thread pool (i.e., multiple worker threads handle the requests) - you can find many open source C# thread pool libraries that smartly create more threads when there are many requests and reduce the number of threads when the request volume drops. Some even size the pool based on the number of CPU cores.
  • Requests can be prioritized - prioritization allows an urgent request to jump the queue even though it came in late. You can imagine that the server program has multiple queues (one for each priority), and the highest-priority queue has threads standing by to serve the urgent requests.
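The queue-jumping behaviour can be sketched with a single priority queue (Java again as a stand-in; the `Request` record and priority numbers are illustrative, with a lower number meaning more urgent):

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.Comparator;

// Sketch of prioritization: the "urgent" request is served first
// even though it was enqueued last. Names are illustrative.
public class PriorityDemo {
    record Request(int priority, String name) { }

    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<>(
                11, Comparator.comparingInt(Request::priority));
        queue.put(new Request(5, "routine"));
        queue.put(new Request(3, "normal"));
        queue.put(new Request(1, "urgent"));     // arrives last...
        System.out.println(queue.take().name()); // urgent (...but jumps the queue)
        System.out.println(queue.take().name()); // normal
        System.out.println(queue.take().name()); // routine
    }
}
```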
OK. Let's continue the previous Push Design topic.

You need "command queue" and "callback data queue"

With WCF, socket programming becomes easier. But without a "non-blocking" design in mind, the communication process will make the server or the client unresponsive. The unresponsiveness becomes severe as the number of concurrent clients increases and the amount of data to be transmitted grows. To alleviate this problem, you need to implement a thread pool and queues in both the server program and the client program.

In the client program, you need this:
  • Command queue - when the user clicks "submit request to the server", the command (a WCF call) should go into a "command queue" (this queue resides at the client side). Then one worker thread sends the request to the server. Since we are designing a "non-blocking" program, the worker thread should not wait for the response from the server; it keeps sending the next request/command until the queue is empty. From the user's point of view, clicking the submit button does not freeze the screen (which is a good thing). For example, the user may use an Internet browser to open multiple tabs, each requesting a different web page.
  • Callback data queue - once the server completes the request and sends the result back to the client (through a callback), the client should store the result in a queue. This is the second queue you need, aside from the command queue. As responses arrive, a worker thread dispatches each result to the respective "caller" (which could be a screen) until the queue is empty.
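The two client-side queues can be sketched as follows (Java stand-in for the post's C#/WCF; the WCF call is simulated locally by putting a fake result straight into the callback queue, and all names are illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the two client-side queues. "Sending" to the server is
// simulated locally; in the post it would be a real WCF call.
public class ClientQueues {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> commandQueue  = new LinkedBlockingQueue<>();
        BlockingQueue<String> callbackQueue = new LinkedBlockingQueue<>();
        List<String> screen = Collections.synchronizedList(new ArrayList<>());

        // Worker 1: drains the command queue; does NOT wait for the reply.
        Thread sender = new Thread(() -> {
            try {
                while (true) {
                    String cmd = commandQueue.take();
                    if (cmd.equals("STOP")) { callbackQueue.put("STOP"); break; }
                    callbackQueue.put("result-of-" + cmd); // simulated server callback
                }
            } catch (InterruptedException e) { }
        });

        // Worker 2: dispatches each result to its "caller" (a screen here).
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    String result = callbackQueue.take();
                    if (result.equals("STOP")) break;
                    screen.add(result);
                }
            } catch (InterruptedException e) { }
        });

        sender.start();
        dispatcher.start();
        commandQueue.put("open-page-1");   // the UI thread returns immediately
        commandQueue.put("open-page-2");
        commandQueue.put("STOP");
        sender.join();
        dispatcher.join();
        System.out.println(screen); // [result-of-open-page-1, result-of-open-page-2]
    }
}
```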

In the server program, you need these:
  • Command queue - when the client program sends a request, it is appended to the queue (this queue resides at the server side). The client should not wait for the result, or else it could block the server (this could end up as a resource contention problem, where multiple clients compete for the same resource). A worker thread picks up the request and does all the necessary processing. Upon completion, it appends the result to the callback data queue and lets another worker thread dispatch the result to the client.
  • Callback data queue - as in the client program, this is another queue aside from the command queue. The purpose of this queue is to let the command worker thread move on to the rest of the requests immediately after one request has been completed. Making a callback to the client might face latency problems (i.e., not really "realtime"; there may be unexpected network traffic out there). With a thread that only handles callbacks, even a slow network connection won't affect the command worker thread (its processing time stays consistent). The callback worker thread can take its own sweet time to send the result to the client. No worry about the processing time. No worry about the limited number of command worker threads in the pool.
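The key point above - a slow callback must not stall the command worker - can be sketched like this (Java stand-in; the slow network is simulated with a sleep, and all names are illustrative). The command worker hands each result off and immediately takes the next request, so all processing finishes before the first slow callback even goes out:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the server-side design: fast command worker, dedicated
// callback worker with a simulated slow network. Names are illustrative.
public class ServerQueues {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> commandQueue  = new LinkedBlockingQueue<>();
        BlockingQueue<String> callbackQueue = new LinkedBlockingQueue<>();
        List<String> log = Collections.synchronizedList(new ArrayList<>());

        Thread commandWorker = new Thread(() -> {
            try {
                while (true) {
                    String req = commandQueue.take();
                    if (req.equals("STOP")) { callbackQueue.put("STOP"); break; }
                    log.add("processed:" + req);        // fast processing
                    callbackQueue.put("result:" + req); // hand off, move on at once
                }
            } catch (InterruptedException e) { }
        });

        Thread callbackWorker = new Thread(() -> {
            try {
                while (true) {
                    String result = callbackQueue.take();
                    if (result.equals("STOP")) break;
                    Thread.sleep(50);                   // simulated slow network
                    log.add("sent:" + result);
                }
            } catch (InterruptedException e) { }
        });

        commandWorker.start();
        callbackWorker.start();
        commandQueue.put("req1");
        commandQueue.put("req2");
        commandQueue.put("STOP");
        commandWorker.join();
        callbackWorker.join();
        // Both requests were processed before the first slow callback went out:
        System.out.println(log.indexOf("processed:req2") < log.indexOf("sent:result:req1"));
    }
}
```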
The command queue and the callback data queue should work in conjunction with a thread pool. You may have one thread pool taking care of each queue, or one thread pool that takes care of all the queues.