
Tuesday, October 4, 2016

Using partial HTML UI + ASHX to speed up page loading

Just to share with everyone that I have developed many LOB apps (line-of-business applications) and I'm still developing new ones for my clients. An LOB app is very different from a blog engine, a corporate website or a static website. In an LOB app, we can happily ignore making the content "readable" for SEO because some of the content is loaded by AJAX.

In my last blog post dated 26th June 2016, I mentioned the new strategy: jQuery + AJAX + partial HTML UI. Now, the question is: how does the partial HTML UI help to speed up page loading? The answer is simple: we rely on the browser cache by checking the "If-Modified-Since" header in the request, then respond with either status code 304 (resource has not been modified) or the partial HTML itself.

You will find tons of references if you search for "asp.net If-Modified-Since". Below is one of the references that I found:

  http://madskristensen.net/post/use-if-modified-since-header-in-aspnet
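
Here is a minimal sketch (my own, not the code from the reference above) of an ASHX handler that serves a partial HTML file and honours the "If-Modified-Since" header; the template path is a hypothetical example.

    using System;
    using System.IO;
    using System.Web;

    public class TemplateHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string path = context.Server.MapPath("~/templates/sales_invoice.html"); // hypothetical template
            DateTime lastWrite = File.GetLastWriteTimeUtc(path);
            // Truncate to seconds because the HTTP date format has no milliseconds.
            lastWrite = new DateTime(lastWrite.Year, lastWrite.Month, lastWrite.Day,
                                     lastWrite.Hour, lastWrite.Minute, lastWrite.Second, DateTimeKind.Utc);

            string header = context.Request.Headers["If-Modified-Since"];
            DateTime since;
            if (header != null && DateTime.TryParse(header, out since) && since.ToUniversalTime() >= lastWrite)
            {
                // The browser's cached copy is still valid - no body is sent.
                context.Response.StatusCode = 304;
                context.Response.SuppressContent = true;
                return;
            }

            context.Response.ContentType = "text/html";
            context.Response.Cache.SetLastModified(lastWrite.ToLocalTime());
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.WriteFile(path);
        }

        public bool IsReusable { get { return true; } }
    }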

The downside of this strategy is that the first request (which loads the full page) will feel a bit slower to the user. But subsequent page loads, or requests for the same page, will take less time. For example, we want to develop a page for the user to key in a sales invoice, and it allows the user to choose items from a list. The sales invoice layout is stored in one HTML file (not ASPX) and the item list design is stored in another HTML file (this item list will be reused by the supplier invoice).

One of the advantages of this strategy is that all these partial HTML files can be hosted on a CDN (Content Delivery Network). The whole LOB app will then load faster than using only one ASPX page, which could be crazily huge and makes it hard to reuse parts of the HTML design.

Note: "partial HTML UI" can be refer as "template" and it does not contains the HEAD and BODY tags. It just contains some DIV-s which eases the web designer to design and test in the browser. You don't need a programmer to start full coding but just some simple JQuery and AJAX to complete the demo.


Sunday, June 26, 2016

System design with ASHX web services (JQuery + AJAX + partial HTML UI)

Recently, we have been working on a few new systems with ASP.NET. In the past, we used Page (ASPX) + ScriptManager and we faced some limitations in the system design, including the following:
  • The system does not allow the user to add a new "item" to the drop-down list on the fly.
  • The drop-down list contains a few hundred items and we need to incorporate a 'search' function or paging to avoid loading all the items in one shot.
  • When the data submitted to the server fails the validation process, the entire page is sent back to the client (it travels from the client to the server and then back to the client).
To solve these types of problems, we need a new design for the foundation of the system.
  • We must use the jQuery + AJAX + partial HTML UI design so that the user can add an "item" to the drop-down list on the fly. The partial HTML UI appears on the screen as a popup for the user to add the new item. After the user submits the new item to the server and it is validated OK, it is added to the drop-down list on the fly (with jQuery) without reloading the page or navigating to another page.
  • The drop-down list that contains lots of items will be replaced by a textbox. When the user clicks on this textbox, the items are loaded from the server (with AJAX calls) and then displayed in a popup (with jQuery + partial HTML UI). To improve the user experience, you may consider an auto-complete feature as the user types, or divide the data using the paging concept.
  • We must make more AJAX calls to submit the user input for validation by the system. In case of any failure, such as a failed validation, the server should return only the error message. This avoids re-creating the entire page on the server and sending it back to the browser (a minimal handler sketch follows this list).
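Here is a minimal sketch of such a validation endpoint as a generic handler; the endpoint name, field name and JSON shape are my own assumptions, not the actual project code.

    using System.Web;
    using System.Web.Script.Serialization;

    public class AddItemHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string itemName = context.Request.Form["item_name"]; // assumed field name

            string msg = "ok";
            if (string.IsNullOrEmpty(itemName))
            {
                msg = "Item name is required.";
            }
            // ... otherwise save the new item to the database here ...

            // Return only the short JSON message instead of re-rendering the page.
            context.Response.ContentType = "application/json";
            context.Response.Write(new JavaScriptSerializer().Serialize(new { msg = msg }));
        }

        public bool IsReusable { get { return true; } }
    }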
With the new design, the system becomes more responsive and generates less network traffic. But we still have a problem: how to handle the AJAX calls. Are we going to have one ASHX per process (which will end up with lots of ASHX files)? Or are we going to have only one ASHX entry point that handles all the requests?

To solve this problem, here is the list of frequently used "web services" to be implemented with Handlers (ASHX):
  • ~/q - this web service handles the "query", which includes the CRUD (create, read, update & delete) processes, daily processes, ad-hoc processes and all other business processes. Report requests are another area that you may consider putting into this service.
  • ~/t - this web service returns the "HTML template" (partial HTML UI design) to be injected into the current web page. By creating the partial HTML UI file, it allows the designer to work on the layout without having to go through all the jQuery + DOM element generation (i.e., low-level stuff). Modifying DOM elements using jQuery is very time consuming and requires a more expensive JavaScript programmer. But we have done it at a lower cost: the nice partial HTML UI is done by the designer and the programmer only needs to populate the JSON data into the appropriate placeholders.
  • ~/f - this web service handles all the file services, which include upload and download/view. For example, when the user calls up the "contact list", it shows the profile photo of each contact. The profile photo IMG SRC is "~/f?contact_id=124567", where "contact_id" is the primary key value of the contact. It does not point to any physical file name. The "f" service does all the necessary work at the server side and returns the binary of the photo (an image file). A minimal sketch of this service follows this list.
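For illustration, here is a minimal sketch of what the "f" service could look like; the data-access helper and the image type are hypothetical placeholders, not the actual implementation.

    using System.Web;

    public class FileHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            int contactId;
            if (!int.TryParse(context.Request.QueryString["contact_id"], out contactId))
            {
                context.Response.StatusCode = 400; // bad request
                return;
            }

            byte[] photo = LoadPhotoFromDatabase(contactId);
            if (photo == null)
            {
                context.Response.StatusCode = 404; // contact or photo not found
                return;
            }

            context.Response.ContentType = "image/jpeg"; // assume JPEG for illustration
            context.Response.BinaryWrite(photo);
        }

        // Hypothetical data-access helper; a real implementation would query the database.
        private static byte[] LoadPhotoFromDatabase(int contactId)
        {
            return null; // placeholder for the real database call
        }

        public bool IsReusable { get { return true; } }
    }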
To set up the above shortcuts, you have to create mapped URLs in web.config. For example,

  <system.web>
    <urlMappings>
      <add url="~/q" mappedUrl="~/myWebService/q.ashx"/>
    </urlMappings>
  </system.web>

The design of these web services:
  • The client makes a request to the web service with:
    • "code" - the command code, object type or process code to be executed.
    • "action" - this includes CRUD and other actions (such as run daily job, run hourly job).
    • "query parameters" - the query parameters are wrapped in a JSON object. For example, the client requests the customers who owe more than $10,000 for more than 90 days.
  • The response to the client contains:
    • "msg" - the message to the client. "ok" indicates the query was executed successfully. Otherwise, it contains the error message.
    • "list" - the list of JSON-formatted data requested by the client. This information is optional. A sketch of the "q" handler following this convention is shown after this list.
Some of you might be wondering why we are not using REST for these web services. The answer is simple: we don't want to tie every process to the definition of an HTTP "method" such as PUT or POST. What if the user wants to execute an ad-hoc process (POST or a custom method)?



Friday, October 3, 2014

Queue processing design

The queue concept is straightforward: append new items at the tail and dequeue from the head. It does not tell you how to process the requests in the queue.

You can enqueue new requests from multiple threads and process the requests with one worker thread or with multiple worker threads. Here are the processing designs:

Sequential (synchronous) processing of the requests

With only one worker thread processing the requests in the queue, this is going to be slower compared to multiple worker threads, but it has advantages. Sequential processing shines when it comes to file I/O or network I/O operations. Just imagine that the exit has one door and only one person can pass through at a time.

Parallel (asynchronous) processing of the requests

Thanks to multi-core and multi-CPU machines, with more than one worker thread processing the requests, we reduce the client wait time (for the response) and increase the server throughput.

Multi-threading has nothing to do with "reducing the processing time of a request"! The processing time depends on your code and how much the code can be optimized (at development time and also at run time).

If there is a long-running request and the request is being handled by a worker thread (in a thread pool), then that worker thread is blocked. It won't be able to handle other requests until it has finished the current one. And if you have many long-running requests, then all the threads in the thread pool might not be able to reduce the response time. The server is now considered "busy", even though the clients can keep sending their requests into the queue.
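
As a minimal sketch (assuming an in-memory queue of work items, not the actual project code), the same worker-thread idea looks like this; with workerCount = 1 you get the sequential design, with more you get the parallel design.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class RequestProcessor
    {
        private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

        public RequestProcessor(int workerCount)
        {
            for (int i = 0; i < workerCount; i++)
            {
                var worker = new Thread(() =>
                {
                    // GetConsumingEnumerable blocks until an item is available
                    // and ends when CompleteAdding() is called.
                    foreach (Action request in _queue.GetConsumingEnumerable())
                    {
                        request();
                    }
                });
                worker.IsBackground = true;
                worker.Start();
            }
        }

        // Called by any number of producer threads.
        public void Enqueue(Action request)
        {
            _queue.Add(request);
        }

        public void Shutdown()
        {
            _queue.CompleteAdding();
        }
    }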

Can we have unlimited threads in the pool..?!

The answer is no. If you instantiate too many threads without using a thread pool, the memory will be depleted... and the program will crash.

What happens to the requests in the queue if my program crashes?

All the requests will disappear from memory. Your program may be designed to auto-restart in case it crashes, but you won't be able to recover the requests that were stored in memory.

In case you need something better than storing the requests in memory..

1. Each thread in the thread pool must maintain individual processing statistics, and these statistics will be updated in the database hourly. With this information, the system administrator will be able to tell how many worker threads were blocked, during which hour of the day, and which request blocked them. Then he will be able to justify whether to upgrade or replace the server.

The statistics to be tracked by each thread in the thread pool should include the following (a small sketch follows this list):

- Number of requests that have been processed
- Total processing time (ms)
- Highest processing time (ms)
- Request name for the highest processing time
- Peak hour - the hour of the day (0-23) in which the thread is busiest (just capturing the current hour value will do)
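
A minimal sketch of such a per-worker statistics record; the field names and the way the peak hour is captured are my own assumptions, and the hourly flush to the database is left out.

    using System;

    public class WorkerStatistics
    {
        public long RequestsProcessed;
        public long TotalProcessedMs;
        public long HighestProcessedMs;
        public string HighestRequestName;
        public int PeakHour; // 0-23, captured from the current time (assumption)

        // Called by the owning worker thread after each request completes.
        public void Record(string requestName, long elapsedMs)
        {
            RequestsProcessed++;
            TotalProcessedMs += elapsedMs;
            if (elapsedMs > HighestProcessedMs)
            {
                HighestProcessedMs = elapsedMs;
                HighestRequestName = requestName;
                PeakHour = DateTime.Now.Hour;
            }
        }
    }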

2. Use the database as the queue storage instead of memory - the main advantage is that the server will be able to continue from where it left off before crashing. With this approach, there will be some overhead in storing the requests in the database, but that overhead is justifiable for the crash-proof design.

Another advantage of storing the requests in the database is that a secondary server can kick in if all the worker threads in the thread pool (on the primary server) are blocked. The secondary server will be notified by the primary server (through WCF or a socket). Then, the secondary server will query the database for the pending requests and process them accordingly.
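
For example, on SQL Server a worker (on either server) could claim one pending request from a database-backed queue like this; the table and column names are assumptions for illustration only.

    using System.Data.SqlClient;

    public static class DbQueue
    {
        // Claims one pending request and returns its id, or null when the queue is empty.
        // The UPDLOCK/READPAST hints let several workers pull different rows without blocking each other.
        public static int? DequeuePending(string connectionString)
        {
            const string sql = @"
                UPDATE TOP (1) RequestQueue WITH (UPDLOCK, READPAST)
                SET    status = 'processing'
                OUTPUT inserted.request_id
                WHERE  status = 'pending';";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                object id = cmd.ExecuteScalar();
                return id == null ? (int?)null : (int)id;
            }
        }
    }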

Two ways to handle the response to the client:

- The result is stored in the database and the primary server is notified. Then, the primary server queries the result and responds to the client.

- OR the result is sent to the primary server (through WCF or a socket), which then responds to the client.


Saturday, August 2, 2014

Queuing the data/request

Non-blocking... hey just call me whenever the result is ready.. I don't want to wait here.

Whenever we talk about blocking (synchronous) or non-blocking (asynchronous), it is obviously related to multi-threading and queues.

A thread is created to execute some code without blocking the main thread (using a time slice of a CPU or running the thread on a separate CPU core). This improves the responsiveness of the program (i.e., the main thread).

A server program needs to serve many clients concurrently. So, the server program has to queue the requests and let the clients go (i.e., without asking them to wait for the result). A queue is always first in, first out (i.e., FIFO).
  • The worker thread always processes the request that came in first. The result is sent to the client through a callback (please refer to the Push Design article earlier).
  • On the other hand, a new client request is appended at the end of the queue, and the client waits for the result through the callback.
With this processing order, all requests are served based on their "request time".

Things become more complicated with the following designs:
  • Thread pool (i.e., multiple worker threads handle the requests) - you can find many open-source C# thread pool libraries which smartly create more threads when there are many requests and reduce the number of threads when the number of requests drops. Some will even create threads based on the number of CPU cores.
  • Prioritized requests - with prioritization, an urgent request is allowed to jump the queue even though it came in late. You can imagine that the server program has multiple queues (one for each priority) and the highest priority has threads on standby to serve the urgent requests (a sketch of this multi-queue idea follows this list).
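Here is a minimal sketch of the "one queue per priority" idea, assuming three priority levels; it is only an illustration, not a production-grade priority queue.

    using System;
    using System.Collections.Concurrent;

    public class PriorityRequestQueue
    {
        // Index 0 = highest priority. Three levels are assumed for illustration.
        private readonly ConcurrentQueue<Action>[] _queues =
        {
            new ConcurrentQueue<Action>(),
            new ConcurrentQueue<Action>(),
            new ConcurrentQueue<Action>()
        };

        public void Enqueue(Action request, int priority)
        {
            _queues[priority].Enqueue(request);
        }

        // Returns the next request, always draining the highest priority queue first.
        public bool TryDequeue(out Action request)
        {
            foreach (var queue in _queues)
            {
                if (queue.TryDequeue(out request))
                {
                    return true;
                }
            }
            request = null;
            return false;
        }
    }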
OK. Let's continue the previous Push Design topic.

You need "command queue" and "callback data queue"

With WCF, socket programming becomes easier. But without a "non-blocking" design in mind, the communication process will make the server or the client unresponsive. The unresponsiveness becomes severe when the number of concurrent clients increases and the amount of data to be transmitted grows larger. To alleviate this problem, you need to implement a thread pool and queues in both the server program and the client program.

In the client program, you need these:
  • Command queue - when the user clicks "submit request to the server", the command (written with WCF) should go into a "command queue" (this queue resides on the client side). Then, one worker thread sends the request to the server. Since we are designing a "non-blocking" program, the worker thread should not wait for a response from the server. It will continue to send the next request/command to the server until the queue is empty. From the user's point of view, clicking the submit button does not freeze the screen (which is a good thing). For example, the user opens multiple tabs in an Internet browser and each tab requests a different web page.
  • Callback data queue - once the server has completed the request and sends the result back to the client (through the callback), the client should store the result in a queue. This is the second queue that you need, aside from the command queue. Upon receiving the response, a worker thread dispatches the result to the respective "caller" (which could be a screen) until the queue is empty.

In the server program, you need these:
  • Command queue - when the client program sends a request, it is appended to the queue (this queue resides on the server side). The client should not wait for the result, or else it could block the server (this could end up in a resource contention problem where multiple clients compete for the same resource). A worker thread picks up the request and does all the necessary processing. Upon completion, it appends the result to the callback data queue and lets another worker thread dispatch the result to the client.
  • Callback data queue - as in the client program, this is another queue aside from the command queue. The purpose of this queue is to let the command worker thread process the rest of the requests immediately after one request has been completed. Making a callback to the client might face latency problems (i.e., not really "real time" because of unexpected network traffic out there). With a thread that only handles the callbacks, even a slow network connection won't affect the command worker threads (the processing time stays consistent). The callback worker thread can take its own sweet time to send the result to the client. No worry about the processing time, and no worry about the limited number of command worker threads in the pool.
The command queue and callback data queue should work in conjunction with a thread pool. You may have one thread pool to take care of one queue, or one thread pool that takes care of all the queues. A minimal sketch of the server-side pair of queues is shown below.
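This sketch assumes simple placeholder Command/CallbackData types rather than the actual WCF contracts; one set of workers processes commands while a separate worker dispatches the callbacks, so a slow client connection never blocks command processing.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class Command { public string Name; }
    public class CallbackData { public string Result; }

    public class TwoQueueServer
    {
        private readonly BlockingCollection<Command> _commands = new BlockingCollection<Command>();
        private readonly BlockingCollection<CallbackData> _callbacks = new BlockingCollection<CallbackData>();

        public TwoQueueServer(int commandWorkers)
        {
            for (int i = 0; i < commandWorkers; i++)
            {
                new Thread(ProcessCommands) { IsBackground = true }.Start();
            }
            new Thread(DispatchCallbacks) { IsBackground = true }.Start();
        }

        public void Enqueue(Command cmd) { _commands.Add(cmd); }

        private void ProcessCommands()
        {
            foreach (Command cmd in _commands.GetConsumingEnumerable())
            {
                // ... do the real work for this command here ...
                _callbacks.Add(new CallbackData { Result = "done: " + cmd.Name });
            }
        }

        private void DispatchCallbacks()
        {
            foreach (CallbackData data in _callbacks.GetConsumingEnumerable())
            {
                // In the real system this would be a WCF callback to the client;
                // a console write stands in for it here.
                Console.WriteLine(data.Result);
            }
        }
    }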


Friday, July 4, 2014

Push design


Everything in the communication has a time limit

In the push design, one of the bigger challenges is to make sure that both the server and the client side store the data/request in a "queue", to be handled when some threads are free to process it. By using the "queue", the client/server can end the current method call as soon as the data/request has been queued for later processing.

In the WCF context, you have two types of design to process the data/request:

1. Using the "function" design (which works like "DateTime.Now" and returns the value immediately) - for example, the client sends a "current time" command to the server and expects the server to respond (almost) immediately at the end of the call.

2. Using the "callback" design - for example, the client sends a "current time" command to the server and does not wait for the server to respond. Instead, the server will send the current time through a callback.

Both designs have pros and cons, and it all depends on your needs.

- The "callback" design allows the server to take its precious time to prepare the necessary data for the client. In case the server is busy or the resources are blocked, it just has to wait until those resources are freed up. It also allows the server to schedule the process for later. Upon completion, the server makes a callback to the client. This is acceptable if it is not a real-time system. A minimal sketch of this design with a WCF duplex contract is shown after this list.

- The "function" design - the client is always waiting for the result and it needs it now. By using this design, your server runs a deadlock risk (i.e., clients competing for resources and locking a resource that another client is asking for). Since all the clients want it now, a deadlock can occur as soon as the same resources are requested by multiple clients. Of course, the deadlock can be avoided with a proper locking mechanism.

Even with WCF, the connection will get disconnected

This is not true if you have full control over the server and the client: WCF allows the system administrator to tune the "keep alive" time limit. But just in case you don't have full server access, you need to do something to keep the connection alive.


This can be done by sending a NOOP command (i.e., a dummy command that does not perform any action) from the client to the server - this will keep the connection alive. In case the connection has broken, you just need to re-establish it.

To send the NOOP command repeatedly, you just need a System.Threading.Timer object which queues the NOOP command every 1 or 2 seconds.
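
A minimal sketch of that keep-alive timer; the actual NOOP call to the server is left as a placeholder.

    using System;
    using System.Threading;

    public class KeepAlive : IDisposable
    {
        private readonly Timer _timer;

        public KeepAlive()
        {
            // Fire every 2 seconds, starting after 2 seconds.
            _timer = new Timer(_ => SendNoop(), null,
                               TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(2));
        }

        private void SendNoop()
        {
            try
            {
                // Call the server's dummy NOOP operation here (placeholder).
            }
            catch (Exception)
            {
                // The connection has broken - re-establish the WCF channel here.
            }
        }

        public void Dispose()
        {
            _timer.Dispose();
        }
    }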

Tuesday, April 15, 2014

How to get the updated data - Push VS Pull

In a client-and-server environment or a cloud environment, your program often needs to monitor updates on the server. There are two ways to catch the updates: either using a Push or a Pull design.

  • Push design - with this design, when an update happens, the server notifies the client program. The design is more complicated (in both the client program and the server program) compared to the Pull design.
  • Pull design - with this design, the client program continuously queries the server for updates. Of course, this design is very simple, but it comes with a bigger cost (in terms of bandwidth and server processing power) when the number of connections grows.
In order to serve more client connections and reduce the bandwidth consumption, you will have to implement the Push design.

Using sockets or web sockets to implement the push design:
  • This is one of the basic elements of the push design, so you must learn how to write socket programs. With .Net, you may use WCF (Windows Communication Foundation) to implement this idea, but you still need to learn the technical details of what a socket is and how it works with different configurations.
  • Imagine that user A keys in a new blog post through a website and all the followers are then notified within a few seconds. In this case, the server sends a signal (either using TCP or UDP) to the "online users" (i.e., each user must run a client program that sits on the computer waiting for the incoming signal). The preferred way to send the signal is UDP; you can find lots of information about TCP vs UDP.
  • Other than how to send the signal, one of the challenges is how secure your data is while it travels from the server to the client or vice versa. Of course, with WCF, you have the choice of different configurations. On other platforms (other than .Net), you might have to implement the security over the socket communication using SSL/TLS. Just to share with you: you can implement SSL/TLS in Python easily.
  • I guess we are quite lucky with modern programming languages, because most of them are able to support an asynchronous design with a few keyword changes. We need to learn async programming as well, or otherwise the server program will not be able to scale up. (A tiny UDP notification sketch follows this list.)
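A tiny sketch of the "send a signal over UDP" idea; the client endpoint and the message format are placeholders for illustration.

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    public static class PushNotifier
    {
        // Pushes a short notification to a client that is listening on a known UDP endpoint.
        public static void NotifyClient(IPEndPoint clientEndPoint, string message)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            using (var udp = new UdpClient())
            {
                udp.Send(payload, payload.Length, clientEndPoint);
            }
        }
    }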


Saturday, March 2, 2013

Posting data in JSON format to the ASP.NET website

When you post data using jQuery AJAX, you may call the $.post() method to submit the data. We normally post the "data" by specifying the parameter names and values in the "data" parameter of the $.post() method. But it becomes very tedious to add a new field to the data parameter when you already have lots of fields. To ease code maintenance, you may post the data in JSON format. This is quite simple: instantiate a new object and set its properties with values.

For example, I wrote this script in an HTML file:

    <script src="../Scripts/jquery-1.7.js" type="text/javascript"></script>
    <script type="text/javascript">
        function postJson() {
            var m = {
                myname: 'abc',
                myage: 10
                // add more properties here
            };
            var url = 'post_json.aspx';
            $.post(url, JSON.stringify(m), function (d) {
                if (d && d == 'ok') {
                    alert('ok');
                }
                else {
                    alert('failed');
                }
            });
        }
    </script>

Below is the "post_json.aspx" which you may use Handler (.ashx) to avoid the ASP.NET page life cycle.

    using System;
    using System.Web.Script.Serialization;

    public partial class post_json_post_json : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // The stringified JSON arrives as the raw request body; since it contains
            // no '=' sign, ASP.NET exposes it as the first value in Request.Form.
            string json_text = this.Request.Form[0];
            JavaScriptSerializer js = new JavaScriptSerializer();
            CInfo my_obj = (CInfo)js.Deserialize(json_text, typeof(CInfo));

            // ... use my_obj here ...

            this.Response.Write("ok");
            this.Response.End();
        }
    }

This is the data class that I'm using in this demo:

    public class CInfo
    {
        public string myname { get; set; }
        public int myage { get; set; }
    }
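
For completeness, here is a minimal sketch (my own, not from the original post) of the same endpoint written as a generic handler (.ashx); it reads the raw request body instead of relying on the Form collection.

    using System.IO;
    using System.Web;
    using System.Web.Script.Serialization;

    public class PostJsonHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Read the raw JSON body posted by $.post().
            string json_text;
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                json_text = reader.ReadToEnd();
            }

            JavaScriptSerializer js = new JavaScriptSerializer();
            CInfo my_obj = js.Deserialize<CInfo>(json_text);

            context.Response.ContentType = "text/plain";
            context.Response.Write(my_obj != null ? "ok" : "failed");
        }

        public bool IsReusable { get { return true; } }
    }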

By using this technique, you achieve the following:
  • It is easy to maintain the JavaScript code: you replace the data parameter in $.post() with the stringified data in JSON format.
  • It is easy to maintain the code in ASP.NET: you just need to declare a class which contains all the properties matching the data in JSON format. You may declare a property with another class type as well, and object arrays are also supported.

Monday, October 22, 2012

Scaling out your system using web services

There are many different ways to improve system response time: one of the strategies is called scaling out. In our system design, we implement the "basic modules" (i.e., the system infrastructure) as web services, which makes scaling out easy to achieve.

A web service is not hugely different from a website. The noticeable difference is that it does not have a user interface (or web pages) for visitors to access. You may imagine that a web service is like a mobile phone station/transmitter which provides connections to the mobile phones. In .Net, a web service can be implemented in ASMX format, WCF (Windows Communication Foundation) or simply ASPX (returning the data in XML or JSON format).

Now, the best part of a web service is not that it makes our solution cool. Instead, it is that it can be scaled out to a server other than the web server (the one hosting the website). In this case, our customer might end up with one web server hosting the website and a few more web servers hosting the web services.

Many programmers argue that web services are slow because of the XML SOAP involved in the communication. It's true that a web service in ASMX is slow, but the visitor won't really feel it because these communications are made among web servers sitting next to each other. Of course, a web service implemented with XML SOAP is not suitable for a real-time application. A real-time application requires low-level socket programming and the handling strategy will be different.


Monday, October 15, 2012

Security web service

When you are developing a large-scale application, you need a security module that is able to authenticate the users. In our system design, we developed a security web service which is shared among the sub-modules; this also reduces the development time.

In .Net, you can achieve this with the technology that you prefer:
  • Implementing the security service using ASMX/ASHX - this service will be hosted over HTTP/HTTPS.
  • Implementing the security service using WCF - in this case, you have the choice of different protocols such as TCP, named pipes, etc.
If you are asking why we need to reinvent the wheel when .Net already comes with security features for the enterprise, the answer is simple: our security web service can be tailor-made to our customers' requirements. We know that not all the projects we are involved in require a complex security service. Some require a basic login with user name and password only. Some require security control down to the individual field on the screen.

Update on 6th May 2017 - it seems that ASMX and WCF services are quite hard to convert to another programming language/platform. It is best to use ASHX (i.e., a generic handler), which has a faster response time due to its simplicity and flexibility. It's also easier to port over to another programming language/platform. A minimal sketch of a login call on such a service is shown below.
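
Here is a minimal sketch of what a login call on such a security service could look like as a generic handler; the credential check and the token are placeholders, not the actual implementation.

    using System;
    using System.Web;
    using System.Web.Script.Serialization;

    public class LoginHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string user = context.Request.Form["user"];
            string password = context.Request.Form["password"];

            // Placeholder check - a real service would validate against its user store.
            bool ok = !string.IsNullOrEmpty(user) && !string.IsNullOrEmpty(password);

            var result = new
            {
                msg = ok ? "ok" : "Invalid user name or password.",
                token = ok ? Guid.NewGuid().ToString("N") : null // hypothetical session token
            };

            context.Response.ContentType = "application/json";
            context.Response.Write(new JavaScriptSerializer().Serialize(result));
        }

        public bool IsReusable { get { return true; } }
    }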

Wednesday, December 21, 2011

Precompile Web Application Project

If you have a Web Site project, you have the option to precompile the website so that the website startup (in IIS) is responsive. But for a Web Application project, you don't have such a precompile option and you will have to do it manually.

Steps to precompile your Web Application project:
  1. Publish your project by right-clicking on the project name and choosing Publish.
  2. Choose "File System" as the publish method and specify the location.
  3. Once the publishing has completed, run the "Visual Studio Command Prompt (2010)" from the Windows Start menu.
  4. Then, type the following command and press Enter.
                    aspnet_compiler -p WebApp -v / WebAppPrecompile

The above command precompiles the "WebApp" folder locally without creating a virtual directory in IIS.

Reference:
http://msdn.microsoft.com/en-us/library/ms227976%28v=VS.90%29.aspx

Monday, August 9, 2010

String Comparison Optimization

    // String.Intern() returns the reference to the interned copy of the string.
    // Because C# string literals are also interned, the comparisons below can use a
    // cheap reference check (Object.ReferenceEquals) instead of a character-by-character
    // comparison.
    String myString = String.Intern("VERY LONG STRING #2");

    if (Object.ReferenceEquals(myString, "VERY LONG STRING #1"))
    {
        ...
    }
    else if (Object.ReferenceEquals(myString, "VERY LONG STRING #2"))
    {
        ...
    }
    else if (Object.ReferenceEquals(myString, "VERY LONG STRING #3"))
    {
        ...
    }
    ...
    else
    {
        ...
    }

Reference:
http://dotnetfacts.blogspot.com/2008/03/how-to-optimize-strings-comparison.html
http://msdn.microsoft.com/en-us/library/system.string.intern%28vs.71%29.aspx