Archive for the ‘Architecture & Others’ Category

Scaling web applications


As a web application starts becoming a hit, with increasing traffic and a growing user base, often the biggest challenge it faces is scaling: how do you ensure that all the features of your website work as well as ever even when many more requests per second are hitting your server? Different scaling techniques are applied at different stages of a website's growth – from something like 100K sessions per month to billions of sessions per month.

Before we delve deeper into how to scale, one thing we need to understand is that performance != scalability. Performance and scalability are two very different things. Performance is about how fast a single request can be executed, or how optimally resources are used. Scalability, on the other hand, is the ability of the architecture or the system to handle a large number of requests efficiently.

So a website can be analysed along two kinds of variables – the ones we want to be high, like performance, scalability, responsiveness and availability, and the ones we want to be low – downtime, cost, maintenance, SPOF (single point of failure). We have to keep these variables in mind while designing the architecture of scalable web applications.

There are several methods or architectural designs by which we can scale web applications.

The first and a very common one is Vertical Scaling. This type of scaling is also called "scaling up". Vertical scaling basically means that you add more hardware without adding more nodes. So if your current server has 4GB RAM and a dual-core CPU, you extend it to 8GB RAM and a quad-core. The advantage of vertical scaling is that it is easy to do; you don't really need software skills for it. On the other hand, the disadvantage is that the cost rises steeply as you buy ever bigger machines. Also, vertical scaling does not address SPOF: if the server goes down, the application dies too – which can lead to situations where users face significant downtime.

The second way to scale is Vertical Partitioning. This involves partitioning your application in such a way that different components or software layers are put on different servers, and each server is optimised to handle that particular component or layer. For example, web servers like Apache or Tomcat typically need more CPU, as they have to handle many TCP/IP connections, while database servers like MySQL need more RAM, as they keep a lot of tables and queries in memory. So it makes sense to put them on different nodes. The advantage of vertical partitioning is that we can optimise each server according to what it runs, and we need not change anything in the application itself. However, the disadvantage is that in some situations it might lead to sub-optimal use of resources – for example, on the node that hosts the database, the CPU may remain idle most of the time. Also, in this kind of architecture all nodes are heterogeneous, so maintenance is a bit more complicated. Nevertheless, in most situations vertical partitioning is a good way to scale websites and it works pretty well.

The third way to scale is Horizontal Scaling. In this approach, you simply add more nodes to the system, each running the same copy of the application. This type of scaling is also called "scaling out". You put a load balancer in front of your nodes and let it route the traffic. Load balancers can be hardware or software based; a very popular open source software load balancer is HAProxy. As your traffic increases, you increase the number of nodes behind the load balancer. Since all the nodes are homogeneous in this case, it is simpler to scale. One problem that needs to be addressed when designing horizontally scaled systems is sessions. Once we have a load balancer in front of our nodes, it can happen that one request of a user goes to one node and the subsequent one goes to another. If this happens, the user will suddenly feel lost: the application will show him a login page again, or in the case of an e-commerce application, all the cart data may suddenly vanish. There are several ways to handle this – the most common being "sticky sessions", which means that the first request and all further requests of the same user go to the same server. This works in most cases, though it has the slight disadvantage of asymmetric load balancing – all requests of a particular user or user group going to one node may load that node heavily. That in turn can be handled by implementing central session storage or cluster session management.
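As a sketch of the sticky-session idea (the backend names here are hypothetical, and a real load balancer like HAProxy would track the assignment with a session cookie rather than an in-memory map):

```javascript
// Sticky sessions sketch: the first request for a session picks a backend
// (round robin here), and every later request for that session reuses it.
const backends = ['app1:8080', 'app2:8080']; // hypothetical backend nodes
const sessionMap = new Map();                // sessionId -> assigned backend
let next = 0;

function routeRequest(sessionId) {
  if (!sessionMap.has(sessionId)) {
    sessionMap.set(sessionId, backends[next++ % backends.length]);
  }
  return sessionMap.get(sessionId);
}
```

Note the asymmetric-load problem is visible even here: if one session is much busier than the others, its assigned node carries all of that load.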

So as you might have observed, with the progress of each technique discussed here, the level of complexity is also increasing.

Another way to scale is "Horizontal Partitioning" of the database. Since the database is often the bottleneck of a web app, it makes sense to divide it across multiple servers. In this technique we divide the tables horizontally: the rows are distributed across nodes based on schemes like first-come-first-served, round robin, hashing etc. The flip side of horizontal partitioning is that it needs code changes and is complex to maintain – you need to aggregate data across the shards. Also, if a global setting changes, it needs to be replicated across nodes.
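To make that concrete, here is a minimal sketch (plain JavaScript Maps standing in for database nodes) of hash-based horizontal partitioning, including the cross-shard aggregation just mentioned:

```javascript
// Horizontal partitioning sketch: each row is assigned to a shard by
// hashing its key, so every shard holds only a subset of the table.
const shards = [new Map(), new Map(), new Map()]; // stand-ins for 3 DB nodes

function shardFor(key) {
  let h = 0;
  for (const ch of String(key)) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return shards[h % shards.length];
}

function insertRow(key, row) { shardFor(key).set(key, row); }
function getRow(key)         { return shardFor(key).get(key); }

// A global query can no longer hit one node: it must fan out to every shard.
function countAllRows() {
  return shards.reduce((sum, s) => sum + s.size, 0);
}
```

Lookups by key stay cheap (one node), but anything global – counts, joins, changed settings – now touches every shard, which is exactly the maintenance cost described above.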

So this was my attempt to discuss the various techniques of scaling web applications. Hope it helps 🙂

Written by rationalspace

February 27, 2015 at 2:32 pm

Using JSONP to make requests across domains


Recently, I came across a requirement wherein we wanted to partner with other websites by giving them widgets of our stock charts. A widget is basically a small, self-contained tool or piece of information that is mostly just plug and play: one can take a small piece of code, embed it in a blog or website, and the information will start showing up. In our case we wanted to make our charts available to anyone who would like to embed them in his or her blog.

This is pretty simple to do if it is some static piece of information, like an image or a media file, that needs to be shown on the other website: all you need to do is make an HTTP request to your domain from the blogger's page. But we wanted our charts to be dynamic – the data should get updated each time a request is made. Now, JSON has long been the standard format for this kind of data exchange between client and server.


JSON (Javascript Object Notation) is a convenient way to transport data between applications, especially when the destination is a Javascript application.

jQuery has functions that make Ajax/HTTP calls from a script to a server very easy, and $.getJSON() is a great shorthand for fetching a server response as JSON. But this simple approach fails if the page making the Ajax call is in a different domain from the server: the Same Origin Policy prohibits such cross-domain calls in browsers as a security measure.

But what about data transfer across domains?

A standard workaround is Cross-Origin Resource Sharing (CORS), which is now implemented by most modern browsers. Yet many developers find this a heavyweight and somewhat pedantic approach. Also, you cannot possibly ask every blogger to first inform you about his domain, so that you can add it to your CORS configuration, before the widget starts working for him. Pretty cumbersome, isn't it?

So what is the way out?


JSONP (first documented by Bob Ippolito in 2005) is a simple and effective alternative that makes use of the ability of script tags to fetch content from any server.

This is how it works: a script tag has a src attribute which can be set to any resource path, such as a URL, and the response need not be a static JavaScript file. Using this, I can fetch data from another server and have the JavaScript draw a widget with it.

Here is an example:

Client Side:

$.ajax({
    type : 'GET',
    url : siteurl+'json/getSomeData.php',
    data : {tick:(element.data('id')).toUpperCase()},
    dataType : 'jsonp',
    jsonp : 'callback',             //name of the query parameter the server reads
    jsonpCallback : 'callback_fun', //tell jQuery to use our named callback
    cache : true,
    crossDomain : true,
    success : function(data){
        //called after the jsonp response has been evaluated
    }
});

function callback_fun(data){
    //do something; update your widget etc
}

Server:  PHP script example:

header("Content-Type: application/javascript");
$some_data = db_call(); //some db call
echo (isset($_GET['callback']) ? $_GET['callback'] : '').'('.json_encode($some_data).')';

Note the brackets wrapping the JSON in the PHP echo – they are important, as is the callback parameter: the response must be a call to the client's callback function, or it won't work.
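For reference, the same pattern without jQuery is just a dynamically injected script tag. The sketch below separates out the URL-building step; the endpoint and the `tick` parameter are the hypothetical ones from the example above:

```javascript
// Build the JSONP request URL (a pure function, usable outside the browser).
function buildJsonpUrl(url, params, callbackName) {
  const query = Object.entries(Object.assign({}, params, { callback: callbackName }))
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  return url + (url.indexOf('?') >= 0 ? '&' : '?') + query;
}

// Browser-only part: inject a <script> tag; the server's response is a call
// to our global callback, which the browser executes on arrival.
function jsonp(url, params, onData) {
  const cb = 'jsonp_cb_' + Date.now();
  window[cb] = function (data) { delete window[cb]; onData(data); };
  const s = document.createElement('script');
  s.src = buildJsonpUrl(url, params, cb);
  document.head.appendChild(s);
}
```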

See a demo here.

Written by rationalspace

February 9, 2015 at 4:20 pm

Performance techniques for responsive web design


There is no doubt that mobile usage has sky-rocketed in the last few years. For a lot of people – especially in Asia and Africa – mobile is the only way they access the internet. With such compelling evidence, it is essential for a website to be mobile friendly. In fact, being mobile friendly is not enough: it is equally important that the website loads fast on a mobile device.

So why does performance need special attention in mobile?

Before a mobile device can transmit or receive data, it has to establish a radio channel with the network, which can take several seconds. Worse, if no data is transmitted or received for a while, the channel times out and goes idle, and a new channel has to be established for the next request. This can obviously cause huge issues for your page load times.

On a typical United States desktop using WiFi, a request’s average round trip takes 50 milliseconds. On a mobile network, it’s over 300 milliseconds. This is as slow as old dial-up connections. Additionally, even WiFi is slower on handsets, thanks to antenna length and output power. This means you really need to prioritize performance as you optimize your site’s design for mobile devices.

Techniques to improve performance of a responsive website

Over the past few months, conversations about responsive Web design have shifted from issues of layout to performance – that is, how responsive sites can load quickly even on constrained mobile networks. So what can be done? Here comes a new set of techniques called RESS – Responsive Web Design + Server Side Components.

So here is a list of things that can help :

Send smaller images to devices

The average weight of a webpage today is 1.5MB, and 77% of that is just images! So optimising images helps significantly. Now how can we send smaller images to mobile devices? An older approach is to maintain images at different sizes on the server and, depending on the screen size, send the appropriate one.

Detect client window size and set a cookie

<script type='text/javascript'>
function saveCookie(cookiename,cookieval){
    document.cookie = cookiename + "=" + cookieval + "; path=/"; //write cookie
}
saveCookie('screenWidth', screen.width);
</script>

Server Side Code to read size and deliver images

$screenWidth = isset($_COOKIE['screenWidth']) ? $_COOKIE['screenWidth'] : "";
if($screenWidth=="320"){ //widths here are illustrative
    $imgSize = "300";
}else if($screenWidth=="500"){
    $imgSize = "480";
} //and so on
echo "<img src=\"<path of file>_{$imgSize}.png\" alt=\"\" />";


So what's the new way? With tools like "Adaptive Images", this is made much easier! Adaptive Images detects your visitor's screen size and automatically creates, caches, and delivers device-appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed.

Conditional Loading

Another technique that helps improve performance is conditional loading: you detect on the server side the kind of device the user is on – screen size, touch capabilities, etc. – and load only the content that is necessary for that user to see. From social widgets (Google, Facebook, Twitter sharing etc.) to maps to lightboxes, conditional loading can ensure that small-screen users don't download a whole bunch of stuff they can't use.
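A minimal sketch of the decision itself (the widget names and width thresholds are illustrative, not from any specific library):

```javascript
// Conditional loading sketch: choose which optional widgets to load
// based on the detected screen width, instead of sending everything.
function widgetsToLoad(screenWidth) {
  const widgets = [];
  if (screenWidth >= 768)  widgets.push('map', 'lightbox'); // tablet and up
  if (screenWidth >= 1024) widgets.push('social-share');    // desktop only
  return widgets;
}
```

Each returned name would then map to a script that gets injected only when needed, so a phone never downloads the desktop-only extras.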

I found a good script on github that helps in server side detection – https://github.com/serbanghita/Mobile-Detect

Feature detection

Don’t load the features that won’t make sense on mobile. A simple example could be inserting a video link. Detect the browser and insert video link only where it works. Else show simple text.

A great tool for finding your user's browser capabilities is Modernizr. However, you can only access its API in the browser itself, which means you can't easily benefit from knowing about browser capabilities in your server logic. Client-side detection can help tweak things like appearance, but sometimes it's better to send the correct markup from the server side itself. The modernizr-server library is a way to bring Modernizr browser data to your server scripting environment. For example, you can detect whether the browser supports canvas, canvastext, geolocation and so on.

if ($modernizr->svg) {
    //serve SVG markup
} elseif ($modernizr->canvas) {
    //fall back to canvas
}

Putting all these techniques together you can dramatically improve the performance of your responsive site. There’s really no excuse for serving the same large sized assets across all browser widths. Make your responsive website respond not only to changing design patterns but to the browser environment it’s being served into. Go mobile first and performance first when designing and coding your next responsive website.

Written by rationalspace

June 20, 2014 at 1:07 pm

5 fold speed increase – switch to fpm


To all my friends who have been working on website performance improvement, here is some good news: with the latest Apache version 2.4, if you simply switch from using PHP as a module to using PHP-FPM, you can increase your website speed up to 5 times!

PHP as module

Most people are aware that PHP is usually run as an Apache module (mod_php): the PHP interpreter is embedded in every Apache process, so each process carries the full weight of PHP even when it is only serving static files.


php-fpm, which stands for PHP FastCGI Process Manager, is an alternative FastCGI implementation for PHP: it runs PHP in its own pool of worker processes, outside the web server, and the web server forwards PHP requests to that pool.

With the arrival of mod_proxy_fcgi, Apache finally gets the ability to talk neatly to external FastCGI process managers, making it more efficient. Delegating PHP requests to external FPM pools greatly reduces the load on web servers like Apache, resulting in more efficient utilisation of machine resources and faster processing for users on the other end. Along with all that, PHP-FPM can run opcode caching engines like APC in a very stable manner.
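A minimal sketch of the Apache side of this setup (the socket address and document root are placeholders; adjust them to your installation):

```apacheconf
# Forward every .php request to a PHP-FPM pool listening on 127.0.0.1:9000.
# Requires mod_proxy and mod_proxy_fcgi to be loaded (Apache 2.4+).
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/html/$1
```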

Load Testing

Previous setup :  Apache 2.4.3, PHP 5.4.10,MySQL 5.5.28, PHP as module

New setup: Apache 2.4.9,  PHP 5.4.29,MySQL 5.5.36, PHP-FPM with apache module mod_proxy_fcgi

Here are the results of running siege for 60 seconds against both setups.

siege -b -t60S <url>

                          PHP-FPM          PHP as module
Transactions:             5989 hits        1180 hits
Availability:             100.00 %         100.00 %
Elapsed time:             60.00 secs       59.61 secs
Data transferred:         236.95 MB        75.82 MB
Response time:            0.15 secs        0.75 secs
Transaction rate:         99.82 trans/sec  19.80 trans/sec
Throughput:               3.95 MB/sec      1.27 MB/sec
Concurrency:              14.97            14.93
Successful transactions:  5989             1180
Failed transactions:      0                0
Longest transaction:      2.61             1.14
Shortest transaction:     0.08             0.31

Previously, the average time a page (just the HTML) used to take to fetch was 120-150ms. After this change it came down to 20-30ms! Quite a delightful observation 🙂






Written by rationalspace

June 16, 2014 at 6:25 pm

Responsive design is not the best way for all websites


When the only changing factor in the Web experience is the user’s device, responsive design is a useful solution. It works very well for content sites like magazines and newspapers, because content is simply being reformatted. If you’re accessing a publication’s website on a smartphone, for example, you still want to read the news, just smaller parts of it.

People magazine recently adopted responsive design to great effect in order to scale traditional Web content across screens. This works well for magazines and other content publishers, as users are coming to consume content, not necessarily to interact or search for certain answers.

At the device level, responsive design works best if the page contains the type of text and image-based content often found on publisher sites. However, content delivery on responsive sites has the potential to deter users. For instance, if you’re trying to deliver complex functionality built with CSS, JavaScript, Ajax, and other heavy Web development technologies, pages will be heavy and the experience will be dramatically slower on a smartphone or tablet. Time lost equals users lost, as page load times have a direct impact on your ability to deliver users a positive experience.




Written by rationalspace

October 8, 2013 at 7:27 pm

Nginx vs Apache


Major differences:

  1. Nginx is event-based while Apache is process-based – i.e. Apache (in its traditional MPMs) dedicates a process or thread to each connection, while in nginx a single worker handles many connections in an event loop
  2. Apache handles requests synchronously while nginx is asynchronous
  3. Nginx works very well for serving static files
  4. Nginx consumes less memory – roughly a few MB of RAM for 10K concurrent connections
  5. It is light-weight; most of the time we do not need as many features as Apache provides

Written by rationalspace

February 24, 2013 at 7:01 pm

Non-relational Database Systems – NoSQL


There is a paradigm shift between the web applications of the early 1990s and those of today. Most earlier web applications were content sites, where a few people created or updated content and many more consumed it – fewer writes, more reads. With the advent of social networking, more and more users are creating content in the application, so there is a shift from read-mostly architectures to read/write or write-heavy architectures. In such a scenario, traditional relational databases do not suffice as the number of users grows. In a large database, to make queries faster, the logic relating the tables is often moved to the application level; once that happens, the relational features of the database are no longer being used, and the database becomes just a store.

Hence was born a class of systems that do not maintain relations and are much more flexible and scalable. "NoSQL" is the term used to describe the increasing usage of non-relational databases among Web developers. The approach was initially pioneered by large-scale Web companies like Facebook (Cassandra), Amazon (Dynamo) and Google (BigTable), but is now finding its way down to smaller sites like Digg. The movement was initially called No-SQL, as in an alternative to SQL, but the name was later reinterpreted as "Not Only SQL" 🙂

Salient features:

  1. It isn't a relational database.
  2. There are no fixed schemas.
  3. Rich querying is only partially supported.
  4. Many systems offer little functionality beyond record storage (e.g. key–value stores).
  5. Highly optimised for retrieval and append operations.
  6. No hard limit on the number of columns or the size of data.
  7. The development cycle can be faster, as one does not need to spend much time on data modelling and schema design.
  8. Distributed, fault-tolerant architecture: several NoSQL systems hold data redundantly on multiple servers, so the system can easily scale out by adding more servers, and the failure of a server can be tolerated. This type of database typically scales horizontally and is used for managing large amounts of data, when performance and real-time behaviour matter more than strict consistency.
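As a toy illustration of the first two points (no relations, no fixed schema), here is a sketch in JavaScript of a key–value record store where every record can have a different shape; the record ids and fields are made up for the example:

```javascript
// Key-value store sketch: records are opaque documents keyed by id.
// Unlike a relational table, no schema is declared, and records in the
// same store may have completely different fields.
const store = new Map();

function put(id, doc) { store.set(id, doc); }
function get(id)      { return store.get(id); }

// Two records with different "schemas" live side by side:
put('user:1', { name: 'alice', followers: 120 });
put('post:9', { author: 'user:1', body: 'hello', tags: ['nosql'] });
```

Relating the post back to its user ("author") is now the application's job – exactly the trade-off described above.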

Written by rationalspace

February 14, 2013 at 5:51 pm
