RationalSpace

CSS3 features


http://tutorialzine.com/2013/10/12-awesome-css3-features-you-can-finally-use/


Written by rationalspace

March 24, 2015 at 5:10 pm

Posted in Frontend, UI


Browser Caching with response headers


Everyone wants fast websites, ideally with response times under 100ms. Even Google, in its quest to crawl more and more websites and grow its gigantic index, now gives more weight to websites that respond faster.

A lot depends on the communication between the browser and the web server. Browsers have become much smarter: if a resource has already been fetched once, the browser requests it again with an "If-Modified-Since" header carrying the last modified time. If the resource has not changed since it was last fetched, the server responds with status code 304, which tells the browser that the resource is unmodified and can be served from its cache. So the response headers sent by your web server play an important role in delivering resources to the user quickly.
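The exchange looks roughly like this (illustrative headers, trimmed down):

GET /css/style.css HTTP/1.1
Host: www.example.com
If-Modified-Since: Mon, 02 Mar 2015 10:00:00 GMT

HTTP/1.1 304 Not Modified
Date: Tue, 24 Mar 2015 17:10:00 GMT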

Conditional requests work well in a desktop environment, but for mobile applications they still turn out to be expensive: the browser must make a round trip to the server just to confirm whether it can serve from its cache. On mobile we are constrained by network speed and by the RAM and CPU of the device, so every extra request adds overhead to rendering the page and should be avoided.

So for things like JS, CSS and images, can't we just tell the browser not to request them at all, since we don't expect them to change frequently anyway?

Looks like we can. We can send a header such as "Cache-Control: max-age=31536000, public" (max-age is given in seconds; 31536000 is one year). This tells the browser that once it has fetched the resource, it need not go back to the server for it until the max-age window has elapsed.

We can also use another header, "Expires: <timestamp in GMT>", which tells the browser that the resource does not expire until the specified time. The value of Expires is an absolute HTTP date, whereas Cache-Control max-age is a relative amount of time, i.e. "X seconds after the page was requested".

Though both headers effectively do the same job, it may still be better to send both. Cache-Control is relatively new, introduced only in HTTP/1.1, and if your page passes through some old proxy, the header may not be understood.

An advantage of Cache-Control over Expires is that Expires is an absolute timestamp: if you specify only "Expires", the cached copies held by all your users expire at the same moment, and your server may get overloaded by their requests for the resource all arriving at the same time.
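Here is a minimal PHP sketch of sending both headers while serving a static asset (the file name is hypothetical):

<?php
//serve an image with a one-year cache lifetime
$maxAge = 31536000; //one year, in seconds
header('Cache-Control: max-age='.$maxAge.', public');
header('Expires: '.gmdate('D, d M Y H:i:s', time() + $maxAge).' GMT');
header('Content-Type: image/png');
readfile('logo.png');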

So why don't you go back and check the headers on your website? If these two are missing, make sure you put them in!

Note: An awesome resource on web caching

Written by rationalspace

March 2, 2015 at 4:07 pm

Posted in Frontend, Performance

Scaling web applications


As a web application starts becoming a hit, with growing traffic and a growing user base, the biggest challenge it faces is often scaling: how to ensure that all the features of your website work as well as ever even when many more requests per second are hitting your servers. Different scaling techniques are applied at different stages of a website's growth, from something like 100K sessions per month to billions of sessions per month.

Before we delve deeper into how to scale, one thing we need to understand is that performance != scalability. They are two very different things: performance is about how fast a single request executes and how optimally resources are used, whereas scalability is the ability of the architecture or the system to handle a large number of requests efficiently.

So a website can be analysed along two kinds of variables: the ones we want to be high, like performance, scalability, responsiveness and availability, and the ones we want to be low, like downtime, cost, maintenance effort and SPOFs (single points of failure). We have to keep these variables in mind while designing the architecture of scalable web applications.

There are several methods or architectural designs by which we can scale web applications.

The first and most common one is Vertical Scaling, also called "scaling up". Vertical scaling means adding more hardware to an existing node without adding more nodes. So if your current server has 4GB RAM and a dual-core CPU, you extend it to 8GB RAM and a quad-core CPU. The advantage of vertical scaling is that it is easy to do; you don't really need software changes. The disadvantage is that the cost increases exponentially, and vertical scaling does nothing about the SPOF problem: if the server goes down, the application dies with it, which can mean significant downtime for users.

The second way to scale is Vertical Partitioning. This involves partitioning your application in such a way that different components or software layers are put on different servers, with each server optimised for its particular component or layer. For example, web servers like Apache or Tomcat typically need more CPU because they handle many TCP/IP connections, while database servers like MySQL need more RAM because they hold a lot of tables and query results in memory, so it makes sense to put them on different nodes. The advantage of vertical partitioning is that each server can be tuned to its workload, and the application itself needs no changes. The disadvantage is that in some situations it leads to sub-optimal use of resources; on the node hosting the database, for instance, the CPU may sit idle most of the time. Also, in this kind of architecture the nodes are heterogeneous, so maintenance is a bit more complicated. Nevertheless, in most situations vertical partitioning is a good way to scale websites and it works pretty well.

The third way to scale is Horizontal Scaling, also called "scaling out". In this approach you simply add more nodes, each running the same copy of the application, put a load balancer in front of them and route traffic across them. Load balancers can be hardware or software based; a very popular open-source software load balancer is HAProxy. As traffic grows, you add more nodes behind the load balancer, and since all the nodes are homogeneous, scaling stays simple. One problem that has to be addressed when designing horizontally scaled systems is sessions. With a load balancer in front of the nodes, one request from a user may go to one node and the next request to another; if that happens, the user suddenly feels lost, being shown a login page again or, in an e-commerce application, watching the cart data vanish. There are several ways to handle this, the most common being "sticky sessions": the first request and all further requests from the same user go to the same server (see the sketch below). This works in most cases, though it has the slight disadvantage of asymmetric load balancing: all the requests of a particular user or user group pinned to one node may load that node heavily. That in turn can be handled by implementing central session storage or cluster session management.
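As a rough illustration, here is a minimal HAProxy sketch of cookie-based sticky sessions (the backend names and IPs are made up):

frontend http-in
    bind *:80
    default_backend app

backend app
    balance roundrobin
    # insert a SERVERID cookie so every later request from a user returns to the same node
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2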

So, as you might have observed, with each technique discussed here the level of complexity also increases.

Another way to scale is "Horizontal Partitioning" of the database. Since the database is often the bottleneck of a web app, it makes sense to split it across multiple servers. In this technique we divide the tables horizontally: the rows are spread across nodes based on schemes like first-come-first-served allocation, round robin or hashing (see the sketch below). The flip side of horizontal partitioning is that it needs code changes and is complex to maintain; you have to aggregate data across the shards, and any change to global settings has to be replicated across the nodes.
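A hash-based split can be as small as this hypothetical PHP sketch, where the same user id always lands on the same shard (the hostnames are made up):

<?php
//pick a database shard by hashing the user id
function shardFor($userId, $numShards){
    return abs(crc32((string)$userId)) % $numShards;
}

$shards = array('db01.internal', 'db02.internal', 'db03.internal');
$host = $shards[shardFor(42, count($shards))]; //user 42 always maps to the same host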

So this was my attempt to discuss the various techniques of scaling web applications. Hope it helps :)

Written by rationalspace

February 27, 2015 at 2:32 pm

Using JSONP to make requests across domains


Recently, I came across a requirement wherein we wanted to partner with other websites by giving them widgets of our stock charts. A widget is a small, useful, mostly plug-and-play piece of information or functionality: someone takes a small piece of code, embeds it in their blog or website, and the widget starts showing up. In our case we wanted to make our charts available to anyone who would like to embed them in their blog.

This is pretty simple if it is a static piece of information, like an image or a media file, that needs to be shown on the other website; all that takes is an HTTP request to your domain from the blogger's domain. But we wanted our charts to be dynamic, with the data updated each time a request is made. JSON is the natural format for that kind of client-server exchange.

JSON

JSON (JavaScript Object Notation) is a convenient way to transport data between applications, especially when the destination is a JavaScript application.

jQuery has functions that make Ajax/HTTP calls from a script to a server very easy, and $.getJSON() is a great shorthand for fetching a server response as JSON. But this simple approach fails if the page making the Ajax call is in a different domain from the server: browsers enforce the Same-Origin Policy, which prohibits such cross-domain calls as a security measure.

But what about data transfer across domains?

A standard workaround is Cross-Origin Resource Sharing (CORS), now implemented by most modern browsers. Yet many developers find it a heavyweight and somewhat pedantic approach. Also, you cannot possibly ask every blogger to first tell you their domain so you can add it to your list of allowed origins before the widget starts working for them. Pretty cumbersome, isn't it?
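For reference, allowing a partner origin with CORS is just a response header, and you would need one such entry per partner (the domain below is made up):

<?php
header('Access-Control-Allow-Origin: http://partner-blog.example.com');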

So what is the way out?

JSONP

JSONP (first documented by Bob Ippolito in 2005) is a simple and effective alternative that makes use of the ability of script tags to fetch content from any server.

This is how it works: a script tag has a src attribute that can point to any resource path, such as a URL, and the response need not be a static JavaScript file. The server can return generated JavaScript, so I can fetch data from another server and have that script draw the widget.
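Under the hood the browser just injects a tag like this (hypothetical URL), and the server replies with JavaScript that calls callback_fun with the JSON as its argument:

<script src="http://example.com/json/getSomeData.php?tick=AAPL&callback=callback_fun"></script>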

Here is an example:

Client Side:

$.ajax({
    type : 'GET',
    url : siteurl + 'json/getSomeData.php',
    data : {tick : (element.data('id')).toUpperCase()},
    dataType : 'jsonp',             //inject a <script> tag instead of using XHR
    jsonpCallback : 'callback_fun', //fixed callback name, so the response can be cached
    cache : true,
    success : function(data){
        //data is the parsed JSON the server wrapped in callback_fun(...)
    }
});

function callback_fun(data){
    //do something; update your widget etc
}

Server: PHP script example:

header("Content-Type: application/javascript");
$some_data = db_call(); //some db call
//wrap the JSON in a call to the callback named in the query string
$callback = isset($_GET['callback']) ? $_GET['callback'] : 'callback_fun';
echo $callback.'('.json_encode($some_data).');';

Note the parentheses wrapped around the JSON in the echo: they are what turn the response into an executable function call. The callback parameter matters just as much; without it the browser has no function to invoke, and the widget won't work.

See a demo here.

Written by rationalspace

February 9, 2015 at 4:20 pm

Performance techniques for responsive web design


There is no doubt that mobile usage has sky-rocketed in the last few years, and for a lot of people, especially in Asia and Africa, mobile is the only way they access the internet. With such compelling evidence, it is essential that a website become mobile friendly. In fact, being mobile friendly is not enough: it is equally important that the website loads fast on a mobile device.

So why does performance need special attention in mobile?

Before a mobile device can transmit or receive data, it has to establish a radio channel with the network, which can take several seconds. Worse, if no data is transmitted or received for a while, the channel times out and goes idle, and a new channel has to be established for the next request. This can obviously cause huge issues for your page load times.

On a typical United States desktop on WiFi, a request's average round trip takes 50 milliseconds; on a mobile network it is over 300 milliseconds, as slow as the old dial-up connections. Additionally, even WiFi is slower on handsets, thanks to antenna length and output power. This means you really need to prioritize performance as you optimize your site's design for mobile devices.

Techniques to improve performance of a responsive website

Over the past few months, conversations about responsive web design have shifted from issues of layout to performance: how can responsive sites load quickly, even on constrained mobile networks? So what can be done? Enter a set of techniques called RESS: Responsive Web Design + Server Side components.

So here is a list of things that can help:

Send smaller images to devices

The average weight of a web page today is 1.5MB, and 77% of that is just images, so optimising images helps performance significantly. Now how can we send smaller images to mobile devices? The older approach is to keep several sizes of each image on the server and, depending on the screen size, send the appropriate one, as the two snippets below show.

Detect client window size and set a cookie

<script type='text/javascript'>
function saveCookie(cookiename, cookieval){
    //write the cookie so the server can read the width on later requests
    document.cookie = cookiename + "=" + cookieval + "; path=/";
}
saveCookie("RESS", window.innerWidth);
</script>

Server Side Code to read size and deliver images

<?php
$screenWidth = isset($_COOKIE["RESS"]) ? (int)$_COOKIE["RESS"] : 0;
if($screenWidth > 0 && $screenWidth <= 320){
    $imgSize = "300";
}else if($screenWidth > 0 && $screenWidth <= 500){
    $imgSize = "480";
}else{
    $imgSize = "1024"; //and so on; default for desktops or when no cookie is set
}
echo '<img src="<path of file>_'.$imgSize.'.png" alt="" />';
?>

So what's the new way? With tools like "Adaptive Images", this becomes much easier! Adaptive Images detects your visitor's screen size and automatically creates, caches, and delivers device-appropriate rescaled versions of your web page's embedded HTML images. No mark-up changes needed.
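The setup is essentially an Apache rewrite that routes image requests through the tool's PHP script; the sketch below conveys the idea, but check the project's README for the exact rules:

# .htaccess sketch: send image requests through adaptive-images.php
RewriteEngine On
RewriteRule \.(?:jpe?g|gif|png)$ adaptive-images.php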

Conditional Loading

Another technique that helps improve performance is conditional loading: detect on the server side what kind of device the user is on (screen size, touch capability, etc.) and load only the content that user actually needs. From social widgets (Google, Facebook, Twitter sharing and the like) to maps to lightboxes, conditional loading can ensure that small-screen users don't download a whole bunch of stuff they can't use.

I found a good script on GitHub that helps with server-side detection: https://github.com/serbanghita/Mobile-Detect
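A minimal sketch of conditional loading with it, where the two included files are hypothetical widgets of our own:

<?php
require_once 'Mobile_Detect.php';

$detect = new Mobile_Detect;
if(!$detect->isMobile()){
    //only large screens get the heavy extras
    include 'widgets/social-share.php'; //hypothetical
    include 'widgets/map-embed.php';    //hypothetical
}
?>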

Feature detection

Don't load features that make no sense on mobile. A simple example is embedding a video: detect the browser and insert the video link only where it works, and otherwise show simple text.

A great tool for finding your user's browser capabilities is Modernizr. However, you can only access its API in the browser itself, which means you can't easily use that knowledge in your server logic. Client-side detection is fine for tweaking things and changing appearance, but sometimes it is better to send the correct markup from the server in the first place. The modernizr-server library brings Modernizr's browser data into your server scripting environment, so you can check on the server whether the browser supports things like canvas, canvastext or geolocation.
<?php
include('modernizr-server.php');
//$modernizr now carries the capabilities of the requesting browser
if($modernizr->svg){
    //serve SVG markup
}elseif($modernizr->canvas){
    //fall back to canvas
}
?>

Putting all these techniques together, you can dramatically improve the performance of your responsive site. There's really no excuse for serving the same large assets across all browser widths. Make your responsive website respond not only to changing design patterns but also to the browser environment it's being served into. Go mobile first, and performance first, when designing and coding your next responsive website.

Written by rationalspace

June 20, 2014 at 1:07 pm

5 fold speed increase – switch to fpm


To all my friends who have been working on website performance, here is some good news: with the latest Apache version 2.4, if you simply switch from running PHP as a module to using PHP-FPM, you can increase your website speed up to 5 times!

PHP as module

Most people are aware that PHP is usually run as an Apache module (mod_php). In that setup the PHP interpreter is embedded inside every Apache worker process, so each worker carries PHP's full memory footprint even when it is only serving static files.

php-fpm

php-fpm, which stands for PHP FastCGI Process Manager, is an alternative FastCGI implementation of PHP. It runs PHP in a separate pool of worker processes that it spawns, recycles and limits on its own, and the web server hands PHP requests over to that pool.

With the arrival of mod_proxy_fcgi, Apache finally gets the ability to talk neatly to external FastCGI process managers. Delegating PHP requests to external FPM workers greatly reduces the load on web servers like Apache, resulting in more efficient use of machine resources and faster processing for users on the other end. Along with all that, PHP-FPM can run opcode caching engines like APC in a very stable manner.
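Wiring the two together is essentially one proxy rule. Here is a minimal httpd.conf sketch, assuming php-fpm is listening on 127.0.0.1:9000 and the docroot is /var/www/html (adjust both to your setup):

# load the proxy modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

# hand every .php request over to the php-fpm pool
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1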

Load Testing

Previous setup: Apache 2.4.3, PHP 5.4.10, MySQL 5.5.28, PHP as module

New setup: Apache 2.4.9, PHP 5.4.29, MySQL 5.5.36, PHP-FPM with Apache module mod_proxy_fcgi

Here are the results of running siege for 60 seconds:

siege -b -t60S <url>

                          Mod PHP            PHP-FPM
Transactions:             5989 hits          1180 hits
Availability:             100.00%            100.00%
Elapsed time:             60.00 secs         59.61 secs
Data transferred:         236.95 MB          75.82 MB
Response time:            0.15 secs          0.75 secs
Transaction rate:         99.82 trans/sec    19.80 trans/sec
Throughput:               3.95 MB/sec        1.27 MB/sec
Concurrency:              14.97              14.93
Successful transactions:  5989               1180
Failed transactions:      0                  0
Longest transaction:      2.61               1.14
Shortest transaction:     0.08               0.31

Previously, the average time a page took to fetch (just the HTML) was 120-150ms. After this change it came down to 20-30ms! Quite a delightful observation. :)

Resources

Check out the following link to learn more:
https://wiki.apache.org/httpd/PHP-FPM

Written by rationalspace

June 16, 2014 at 6:25 pm

Sending Emails With Attachments using Amazon SES


Amazon Simple Email Service (Amazon SES) is a very reliable email service from Amazon for sending email: transactional messages, communication with your customers, registration mails, or bulk email like newsletters.
It is easy to set up, scales well and is quite cheap. Gone are the days when one had to pay steep prices to a hosting provider for email campaigns with all kinds of caps and limits attached.
With SES you can send thousands of emails a day, and if you hit your sending limit you can ask Amazon to raise the threshold. Along with high deliverability, Amazon SES provides real-time access to your sending statistics and built-in notifications for bounces and complaints to help you fine-tune your email-sending strategy.

So how does one get started? Amazon has written quite good documentation here, so I will not dwell on that.

My focus here is to share how to do it in PHP. There are many ways to download the required package, but I used the AWS phar. You need PEAR to download the phar and set it up.

Setting up phar

  1. sudo apt-get install php-pear (if you don't have PEAR already)
  2. sudo pear -D auto_discover=1 install pear.amazonwebservices.com/sdk

Now, to send an email with an attachment, here is the code:

Include the phar file

require '/some path/php/lib/php/AWSSDKforPHP/aws.phar';
use Aws\Ses\SesClient;
global $client;

Create client object

$client = SesClient::factory(array(
'key' => 'your key',
'secret' => 'your secret',
'region' => 'your region'
));

The file you want to attach to the email

$myfile = "path to file to be attached";
$file_size = filesize($myfile);
$handle = fopen($myfile, "r");

Read the file content

$content = fread($handle, $file_size);

Format $content using RFC 2045 semantics with chunk_split(), which inserts \r\n at the end of every chunk. Receiving mail software expects base64 bodies to be split this way; without it, clients may simply reject your attachment.

$content = chunk_split(base64_encode($content));
$header = "";
$message = "some html string";

You need to give the multipart message a unique boundary:

$uid = md5(uniqid(time()));
$header = "From: ".$source_email_id." <".$source_email_id.">\r\n";
$header .= "Reply-To: ".$replyto_email_id."\r\n";
$header .= "To: ".$dest_email_id."\r\n";
$header .= "Bcc: ".$bcc_email_id."\r\n";
$header .= "Subject: ".$subject_of_the_email."\r\n";
$header .= "MIME-Version: 1.0\r\n";
$header .= "Content-Type: multipart/mixed; boundary=\"".$uid."\"\r\n\r\n";
$header .= "This is a multi-part message in MIME format.\r\n";
$header .= "--".$uid."\r\n";
$header .= "Content-type:text/html; charset=iso-8859-1\r\n";
$header .= "Content-Transfer-Encoding: 7bit\r\n\r\n";
$header .= $message."\r\n\r\n";
$header .= "--".$uid."\r\n";
$header .= "Content-Type: text/csv; name=\"".$myfile."\"\r\n";
use different types here

$header .= "Content-Transfer-Encoding: base64\r\n";
$header .= "Content-Disposition: attachment; filename=\"".$myfile."\"\r\n\r\n";
$header .= $content."\r\n\r\n";
$header .= "--".$uid."--";
$msg['RawMessage']['Data'] = base64_encode($header);
//Source and Destinations sit beside RawMessage in the sendRawEmail parameters
$msg['Source'] = $source_email_id;
$msg['Destinations'] = array($dest_email_id);
fclose($handle);

Now send the mail

try{
    $result = $client->sendRawEmail($msg);
    //save the MessageId which can be used to track the request
    $msg_id = $result->get('MessageId');
}catch(Exception $e){
    //log failures (throttling, rejected addresses etc)
    error_log($e->getMessage());
}

So, as you will have observed, there are quite a few details to handle when sending an email with an attachment: the unique boundary, the proper line endings at the end of each message chunk, base64 encoding, and so on.

Hope this helps.

Written by rationalspace

June 6, 2014 at 6:52 pm

Posted in Cloud, Utilities
