Archive for the ‘Apache’ Category

5 fold speed increase – switch to fpm


To all my friends who have been working on website performance, here is some good news. With the latest Apache version 2.4, if you simply switch from using PHP as a module to using PHP-FPM, you could increase your website speed up to 5 times!

PHP as module

Most people run PHP as an Apache module (mod_php), where the PHP interpreter is embedded in every Apache process. This is simple to set up, but each Apache worker carries the full interpreter in memory, even when it is only serving static files.

PHP-FPM

PHP-FPM, which stands for PHP FastCGI Process Manager, runs PHP as a separate pool of worker processes outside the web server, to which Apache hands off PHP requests over the FastCGI protocol.

With the arrival of mod_proxy_fcgi, Apache finally gets the ability to talk neatly to external FastCGI process managers, making it more efficient. Delegating PHP requests to external FPM servers greatly reduces the load on web servers like Apache, resulting in efficient utilisation of machine resources and faster processing for users on the other end. Along with all that, PHP-FPM can run opcode caching engines like APC in a very stable manner.
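As a concrete sketch, the delegation can be configured like this (the socket address and document root are placeholders; match them to your own FPM pool):

```apache
# Route every .php request to a PHP-FPM pool on 127.0.0.1:9000
# (address, port and path are placeholders for your setup)
ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1
```

On Apache 2.4.10 and later the same delegation can be expressed with SetHandler "proxy:fcgi://127.0.0.1:9000" inside a FilesMatch block. Either way, mod_proxy and mod_proxy_fcgi must be loaded.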

Load Testing

Previous setup: Apache 2.4.3, PHP 5.4.10, MySQL 5.5.28, PHP as module

New setup: Apache 2.4.9, PHP 5.4.29, MySQL 5.5.36, PHP-FPM with Apache module mod_proxy_fcgi

Here are the results of running siege for 60 seconds:

siege -b -t60S <url>

                         PHP-FPM          PHP as module
Transactions:            5989 hits        1180 hits
Availability:            100.00%          100.00%
Elapsed time:            60.00 secs       59.61 secs
Data transferred:        236.95 MB        75.82 MB
Response time:           0.15 secs        0.75 secs
Transaction rate:        99.82 trans/sec  19.80 trans/sec
Throughput:              3.95 MB/sec      1.27 MB/sec
Concurrency:             14.97            14.93
Successful transactions: 5989             1180
Failed transactions:     0                0
Longest transaction:     2.61             1.14
Shortest transaction:    0.08             0.31

Previously, the average time a page took to fetch (just the HTML) was 120-150ms. After this change it came down to 20-30ms! Quite a delightful observation. :)



Written by rationalspace

June 16, 2014 at 6:25 pm

Forcing gzip content in response


Recently we observed a very odd thing in our Apache logs. A handful of requests were taking a very long time to process. When we checked the same requests in the PHP logs, we saw that they had been processed pretty fast at PHP's end. So the issue lay in serving the response to the client. Another interesting thing was that all these requests were from User-Agent IE6. We figured that the response we were sending back was not gzipped. But since IE6+ supports gzip, how come we were not sending the compressed response back? A little more digging revealed that the header Accept-Encoding: gzip was not coming in the request, and hence we were sending an uncompressed response.

Weird. Why would IE6 not send the correct headers? Who is the culprit?

The article Use compression to make the web faster from the Google Code Blog contains some interesting information:

anti-virus software, browser bugs, web proxies, and misconfigured web servers.  The first three modify the web request so that the web server does not know that the browser can uncompress content. Specifically, they remove or mangle the Accept-Encoding header that is normally sent with every request. 

This is hard to believe, but it's true. According to a Google developer's post:

a large web site in the United States should expect roughly 15% of visitors don’t indicate gzip compression support.

There is also additional information:

  • Users suffering from this problem experience a Google Search page that is 25% slower – 1600ms for compressed content versus 2000ms for uncompressed.
  • Google Search was able to force the content to be compressed (even though the browser didn’t request it), and improved page load times by 300ms.
  • Internet Explorer 6 downgrades to HTTP/1.0 and drops the Accept-Encoding request header when behind a proxy. For Google Search, 36% of the search results sent without compression were for IE6.

One way is simply to force a gzip response irrespective of the request headers. Since most browsers accept gzip, this should not be a problem. It can be achieved by setting the force-gzip flag in the mod_deflate section of the Apache config:

BrowserMatchNoCase (MSIE|Firefox|Chrome|Safari|Opera) force-gzip

And restart Apache. But this is not a 100% safe method, since some requests will come from very old browsers that do not understand gzip and will fail to render the forced response.

There could be ways to check whether the browser supports gzip without relying on the Accept-Encoding header, and only then send the compressed response. We could probably do the following:

  1. Inspect all requests missing a valid Accept-Encoding header.
  2. Look at the User-Agent.
  3. If it's a "modern" browser (IE 6+, Firefox 1.5+, Safari 2+, Opera 7+, Chrome)…
  4. …and if the request does not have a special cookie…
  5. Run a test.

Check if the browser supports GZIP

At the bottom of a page, inject JavaScript that checks for a cookie; if it is absent, sets a session cookie with a stop value; and writes an iframe element into the page:

if (!document.cookie.match(/GZ=Z=[01]/)) {
  document.cookie = 'GZ=Z=0';
  var i = document.createElement('iframe');
  i.src = '/compressiontest/gzip.html';
  document.body.appendChild(i); // append the iframe to the document
}

Running the test

The server responds with an HTML document containing a JavaScript block in the body, served with the following headers:
Content-Type: text/html
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Cache-Control: no-cache, must-revalidate
Content-Encoding: gzip

We do not want this response to be cached.
The response body is always served compressed, regardless of the Accept-Encoding header on the request.

If the browser understands the compressed response, it executes the JavaScript and sets the session cookie to a “compression ok” value.
document.cookie = 'GZ=Z=1; path=/';

If the browser does not understand the response, it silently fails and the cookie value remains the same.

Forcing compression

Subsequent requests from the client will contain this session cookie with its updated value.
The server always sends compressed content to requests that contain the cookie with the “compression ok” value.
This means the response to the very first request is never compressed; we are only able to compress from the second request onwards.
The server never sends the compression testing JavaScript to requests that contain the stop value in the cookie.
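As a sketch of the "always send compressed content to cookie-carrying requests" step, mod_deflate's force-gzip variable can be keyed off the test cookie via mod_setenvif (the cookie name GZ=Z=1 follows this post's convention; the directives themselves are standard Apache):

```apache
# If the test cookie says the client decoded gzip fine, force compression
# even when a proxy has stripped the Accept-Encoding header
SetEnvIf Cookie "GZ=Z=1" force-gzip
```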

Gzip compresses the page by about 70%!
This reduces page load time for affected requests by ~15%!
This provides a noticeable latency win and bandwidth savings for both the website owners and the users.


Written by rationalspace

May 14, 2014 at 6:24 pm

Posted in Apache, Performance

Hiding Apache and PHP version information in response headers


This is an important security check that all webmasters should make – hide the web server information from the response headers.

One can easily check the response headers in Firebug. They look like this:

Server: Apache 2.4

To hide the Apache version from the response headers, add the following to your Apache config:

ServerTokens ProductOnly
ServerSignature Off

And restart apache.

You can do a similar thing to hide PHP information. Find your php.ini and make sure this directive is set:
expose_php = Off

Again, restart apache and you are done.
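To verify the change, you can scan the response headers for version strings. A minimal sketch (the sample headers below stand in for what `curl -sI http://your-site/` would return after the fix):

```shell
# Count header lines that still leak a version number; after the fix
# the Server header is just "Apache", so the count should be zero
headers='HTTP/1.1 200 OK
Server: Apache
Content-Type: text/html'
leaks=$(printf '%s\n' "$headers" | grep -ciE 'apache/[0-9]|php/[0-9]' || true)
echo "$leaks"   # prints 0
```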

Written by rationalspace

April 9, 2014 at 4:22 pm

Analysing apache logs


View Apache requests per day

awk '{print $4}' <log file> | cut -d: -f1 | uniq -c

Code breakdown:

awk '{print $4}' <log file> – use awk to print the 4th column of the Apache access log, which is the timestamp.
cut -d: -f1 | uniq -c – use cut with the delimiter set to a colon (:) to grab the 1st field, which leaves just the date; then count the identical dates with uniq -c.

You should get back something like this:

6095 [20/Jan/2014
7281 [21/Jan/2014
6517 [22/Jan/2014
5278 [23/Jan/2014
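To see the pipeline in action without a real log, here is a sketch over two fabricated access-log lines (the IPs and requests are made up):

```shell
# Run the per-day pipeline over two sample log lines from the same day
counts=$(printf '%s\n' \
  '1.2.3.4 - - [20/Jan/2014:10:00:01 +0000] "GET / HTTP/1.1" 200 512' \
  '5.6.7.8 - - [20/Jan/2014:11:30:02 +0000] "GET /a HTTP/1.1" 200 128' \
  | awk '{print $4}' | cut -d: -f1 | uniq -c)
echo "$counts"
```

Both lines fall on 20/Jan/2014, so uniq -c reports a count of 2 for that date.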

View Apache requests per hour

grep "23/Jan" <log file> | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":00"}' | sort -n | uniq -c

Code breakdown:

grep "23/Jan" <log file> – use grep to show only hits from the given day in the Apache access log.
cut -d[ -f2 | cut -d] -f1 – use cut with the delimiter set to an opening bracket ([) to take the 2nd field, then cut again with the delimiter set to a closing bracket (]) to take the 1st field, which leaves just the timestamp.
awk -F: '{print $2":00"}' – use awk with the field delimiter set to a colon (:) to print the 2nd field, which is the hour, with ":00" appended.
sort -n | uniq -c – finally, sort the hours numerically and count them up with uniq -c.

You should get back something like this:

200 00:00
417 01:00
244 02:00
242 03:00
344 04:00
402 05:00
522 06:00
456 07:00
490 08:00
438 09:00
430 10:00
357 11:00
284 12:00
391 13:00
163 14:00

View Apache requests per minute

grep "23/Jan/2013:06" <log file> | cut -d[ -f2 | cut -d] -f1 | awk -F: '{print $2":"$3}' | sort -nk1 -nk2 | uniq -c | awk '{ if ($1 > 10) print $0}'


Code breakdown:

grep "23/Jan/2013:06" <log file> – use grep to show only hits from the 06th hour of the given day in the Apache access log.
cut -d[ -f2 | cut -d] -f1 – cut at the opening bracket ([) and take the 2nd field, then cut at the closing bracket (]) and take the 1st field, which leaves just the timestamp.
awk -F: '{print $2":"$3}' – with the field delimiter set to a colon (:), print the 2nd field (the hour) followed by the 3rd field (the minute).
sort -nk1 -nk2 | uniq -c – sort the hits numerically by hour and then by minute, and count them up with uniq -c.
awk '{ if ($1 > 10) print $0}' – finally, use an awk if statement to print only those minutes whose hit count (the 1st column) is greater than 10.

You should get back something similar to this:

12 06:10
11 06:11
16 06:12
13 06:20
11 06:21
12 06:28
12 06:30
16 06:31
14 06:39
11 06:40
15 06:52
32 06:53
43 06:54
14 06:55

Written by rationalspace

February 27, 2014 at 3:43 pm

Posted in Apache, Utilities

Improving website performance – high speed delivered!


Performance is a big thing. The faster the website, the better. In the world of the new-age internet, speed is not only an important factor in retaining users, but also matters from an SEO perspective: search engines give more and more weight to speed when ranking websites. A number of tools, like Pingdom and Google PageSpeed Insights, are available to help you analyse your site's performance.

Since I have been working on improving performance of our website for quite some time, I thought of jotting down all the pointers required to optimize websites in one place.

  1. Use Sprite
  2. Zip your content – Use Content Encoding Header
  3. Add expire headers
  4. Remove blocking javascripts – Place assets optimally – CSS on top, JS on bottom
  5. Reduce Cookie Size
  6. Serve static content from another domain/CDN
  7. Optimise CSS
  8. Minify JS and CSS
  9. Cache resources – Use the "Cache-Control" and "Expires" headers
  10. Render Google Ads asynchronously
  11. Compress images
  12. Don’t call ads in mobile responsively
  13. Minimise use of plugins in a CMS
  14. Optimise Queries
  15. Use APC Caching
  16. Add Character Set Header
  17. Add dimensions to images
  18. Load scripts asynchronously whenever possible
  19. Use Google PageSpeed Module
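For points 3 and 9 above, a minimal mod_expires sketch (the content types and lifetimes are illustrative choices, not recommendations):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    # Far-future lifetimes for static assets; tune these per asset type
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```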

Written by rationalspace

February 26, 2014 at 3:49 pm

Logging response time and request headers in apache


If you ever want to optimise your website for performance in a LAMP kind of setup, you might want to measure the response time observed by Apache.

This can be done quite simply by using the format string %D, which logs the time taken to serve the request in microseconds.

Your configuration directive would be:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined

Also, if you want to monitor your request headers you can do that by individually logging different headers. For example:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D \"%{Connection}i\" \"%{If-Modified-Since}i\" \"%{Expect}i\" \"%{Pragma}i\" \"%{Cache-Control}i\"" combined

Also, you can change LogLevel to debug. This will log, in error_log, the different modules involved in processing a request along with their output. It is quite interesting to watch mod_deflate gzip your content and report by how much it compressed it.
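Once %D is in the log, the numbers are easy to aggregate. A sketch that averages the last field over two fabricated log lines (in a real log, point awk at the access log and make sure %D really is the final field of your LogFormat):

```shell
# Average the %D (microseconds) field, assumed to be the last field
avg=$(printf '%s\n' \
  '1.2.3.4 - - [14/Nov/2013:10:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "curl" 1500' \
  '1.2.3.4 - - [14/Nov/2013:10:00:02 +0000] "GET /x HTTP/1.1" 200 512 "-" "curl" 2500' \
  | awk '{ total += $NF; n++ } END { printf "%.0f\n", total / n }')
echo "$avg"   # prints 2000
```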


Written by rationalspace

November 14, 2013 at 3:41 pm

Posted in Apache

Rotate Logs – Apache


rotatelogs is a simple utility for use with Apache's piped logfile feature. It supports rotation based on a time interval or on a maximum log size.

Here is what you can do for rotating your custom and error logs:

 ErrorLog "|/some path/apache2/bin/rotatelogs -l logs/error_log_%Y-%m-%d 86400"

This will rotate the error logs after one day (86400 seconds) and create files named by date.

CustomLog "|/some path/apache2/bin/rotatelogs -l logs/access_log_%Y-%m-%d 86400" combined

This will rotate the custom logs after one day (86400 seconds) and create files named by date.
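rotatelogs can also rotate on size instead of time. For example, to start a new file whenever the log exceeds 5 megabytes (the 5M threshold is an arbitrary illustration):

```apache
CustomLog "|/some path/apache2/bin/rotatelogs -l logs/access_log_%Y-%m-%d 5M" combined
```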

And yes, you need to restart apache to make this configuration take effect.

Written by rationalspace

September 30, 2013 at 12:27 pm

Posted in Apache, OpenSource Tech
