May 10 2012

Hardware-based SSL decryption allows web servers (Apache, nginx, Varnish) to focus on serving content.

At Lullabot, several of our clients have invested in powerful (but incredibly expensive) F5 Big-IP load balancers. One of the primary reasons for investing in an F5 is SSL offloading: converting external HTTPS traffic into normal HTTP traffic so that your web servers don't need to do the work themselves. HTTPS requests (and more specifically, the SSL handshake that starts the connection) are incredibly expensive, often on the order of at least 10 times slower than normal HTTP requests.

In a quick, largely unscientific test, here are two Apache Bench results against a stock Apache install, one with SSL and one without, just serving up a static text file:

ab -c 100 -n 100 http://localhost/EXAMPLE.txt
Requests per second:    611.21 [#/sec] (mean)
Time per request:       163.609 [ms] (mean)
Time per request:       1.636 [ms] (mean, across all concurrent requests)
Transfer rate:          816.54 [Kbytes/sec] received

ab -c 100 -n 100 https://localhost/EXAMPLE.txt
Requests per second:    48.52 [#/sec] (mean)
Time per request:       2060.857 [ms] (mean)
Time per request:       20.609 [ms] (mean, across all concurrent requests)
Transfer rate:          64.82 [Kbytes/sec] received

Yikes, HTTPS is 12 times slower than HTTP, not to mention more processor-intensive! Compared against nginx or Varnish the ratio is even worse, since they serve HTTP traffic even faster. Now, there are certainly ways to speed up SSL (using faster ciphers, for example), but the fact remains that SSL is expensive. By using hardware-level decryption at the load balancer, the web server software (or reverse-proxy software like nginx or Varnish) can focus on serving pages.
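For what it's worth, the "12 times" figure falls straight out of the two throughput numbers above; a quick sketch of the arithmetic:

```shell
# Derive the slowdown factor from the two "Requests per second" results.
http_rps=611.21
https_rps=48.52
awk -v a="$http_rps" -v b="$https_rps" 'BEGIN { printf "%.1fx slower\n", a / b }'
```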

Getting Started

For those not familiar with a Big-IP load balancer's administration, most of the configuration is done via a web interface, accessible via the device's IP address.

Big-IP Administration Page

The Big-IP Administrative interface

The navigation for the site is located in the left-hand column.

Adding SSL Certificates

The first thing you need to do to get SSL termination set up is to install the SSL certificate onto the machine. This is done by navigating to Local Traffic -> SSL Certificates -> Import. You must import the .key and the .crt files obtained from your Certificate Authority (e.g. Verisign, Comodo) separately, with the same "Name" property for both. Give your certificate and key a name, usually matching the domain name, such as "example-com" or "example-com-wildcard". Upload the .key file as a Key and the .crt file as a Certificate, both using the same value in the Name field.
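Before uploading, it's worth verifying that the key and certificate actually match. A common check is to compare the public-key modulus of each; the file names below are placeholders, and the self-signed pair is generated only so the example is self-contained:

```shell
# Stand-in for the CA-issued pair (in practice you already have these files).
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example.com' \
  -keyout example-com.key -out example-com.crt -days 1 2>/dev/null

# If these two digests differ, the key does not belong to the certificate.
openssl x509 -noout -modulus -in example-com.crt | openssl md5
openssl rsa  -noout -modulus -in example-com.key | openssl md5
```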

Adding an SSL Certificate


After finishing, the list of SSL Certificates should include your certificate and key as a single entry, meaning they're associated with each other.

After adding an SSL Certificate


Set up SSL Profile

Now that our SSL certificate is uploaded into the load balancer, we need to create an SSL profile that utilizes the certificate. Visit Local Traffic -> Profiles -> SSL -> Client. The term "Client" means traffic between the outside world and the load balancer (conversely "Server" means traffic between your internal servers and the load balancer). Click the "Create..." button to add a new profile.

Give your profile a name (it can be the same as the certificate if you like), such as "example-com-wildcard". Leave the Parent profile as the default "clientssl". Check the box for custom options, then select your Certificate and Key that should be used to communicate with your end-user browsers. Leave all the other defaults.

Adding the SSL Profile


Set up the Virtual Server

F5 load balancers use the concept of a "Virtual Server" to accept connections at a certain IP address and hostname. I won't go into the details here, and will assume you already have a Virtual Server set up for HTTP.

If you already have a Virtual Server for HTTPS, edit it. If not, create a new virtual server with these settings:

Name: [same as your HTTP virtual server, with "https" added somewhere]
Destination Type: Host
Destination Address: [same as your HTTP virtual server]
Service Port: 443, HTTPS
HTTP Profile: http
SSL Profile (Client): example-com-wildcard
SSL Profile (Server): None
SNAT Pool: [same as your HTTP virtual server]

Configuring the HTTPS Virtual Server


The most important part of this configuration is selecting an SSL Profile for the "Client" but not for the "Server". This is all that's needed to "enable" SSL termination. The F5 decrypts all incoming traffic no matter what; by selecting "None" for the Server-side profile, the traffic is simply not re-encrypted before being sent to the back-end servers.

Don't skip this: even with SSL termination enabled on this virtual server, you still need to point it at the correct location. If you're editing an existing virtual server, it is probably pointing at a pool of servers on port 443. In that case Apache will throw an error page, refusing to serve insecure HTTP pages over a secure port (443). To fix this (or to set it up, if this is a new virtual server), click the "Resources" tab on the new virtual server.

Under the "Load Balancing" section, select the same "Default Pool" option as your HTTP virtual server uses. This way both HTTP traffic and traffic that was formerly HTTPS arrive at the same port on your back-end servers.

Setting the Load Balancing Pool to match the HTTP Virtual Server


Identifying HTTPS traffic in your application

The only problem with this approach to SSL termination is that all traffic reaching your web application is now plain HTTP. This is a problem because security checks in the application will often enforce an HTTPS connection and attempt to redirect the user to HTTPS. To avoid redirects like this, we need to inform the web server that the request was originally encrypted over HTTPS, even though it no longer is.

To do this, it's recommended to set up an iRule that sets a special header. Visit Local Traffic -> iRules -> iRule List. Click the "Create..." button.

In the new iRule, give it a name such as "https-offloaded-header". In the rule contents, use the following code:

when HTTP_REQUEST {
  # Notify the backend servers that this traffic was SSL offloaded by the F5.
  HTTP::header insert "X-Forwarded-Proto" "https"
}

Save the iRule, then head back over to your virtual server under Local Traffic -> Virtual Servers -> Virtual Server List and click on your HTTPS virtual server. Under the "Resources" tab, click "Manage..." in the iRules section.

Move your new iRule from the "Available" list into the "Enabled" list. Moving it to the top of the rule list is also a good idea if you're doing any kind of HTTP/HTTPS redirects on your load balancer as setting headers after doing a redirect can cause pages to be undeliverable. Click "Finished" when done.

Now we're setting a special HTTP header on requests that have been SSL offloaded by the F5. In your application, check for this header in your code. For a PHP application, you may want to use it to set the $_SERVER['HTTPS'] super-global. More specifically for Drupal (since that's what we do here at Lullabot), you would probably include code like this in your settings.php file:

if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
  $_SERVER['HTTPS'] = 'On';
}

Since most PHP applications (including Drupal) check if they're on an HTTPS page by checking this variable, no further changes are necessary to the application.

If you're looking for more information on setting up an F5, Googling usually won't turn up much, because F5 has put the bulk of its documentation on a private site for which you must first register (for free, thankfully).

Updated 5/10/2012: Switched iRule to using "X-Forwarded-Proto".

Sep 22 2011

Tomorrow is the last day of Summer but the Drupal training scene is as hot as ever. We’ve scheduled a number of trainings in Los Angeles this Fall that we’re excited to tell you about, and we’re happy to publicly announce our training assistance program.

First, though, we’re sending out discount codes on Twitter and Facebook. Follow @LarksLA on Twitter, like Exaltation of Larks on Facebook or sign up to our training newsletter to get a 15% early bird discount* toward all our trainings!

Los Angeles Drupal trainings in October and November, 2011

Here are the trainings we’ve lined up. If you have any questions, contact us at trainings [at] larks [dot] la and we’ll be happy to talk with you. You can also call us at 888-LARKS-LA (888-527-5752).

Beginner trainings:

Intermediate training:

Advanced trainings:

All our trainings are $400 a day (1-day trainings are $400, 2-day trainings are $800, etc.). We’re excited about these trainings and hope you are, too. Here are some more details and descriptions.

Training details and descriptions

   Drupal Fundamentals
   October 31, 2011

Drupal Fundamentals is our introductory training that touches on nearly every aspect of the core Drupal framework and covers many must-have modules. By the end of the day, you’ll have created a Drupal site that looks and functions much like any you’ll see on the web today.

This training is for Drupal 7.

   Drupal Scalability and Performance
   October 31, 2011

In this advanced Drupal Scalability and Performance training, we’ll show you the best practices for running fast sites for a large volume of users. Starting with a blank Linux virtual server, we’ll work together through the setup, configuration and tuning of Drupal using Varnish, Pressflow, Apache, MySQL, Memcache and Apache Solr.

This training is for both Drupal 6 and Drupal 7.

   Drupal Architecture (Custom Content, Fields and Lists)
   November 1 & 2, 2011

Drupal Architecture (Custom Content, Fields and Lists) is our intermediate training where we explore modules and configurations you can combine to build more customized systems using Drupal. You’ll create many examples of more advanced configurations and content displays using the popular Content Construction Kit (CCK) and Views modules.

This training is for Drupal 6.

   Developing RESTful Web Services and APIs
   November 3, 4 & 5, 2011

Offered for the first time in Southern California, Developing RESTful Web Services and APIs is an advanced 2-day training (with an optional third day of additional hands-on support) for developers seeking an accelerated understanding of how to exploit Services 3.0 to its fullest. This is THE training you need if you’re using Drupal to create a backend for iPad, iPhone or Android applications.

This training covers both Drupal 6 and Drupal 7.

Training assistance program

In closing, we’d like to tell you about our training assistance program. For each class, we’re setting aside a limited number of seats for students, unemployed job seekers and people in need.

For more details about the program, contact us at trainings [at] larks [dot] la and we’ll be happy to talk with you. You can also call us at 888-LARKS-LA (888-527-5752) with any questions.

* Our early bird discount is not valid toward the Red Cross First Aid, CPR & AED training and 2-year certification that we’re organizing. It’s already being offered at nearly 33% off, so sign up today. You won’t regret it and you might even save someone’s life. ^

Aug 25 2011

Note: I am hosting a BoF at DrupalCon London about this: Join us in Room 333 on Thursday 25th August from 11:00 - 12:00 (second half).


Varnish is a fast - really fast - reverse proxy and a dream for every web developer. It transparently caches images, CSS/JavaScript files and content pages, and delivers them blazingly fast with very little CPU usage. Apache, on the other hand - the most widely used web server - can be a real bottleneck, taking a lot of CPU even to serve static HTML pages.

So that sounds like the perfect solution for our old Drupal 6 site here, right? (Or our new Drupal 7 site.)

We just add Varnish and the site is fast (for anonymous users) ...

The perfect solution?

But if you do just that, you'll be severely disappointed: Varnish does not work with Drupal 6 out of the box, and even with Drupal 7 you can run into problems with contrib modules. You also need to install an extra Varnish module and learn VCL. And if you have a module using $_SESSION for anonymous users, you have to find, debug and fix it, because by default Varnish will not cache a page if it sees any cookie. The reason is that Varnish can't know whether the output differs per cookie, which is actually true for the SESSION cookie in Drupal (logged-in users see different content from logged-out ones). On a stock (non-Pressflow) Drupal installation, that means no pages are cached at all.

So Varnish is just for experts then? Okay, then we'll go with just Boost and forget about Varnish. Boost takes only a simple installation and some .htaccess changes to get up and running. And we'll just add more Apache machines to take the load (10 machines should suffice - no?).

Not any longer! Worry no more: Here comes the ultimate drop-in Varnish configuration (based on the recent Lullabot configuration) that you can just add. With minimal changes, it'll work out of the box.

That means that if you have Boost running successfully and can change your Varnish configuration (and install Varnish on some server), you can run Varnish, too.

How to Boost your site with Varnish

And here are the very simple steps to upgrade your site from Boost to Boosted Varnish.

1. Download the Varnish configuration (boosted-varnish.vcl)
2. Install and configure Boost (follow README.txt or see the documentation on the Boost project page)
3. Set Boost to aggressively set its Boost cookie
4. Set up Apache to listen on port 8080
5. Set up Varnish to listen on port 80
6. Replace default.vcl with boosted-varnish.vcl
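Steps 4 and 5 usually come down to two small configuration edits. The exact file locations vary by distribution; the Debian-style paths below are assumptions, and the sed runs on a local example copy just to show the edit:

```shell
# Apache: change the Listen directive (normally in /etc/apache2/ports.conf)
# from port 80 to 8080. Demonstrated here against a local example copy.
printf 'Listen 80\n' > ports.conf.example
sed -i 's/^Listen 80$/Listen 8080/' ports.conf.example
cat ports.conf.example

# Varnish: in /etc/default/varnish (or /etc/sysconfig/varnish), have the
# daemon listen on :80 and load the new VCL, e.g.:
#   DAEMON_OPTS="-a :80 -f /etc/varnish/boosted-varnish.vcl"
```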

Now we need to tweak the configuration a little:

There is a field in Boost where you can configure pages that should not be boosted. We want to make sure those pages aren't cached in Varnish either.

In Boost this will just be a list like:

user
user/*
my-special-page
In Varnish we have to translate this into a regular expression. Find the corresponding lines in the configuration and change them:

##### BOOST CONFIG: Change this to your needs
       # Boost rules from boost configuration
       if (!(req.url ~ "^/(user$|user/|my-special-page)")) {
         unset req.http.Cookie;
       }
##### END BOOST CONFIG: Change this to your needs
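You can sanity-check the translated expression before loading it into Varnish. Here grep -E stands in for Varnish's regex matching, and the sample URLs are made up:

```shell
# URLs matching the pattern keep their cookies (excluded from caching);
# everything else has its cookies stripped and is cached.
pattern='^/(user$|user/|my-special-page)'
for url in /user /user/login /my-special-page /node/1 /user-profile; do
  if printf '%s' "$url" | grep -Eq "$pattern"; then
    echo "$url -> excluded from cache"
  else
    echo "$url -> cacheable"
  fi
done
```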

And that's it. Now Varnish will cache all boosted pages for at least one hour and work exactly like Boost - only much faster and much more scalable.

On one site we worked on, page requests took 4s under high load; this setup brought that down to 0.17s.

The only caveat to be aware of is that pages are cached for at least one hour, so content can take up to an hour to appear for anonymous users. But this can be set to 5 minutes instead, and you'll still profit from the Varnish caching. In general this setting is similar to the Minimum Cache Lifetime setting found in Pressflow.

The code line to change in boosted-varnish.vcl is:

##### MINIMUM CACHE LIFETIME: Change this to your needs
    # Set how long Varnish will keep it
    set beresp.ttl = 1h;
##### END MINIMUM CACHE LIFETIME: Change this to your needs

Even 5 minutes of minimum caching time gives a tremendous scalability improvement.

Actually, with this technique I can instantly make any site on the internet that runs Boost much, much faster: I just set the backend to its IP, set the hostname in the VCL, and my Varnish server will serve those pages. You could even share one Varnish instance across all of your sites and those of your friends, too. I experimented with EC2 micro instances and it worked, but for any serious site you should at least get a small one. I'll spare the details for another blog post, though - if there is interest in exploring this further.

How and Why it works

The idea of this configuration is quite simple.

Boost is a solution which works well with many, many contrib modules out of the box. With Varnish you need to use Pressflow or Drupal 7, and you need to make sure no contrib modules open sessions needlessly, which can be quite a hassle to track down. (Check out the varnish_debug sandbox module to make this task easier.)

But Boost's behavior and rules can be emulated in Varnish: whenever Boost would serve a static HTML page, Varnish can just as well serve a static object out of its cache.

And the property that distinguishes boosted from non-boosted pages is the DRUPAL_UID cookie set by Boost.

The cookies (and thus the anonymous SESSION) are removed whenever Boost would have served a static HTML page. In that case Drupal would never have seen those cookies in the first place, so we can safely remove them.

The second thing, which prevents Drupal from needlessly creating session after session, is a very simple rule:

If no SESSION cookie was sent to the web server, do not send one to the client. If a SESSION cookie was sent to the web server, return it to the client. This way SESSION cookies are only set on pages that are excluded from caching in Varnish, like the user/login pages or POST requests (e.g. forms). And since Drupal sees the pre-existing SESSION cookie, it does not need to create a new one.

To summarize those rules in a logic scheme:

# Logic is:
# * Assume: a cookie is set (we add __varnish=1 to the client request to make this always true)
# * If boosted URL -> unset cookie
# * If backend not healthy -> unset cookie
# * If graphic or CSS file -> unset cookie
# Backend response:
# * If no (SESSION) cookie was sent in, don't allow a cookie to go out, because
#   that would overwrite the current SESSION.

Why Boost and Varnish?

Now the question that could come up is: If I have Varnish, why would I need Boost anymore?

Boost has some very advanced expiration characteristics, which can be used to maintain a current, permanent cache of the site's pages on disk.

This can help pre-warm the Varnish cache after a Varnish restart. But as it turns out, you can use the stock .htaccess and Boosted Varnish will still work - as long as the DRUPAL_UID cookie is set. Possible further work would be a contrib module doing exactly that.

But Boost can also be really helpful in this particular configuration: you can set your Varnish cache to a minimum lifetime of, for example, 10 minutes, and instead of Drupal being hit every 10 minutes, Apache just happily serves the static HTML page Boost created until it expires.

The advantage of that is:

If there are 1000 requests to your front page, Apache is hit once and Varnish serves the cached page to all 1000 clients. So instead of Apache serving 1000 page requests, it only has to serve one every 10 minutes. Multiply that by page assets like images, CSS and JS files and you get big savings in traffic going to Apache.
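To put rough numbers on that claim (the traffic figures here are hypothetical):

```shell
# With a 10-minute TTL, Apache sees one miss per object per window instead
# of every request. Assume 1000 front-page hits and ~20 assets per page.
awk 'BEGIN {
  requests = 1000; assets = 20
  without_varnish = requests * (1 + assets)   # every object, every request
  with_varnish    = 1 + assets                # one miss per object per TTL
  printf "Apache requests without Varnish: %d\n", without_varnish
  printf "Apache requests with Varnish:    %d\n", with_varnish
}'
```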


Varnish is a great technology, but it has been difficult to configure and there are lots of caveats to think of (especially with Drupal 6). This blog post introduced a new technique called Boosted Varnish, which lets Varnish work with any site running the Boost module by temporarily adding pages to Varnish's active working set and fetching them back from the permanent Boost cache on disk as needed. It is not aimed at those already running high-performance Drupal sites on the Mercury stack, but at those who use Boost and want to make their site faster by putting Varnish in front of it without having to worry about Varnish specifics.

I created a sandbox project for reporting any issues related to the configuration.

Have fun with the configuration and I am happy to hear from you or see you tomorrow at my BoF session at DrupalCon London!

Sep 15 2010

Update 16th Sep: Scroll down to comments for some technical info about moving to InnoDB

Drupal has definitely matured into an enterprise-ready CMS. With this growth has come a need for high-performance solutions for sites that deal with high visitor traffic. This is where Pressflow really shines. Pressflow is a Drupal distribution created by Four Kitchens that aims to deliver high performance to Drupal users and webmasters by streamlining some of the bottlenecks found in core Drupal. We strongly recommend it to developers and system administrators who are facing heavy traffic. In fact, we now base all of our projects on the Pressflow version of Drupal.

Among the highlights of Pressflow’s improvements: it supports slave MySQL servers, implements a better locking framework, recommends InnoDB and supports lazy session handling, to name a few. Many of these features proved so useful that they made it into Drupal 7 core.

One of our clients’ sites experienced major traffic growth over the last few months, going from about 3 million to 10 million page views a month. To deal with this higher traffic, we decided to move from vanilla Drupal core and MyISAM to Pressflow and InnoDB. The migration was completed last weekend and the results are simply incredible. We expect even further improvement, since we can now also implement Varnish caching in the near future.
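The engine switch itself is a per-table ALTER statement; a minimal sketch with a placeholder table name (in practice you would script this over every MyISAM table and rehearse it on a copy of the database first):

```sql
-- Convert one table from MyISAM to InnoDB. Run during a maintenance
-- window: the table is locked while it is rebuilt.
ALTER TABLE node ENGINE=InnoDB;
```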


Since a picture is worth a thousand words, we'll share some of the server monitoring graphs to give you a more detailed understanding of the migration and its performance results.

The main bottleneck in our case was that the database server was bogged down by certain Views queries, the most complex of which had several JOINs. The database server couldn't handle the traffic peaks and would often become overloaded, with loads exceeding 50.

Slow queries

The slow queries in all cases simply disappeared. Day 12 is when we migrated to InnoDB, and after that the number of slow queries is zero.

MySQL Slow queries

Memory usage

The following chart shows the database server’s memory usage. The migration to InnoDB allowed us to better utilize the server’s 16GB of memory.


CPU & Load

What about CPU usage? You can see from the CPU usage graph that there are no more spikes, which had been causing the server to become unresponsive.


The same dramatic improvement is shown by the average load of the server.


Importance of monitoring

The last chart I’d like to share with you shows something important. The previous charts all show dramatic improvement, and most sysadmins would be happy to leave the server alone now with its improved performance.

We always look deeper to find the next issues, to make sure that we are ready for bottlenecks before they appear. As we were finishing our analysis, we saw that around day 10, when we moved to Pressflow, there was a sudden spike in INSERT queries. Sysadmins who are not used to reading logs and monitoring probably won’t know what is actually behind the monitoring statistics and wouldn’t notice this spike. After poking around a bit, we found that Pressflow changed how error_reporting behaves. These INSERT queries were inserts into watchdog, because the client’s site was generating notices and undefined-index warnings. This database load would have been mitigated by using the Syslog module instead of DBLog, which is definitely more appropriate on a site with this much traffic; still, there would be an onslaught of error messages to wade through. We deployed a quick hot fix by changing the PHP error reporting level, and made a ticket in our project management system to deal with the notices later in a more permanent way.
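The hot fix was at the PHP configuration level; a sketch of the idea (the exact reporting level you choose is situational):

```ini
; php.ini (or a conf.d override): stop reporting notices so they no longer
; flood watchdog with INSERTs. The permanent fix is to clean up the notices.
error_reporting = E_ALL & ~E_NOTICE
```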


The lesson

What is the lesson that we learned?

  1. Seriously consider Pressflow if you require truly high performance from Drupal.
  2. Don’t be afraid to move to InnoDB; in most cases it will deliver better performance (Drupal 7 defaults to it). A quick test on your development server won’t take much time.
  3. Server administrators and Drupal developers need more than just deep knowledge of the technologies they use on a regular basis - they need to be passionate about their work and understand the big picture of Drupal and high-performance web site hosting.

Thanks to Andrew Burcin for helping with this post.

