May 18 2016

Members of the British Council Digital team were delighted to receive the RITA 2016 award last Thursday for the huge change in IT cloud infrastructure that Ixis delivered in the summer of 2015.

The award for "Infrastructure as an Enabler" recognised the innovative change in the way the British Council approached hosting for the initial 120 sites, which operate in over 100 countries across the globe, and the clear business benefit this delivered. Moving away from dedicated infrastructure to virtual containers provided the ability to tightly control and guarantee server resources for an individual site, along with a quick and easy way to duplicate an environment for QA, testing, staging and feature branch development.

Ixis partnered with a Drupal container expert to provide the underlying infrastructure and API. We'll publish further detail on our integration as a case study.

Congratulations also go to another of our clients, Westminster City Council, for their award and a further two highly commended positions in this year's awards.

Photo courtesy of Chaudhry Javed Iqbal on Twitter.

May 18 2016

In this blog post I'll discuss some methods of ensuring that your software is kept up to date, and some recent examples of why you should consider security to be among your top priorities instead of viewing it as an inconvenience or hassle.

Critics often attack the stability and security of Open Source due to the frequent releases and updates as projects evolve through constant contributions to their code from the community. They claim that open source requires too many patches to stay secure, and too much maintenance as a result.

This is easily countered with the explanation that by having so many individuals working with the source code of these projects, and so many eyes on them, potential vulnerabilities and bugs are uncovered much faster than with programs built on proprietary code. It is difficult for maintainers to ignore or delay the release of updates and patches with so much public pressure and visibility, and this should be seen as a positive thing.

The reality is that achieving a secure open source infrastructure and application environment requires much the same approach as with commercial software. The same principles apply, with only the implementation details differing. The most prominent difference is the transparency that exists with open source software.

Making Headlines

Open Source software often makes headlines when it is blamed for security breaches or data loss. The most recent high profile example would be the Mossack Fonseca “Panama Papers” breach, which was blamed on either WordPress or Drupal. It would be more accurate to blame the firm itself for having poor security practices, including severely outdated software throughout the company and a lack of even basic encryption.

Mossack Fonseca were using an outdated version of Drupal: 7.23. This version was released on 8 August 2013, almost three years ago as of the time of writing, and has at least 25 known vulnerabilities. Several of these are incredibly serious, and were responsible for the infamous “Drupalgeddon” event which led to many sites being remotely exploited. The Drupal security team warned users that “anyone running anything below version 7.32 within seven hours of its release should have assumed they’d been hacked”.

Protection by Automation

Probably the most effective way to keep your software updated is to automate and enforce the process. Don’t leave it in the hands of users or clients to apply or approve updates. The complexity of this will vary depending on what you need to update, and how, but it can often be as simple as enabling the built-in automatic updates that your software may already provide, or scheduling a daily command to apply any outstanding updates.
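As an illustration of the second option, the nightly "apply everything outstanding" job can be a single cron entry. This is a sketch in /etc/cron.d format (where the sixth field is the user to run as); the 03:00 schedule is an arbitrary choice:

```
# Hypothetical /etc/cron.d/nightly-updates entry (Debian/Ubuntu):
# refresh package lists, then apply all outstanding upgrades non-interactively.
0 3 * * * root DEBIAN_FRONTEND=noninteractive apt-get update -q && apt-get upgrade -y -q
```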

Once you've got it automated (the easy part), you will want to think about testing these changes before they hit production systems. Depending on the impact of the security exploits that you're patching, it may be more important to install updates even without complete testing; a broken site is often better than a vulnerable site! You may not have an automated way of testing every payment permutation on a large e-commerce site, for example, but that should not dissuade you from applying a critical update that fixes a vulnerability exposing credit card data. Just be sure you aren't using this rationale as an excuse to avoid implementing automated testing.

The simple way

As a very common example of how simple the application of high-priority updates can be, most Linux distributions have a tried and tested method of automatically deploying security updates through their package management systems. For example, Ubuntu/Debian have the unattended-upgrades package, and Red Hat-based systems have yum-cron. At the very least, you will be able to schedule the system’s package manager to perform nightly updates yourself. This will cover the OS itself as well as any officially supported software that you have installed through the package manager. This means that you probably already have a reliable method of updating 95% of the open source software that you're using with minimal effort, and potentially any third-party software if you're installing from a compatible software repository. Consult the documentation for your Linux distro (or Google!) to find out how to enable this, and you can ensure that you are applying updates as soon as they are made available.
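On Debian/Ubuntu, for instance, turning on unattended-upgrades amounts to two apt configuration lines, normally written to /etc/apt/apt.conf.d/20auto-upgrades by `dpkg-reconfigure unattended-upgrades`:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```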

The complex way

For larger or more complex infrastructure, where you may be using configuration management software (such as Ansible, Chef, or Puppet) to enforce state and install packages, you have more options. Config management software will allow you to apply updates to your test systems first and report back on any immediate issues. If a service fails to restart, doesn't respond on the expected port after the upgrade, or anything else goes wrong, this should be enough to stop the changes reaching production until the situation is resolved. This is the same process that you should already be following for all config changes or package upgrades, so no special measures should be necessary.
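As a sketch of what this looks like in practice, here is a minimal Ansible play (the `staging_webservers` group name is an assumption) that applies updates and then checks the service still answers; if the `wait_for` check fails, the play stops before you promote the change to production:

```yaml
# Apply pending upgrades on the test systems, then verify the web service.
- hosts: staging_webservers
  become: true
  tasks:
    - name: Apply all pending apt upgrades
      apt:
        upgrade: dist
        update_cache: true

    - name: Confirm the site still responds on the expected port
      wait_for:
        port: 80
        timeout: 30
```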

Whether to make security updates a separate scheduled task or to implement them directly in your config management process will depend on your implementation, and it would be impossible to cover every possible method here.

Risk Management

Automatically upgrading software packages on production systems is not without risks. Many of these can be mitigated with a good workflow for applying changes (of any kind) to your servers, and confidence can be added with automated testing.


  • You need to have backups of your configuration files, or be enforcing them with config management software. You may lose custom configuration files if they are not flagged correctly in the package, or the package manager does not behave how you expect when updating the software.
  • Changes to base packages like openssl, the kernel, or system libraries can have an unexpected effect on many other packages.
  • There may be bugs or regressions in the new version. Performance may be degraded.
  • Automatic updates may not complete the entire process needed to make the system secure. For example, a kernel update will generally require a reboot, or multiple services may need to be restarted. If this does not happen as part of the process, you may still be running unsafe versions of the software despite installing upgrades.
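The last point is easy to check mechanically. On Debian/Ubuntu the update tooling drops a flag file when a reboot is needed; this is a minimal sketch, demonstrated against a temporary file rather than the real /var/run/reboot-required:

```shell
#!/bin/sh
# Sketch: detect a pending reboot the way Debian/Ubuntu tooling does --
# the update process writes a flag file after installing a new kernel.
# The function takes the flag path as a parameter so it can be demonstrated
# against a temporary file instead of /var/run/reboot-required.
reboot_pending() {
  [ -f "$1" ]
}

flag=$(mktemp)   # simulate the flag file for this demonstration
if reboot_pending "$flag"; then
  echo "reboot required"
else
  echo "no reboot needed"
fi
rm -f "$flag"
```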

Reasons to apply updates automatically

  • The server is not critical and occasional unplanned outages are acceptable.
  • You are unlikely to apply updates manually to this server.
  • You have a way to recover the machine if remote access via SSH becomes unavailable.
  • You have full backups of any data on the machine, or no important data is stored on it.

Reasons to NOT apply updates automatically

  • The server provides a critical service and has no failover in place, and you cannot risk unplanned outages.
  • You have custom software installed manually, or complex version dependencies that may be broken during upgrades. This includes custom kernels or kernel modules.
  • You need to follow a strict change control process on this environment.

Reboot Often

Most update systems will also be able to automatically reboot for you if this is required (such as a kernel update), and you should not be afraid of this or delay it unless you're running a critical system. If you are running a critical system, you should already have a method of hot-patching the affected systems, performing rolling/staggered reboots behind a load-balancer, or some other cloud wizardry that does not interrupt service.

Decide on a maintenance window and schedule your update system to use it whenever a reboot is required. Have monitoring in place to alert you in the event of failures, and schedule reboots within business hours wherever possible.
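With unattended-upgrades, for example, the maintenance-window reboot is two more configuration lines in /etc/apt/apt.conf.d/50unattended-upgrades (the 02:00 window is an arbitrary example):

```
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```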

Drupal and Other Web-based Applications

Most web-based CMS software, such as Drupal and WordPress, offers automated updates, or at least notifications. Drupal security updates for both core and contributed modules can be applied with Drush, which can in turn be scheduled easily using cron or a task runner like Jenkins. This may not be a solution if you follow anything but the most basic of deployment workflows and/or rely on a version control system such as Git for your development (which is where these updates should go, not straight to the web server). Having your production site automatically update itself will mean that it no longer matches what you deployed or what is in your version control repository, and it will bypass any CI/testing that you have in place. It is still an option worth considering if you lack all of these things, or just want to guarantee that your public-facing site is getting patches as a priority over all else.

You could make this approach work by serving the Git repo as the document root, updating Drupal automatically (using Drush in 'security only' upgrade mode on cron), then committing those changes (which should not conflict with your custom code/modules) back to the repo. Not ideal, but better than having exploitable security holes on your live servers.
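A rough sketch of that cron job (the docroot path and branch are assumptions for illustration; `--security-only` is the Drush pm-update flag for security-only upgrades):

```
# Hypothetical user crontab entry: apply security-only Drupal updates at
# 02:30, then commit them back so the repo keeps matching the docroot.
30 2 * * * cd /var/www/docroot && drush pm-update --security-only -y && git add -A && git commit -m 'Automated security updates' && git push origin master
```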

If your Linux distribution (or the CMS maintainers themselves) provide the web-based software as a package, and security updates are applied to it regularly, you may even consider using their version of the application. You can treat Drupal as just another piece of software in the stack, and the only thing that you're committing to version control and deploying to servers is any custom modules to be layered on top of the (presumably) secure version provided as part of the OS.

Some options that may fit better into the common CI/Git workflows might be:

  • Detect, apply, and test security patches off-site on a dedicated server or container. If successful, commit them back to version control to your dev/integration branch.
  • Check for security updates as part of your CI system. Apply, test and merge any updates into your integration branch.

Third-party Drupal Modules (contrib)

Due to the nature of contrib Drupal modules (i.e., those provided by the community), it can be difficult to update them without also bringing in other changes, such as new features (and bugs!) that the author may have introduced since the version you are currently running. Best practice is to keep all of the contrib modules that the site uses up to date where possible, and to treat this with the same care and testing as you would updates to Drupal itself. Contrib modules often receive important bug fixes and performance improvements that you may be missing out on if you only ever update in the event of a security announcement.


  • Ensure that updates are coming from a trusted and secure (SSL) source, such as your Linux distribution's packaging repositories or the official Git repositories for your software.
  • If you do not trust the security updates enough to apply them automatically, you should probably not be using the software in the first place.
  • Ensure that you are alerted in the event of any failures in your automation.
  • Subscribe to relevant security mailing lists, RSS feeds, and user groups for your software.
  • Prove to yourself and your customers that your update method is reliable.
  • Do not allow your users, client, or boss to postpone or delay security updates without an incredibly good reason.

You are putting your faith in the maintainers' ability to provide timely updates that will not break your systems when applied. This is a risk you will have to take if you automate the process, but it can be mitigated through automated or manual testing.

Leave It All To Somebody Else

If all this feels like too much responsibility and hard work then it’s something Ixis have many years of experience in. We have dedicated infrastructure and application support teams to keep your systems secure and updated. Get in touch to see how we can ensure you're secure now and in the future whilst enjoying the use and benefits of open source software.

Mar 20 2015

Not so long ago, many of us were satisfied handling deployment of our projects by uploading files via FTP to a web server. I was doing it myself until relatively recently and still do on occasion (don’t tell anyone!). At some point in the past few years, demand for the services and features offered by web applications rose, team sizes grew and rapid iteration became the norm. The old methods for deploying became unstable, unreliable and (generally) untrusted.

So was born a new wave of tools, services and workflows designed to simplify the process of deploying complex web applications, along with a plethora of accompanying commercial services. Generally, they offer an integrated toolset for version control, hosting, performance and security at a competitive price. One newer player on the market was built by the team at Commerce Guys, who are better known for their Drupal eCommerce solutions. Initially, the service only supported Drupal-based hosting and deployment, but it has rapidly added support for Symfony, WordPress, Zend and ‘pure’ PHP, with Node.js, Python and Ruby coming soon.

It follows the microservice architecture concept and offers an increasing amount of server, performance and profiling options to add and remove from your application stack with ease.

I tend to find these services make far more sense with a simple example. I will use a Drupal platform as it’s what I’m most familiar with. The service has a couple of requirements that vary for each platform. In Drupal’s case they are:

  • An id_rsa public/private key pair
  • Git
  • Composer
  • The CLI
  • Drush

I won’t cover installing these here; more details can be found in the documentation section.

I had a couple of test platforms created for me by the team, and for the sake of this example, we can treat these as my workplace adding me to some new projects I need to work on. I can see these listed by issuing the platform project:list command inside my preferred working directory.

Platform list

Get a local copy of a platform by using the platform get ID command (The IDs are listed in the table we saw above).

This will download the relevant code base and perform some build tasks, any extra information you need to know is presented in the terminal window. Once this is complete, you will have the following folder structure:

Folder structure from build

The repository folder is your code base and here is where you make and commit changes. In Drupal’s case, this is where you will add modules, themes and libraries.

The build folder contains the builds of your project, that is, the combination of Drupal core plus any changes you make in the repository folder.

The shared folder contains your local settings and files/folders, relevant to just your development copy.

Last is the www symlink, which will always reference the current build. This would be the DOCROOT of your vhost or equivalent file.

Getting your site up and running

Drupal is still dependent on having a database present to get started, so if we need it, we can grab the database from the platform we want by issuing:

platform drush sql-dump > d7.sql

Then we can import the database into our local machine and update the credentials in shared/settings.local.php accordingly.

Voila! We’re up and working!

Let’s start developing

Let’s do something simple: add the Views and Features modules. The service uses Drush make files, so it’s a different process from what you might be used to. Open the project.make file and add the relevant entries to it. In our case:

projects[ctools][version] = "1.6"
projects[ctools][subdir] = "contrib"

projects[views][version] = "3.7"
projects[views][subdir] = "contrib"

projects[features][version] = "2.3"
projects[features][subdir] = "contrib"

projects[devel][version] = "1.5"
projects[devel][subdir] = "contrib"

Here, we are setting the projects we want to include, the specific versions, and which subfolder of the modules folder we want them placed in.

Rebuild the platform with platform build. You should notice the devel, ctools, features and views modules being downloaded, and we can confirm this with a quick visit to the modules page:

Modules list page in Drupal

You will notice that each time we issue the build command, a new version of our site is created in the builds folder. This is perfect for quickly reverting to an earlier version of our project in case something goes wrong.

Now, let’s take a typical Drupal development path, create a view and add it to a feature for sharing amongst our team. Enable all the modules we have just added and generate some dummy content with the Devel Generate feature, either through Drush or the module page.

Now, create a page view that shows all content on the site:

Add it to a feature:

Uncompress the archive created and add it into the repository/modules folder. Commit and push this folder to version control. Now any other team member running the platform build command will receive all the updates they need to get straight to work.

You can then follow your normal processes for getting module, feature and theme changes applied to local sites, such as update hooks or profile-based development.

What else can it do?

This simplifies the development process amongst teams, but what else does the service offer to make it more compelling than other similar options?

If you are an agency or freelancer that works on multiple project types, the broader CMS/Framework/Language support, all hosted in the same place and with unified version control and backups, is a compelling reason.

With regards to version control, the service provides visual management and a record of your Git commits and branches, which I always find useful for reviewing code and the status of a project. Apart from this, you can create snapshots of your project, including code and database, at any point.

When you are ready to push your site live, it’s simple to allocate DNS and domains all from the project configuration pages.

Performance, Profiling and other Goodies

By default, your projects have access to integration with Redis, Solr and EntityCache/AuthCache. It’s just a case of installing the relevant Drupal modules and pointing them at the built-in server details.

For profiling, the platform has just added support for SensioLabs Blackfire: all you need to do is install the browser companion, create an account, add your credentials, and you’re good to go.

Backups are included by default as well as the ability to restore from backups.

Team members can be allocated permissions at project and environment levels, allowing for easy transitioning of team members across projects and the roles they undertake in each one.

The service offers some compelling features over its closest competition (Pantheon and Acquia), and pricing is competitive. The main decision to be made with all of these SaaS offerings is whether the restricted server access and ‘way of doing things’ is a help or a hindrance to your team and its workflows. I would love to know your experiences or thoughts in the comments below.

Oct 25 2014

We are delighted to be working with the British Council on a new Drupal hosting and infrastructure support project. The British Council are valued clients, and we have worked with them for more than six years, managing both the global suite of 150 country sites and the prestigious suite of Drupal teaching and learning sites.

We will be working to create four individual platforms for hosting key Drupal websites, moving away from a single main infrastructure, to improve resilience and efficiency and to increase availability for the sites, which generate more than 35 million page impressions per month and are used by more than 65 million people each year.

We will also continue to provide 24/7 Drupal website support for the sites, which include Learn English, Teaching English and Premier Skills English.

Martin Heineberg, websites and media coordinator for the British Council, said: “Having worked with Ixis for more than six years, we knew we could rely on them to provide the right solution for hosting the websites. As Drupal hosting and support experts, they are always quick to respond and resolve any rare instances involving technical issues.”

We are looking forward to getting started with this hosting and support project, which will be one of the largest Drupal infrastructures in the UK.

Read more on the Prolific North website.

Jul 03 2013

Installing an SSL certificate on Aegir 1.x is a bit tricky, but definitely manageable. It is easy to bring all your sites down if you don’t follow these exact steps: running a “verify” task on your site, either intentionally or inadvertently, will rewrite your SSL config and attempt to restart Apache, which, if everything is not set up perfectly, will cause fatal errors upon restart and lock you out of your Aegir control panel until they are fixed. Check out the latest documentation from the Aegir project before following this documentation to ensure that everything matches, but do note the specifics pointed out in the steps below:

1.) Let Aegir generate its own self-signed certificate, which will create two files in /var/aegir/config/server_master/ssl.d/SITENAME.COM/:

aegir@server-prod:/var/aegir/config/server_master/ssl.d/SITENAME.COM$ ls

openssl.crt <– Replace this Aegir-generated file with the X.509 cert file from the cert provider.

openssl.key <– Replace this with the .key file generated for the CSR. Or, even better, just use this file rather than generating the CSR yourself.

2.) Add the bundle / intermediate file to this same directory, but you must name it “openssl_chain.crt” in order for Aegir to recognize or “import” this file.

 openssl_chain.crt <– This is the bundle / intermediate cert file from the cert provider, which must be created manually. It *must* be named exactly like this.

3.) If you don’t follow these exact steps, or forget to add the bundle / intermediate cert file, when Aegir runs a “verify” task on your site again (or if you run it yourself), Apache will fail to restart, and all your sites will go down! So, before you run a “verify” task on the site on the front-end to complete the installation, be prepared to comment out the SSL settings for your site at /var/aegir/config/server_master/apache/vhost.d/SITENAME.COM, restart apache, fix your errors, and start over.
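A sketch of that recovery step, shown here against a temporary copy rather than the live vhost (the real file lives at /var/aegir/config/server_master/apache/vhost.d/SITENAME.COM): comment out the SSL directives with sed, then restart Apache once the config parses again.

```shell
#!/bin/sh
# Recovery sketch: comment out every SSL* directive in the vhost so Apache
# can restart, then fix the certs and re-run the "verify" task.
# We operate on a temp copy here purely for demonstration.
vhost=$(mktemp)
cat > "$vhost" <<'EOF'
SSLEngine on
SSLCertificateFile /var/aegir/config/server_master/ssl.d/SITENAME.COM/openssl.crt
EOF

sed -i 's/^[[:space:]]*SSL/# &/' "$vhost"
cat "$vhost"
# After editing the real file you would run:
#   apache2ctl configtest && apache2ctl graceful
```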

Nov 05 2012

Frustration has motivated me to write this post, which largely covers the issues I have with Pantheon. I will address the good things first.

The good bits (Some of which you may already know)

Innovative: I feel Pantheon is doing a great thing. They're helping Drupal developers simplify their workflow. They've created a service that stands above most other hosting services that currently exist. Additionally, Pantheon exudes the culture of listening to the Drupal community and continually trying to improve their product.

User Interface: For the most part, the user interface is pretty good. You're able to find things that you need without too much searching. It's not perfect and it could be improved.

Multi-tenant deployment is cheap: Deploying a multi-tenant structure is very easy and takes little time and effort. It's just a couple of clicks away. You upload your database, then upload your files folder, and by the time you've fetched your favorite warm morning beverage you have a new Drupal installation.

Upstream updates: Being able to run updates for Drupal core is awesome. A single click allows you to update code quickly without having to click through the interface, although Drush is equally sufficient for this task.

Quick backup: With one click you're able to execute a complete backup. The backup is also conveniently compressed for quick download.

The bad bits

Pulling in changes: Moving from dev/staging/live requires clicking. I may be in the minority, but I really enjoy using Git purely through the command-line interface. Having to pull in changes in your testing environment by clicking can get old quickly.

Drush / SSH-less access: While some innovative folks in the contributed module space have created a Drupal project for Drush access, it still limits what you can do. I understand the limitations exist due to serious security concerns, but without Drush or SSH access it can often be a burden for Drupal developers. I would much prefer the ability to sync databases and files through a command-line interface. I know Pantheon does a great job of creating user interfaces to replace this, but being able to `drush cc all` is superior in my opinion.

Working the Pantheon way: Using Pantheon, you're tied to a specific workflow. This includes exporting the database, downloading it from the browser and then importing it onto your machine. This is OK the first 10 times, but after a while it gets quite old. I would much rather use `drush sql-sync` for this.

New Apollo interface: The new Apollo interface has too many tabs and fancy dropdowns. Changing the environment now requires clicking twice: click the dropdown, pick your environment, then pick the tab on the left side. Someone went a little crazy with Twitter Bootstrap. I would rather see a longer page; the tabs/dropdowns often obscure where you need to go and what you need. You also have to re-learn another workflow to make it all work, which is a slight curveball.

503 errors: This issue was the most problematic. On one of our website setups, it produced an unhelpful 503 error every time you visited the features page or tried to clear the cache. This instantly became an impediment to our team's productivity. We've posted a ticket; however, the ticket process has been rather slow. Different techs have come in, passed the issue along and escalated it each time, but we have yet to have a resolution. (We're on day 7 of this problem.) Being able to call, wait an hour or two, and get it resolved then would be a more efficient use of my time, especially when something like this is blocking our project.

In the end it's up to you

Overall, it all depends on the workflow and tastes of the Drupal developer. Pick and choose what works for you. For some people, Pantheon is the right service and tool for the job. For me, I would much prefer more granularity and control. I really want Pantheon to succeed; it is fulfilling a need that exists in the Drupal community. Hopefully they'll continue to improve their product, and I'll give them another shot later on. At the moment, it's not what I'm looking for.

Do you agree, disagree or have comments? Please let us know below.

May 31 2012

[box type=”note”]Unless you are using MediaTemple, you should try the directions to install Drush using Pear on a shared host first, as that method is preferred.[/box]

A while back I wrote a quick guide to getting Drush up and running on a shared DreamHost account, and it was great to see lots of folks taking advantage of the power of Drush on a more commodity (read: cheap) host. But what if you’re running on a slightly more expensive and more featured host like MediaTemple’s Grid offering? Well, you can run Drush there too, and I’ll show you how.

First off, we’re going the more manual route this time, since I’ve had more success with it on MT. So head on over to the Drush project page and copy the link to the latest tar.gz version of Drush to your clipboard. Then log into your MT (gs) instance (or just run `cd` if you already have an SSH session open to your (gs)), create a directory called bin in your home directory, and enter it:

mkdir ~/bin
cd ~/bin/

Next, download and extract the tarball of Drush you copied earlier:

tar xvpfz drush-*.tar.gz

Now, make a new folder in your home directory called .drush and enter it:

mkdir ~/.drush
cd ~/.drush/

Because of the more tightened security on the default CLI version of PHP MediaTemple provides we’re going to need to override some configuration values to suit us. In the ~/.drush/ directory make a file named drush.ini and place in it the following values (I prefer to use nano drush.ini for this):

memory_limit = 128M
error_reporting = E_ALL | E_NOTICE | E_STRICT
display_errors = stderr
safe_mode =
open_basedir =
disable_functions =
disable_classes =

Lastly, we need to add Drush into our path and reload the BASH configuration:

echo 'export PATH="$HOME/bin/drush:$PATH"' >> ~/.bash_profile
source ~/.bash_profile

That’s it! You now have a fully working Drush on MediaTemple; get out there and `drush up`. Go on, I know you want to :)

tl;dr: Install the latest 5.x-dev Drush on your (gs) by running this long command, or replace the highlighted tar.gz with the version you want:
mkdir ~/bin;cd ~/bin/;wget;tar xvpfz drush-*.tar.gz;mkdir ~/.drush;cd ~/.drush/;echo -e "memory_limit = 128M\nerror_reporting = E_ALL | E_NOTICE | E_STRICT\ndisplay_errors = stderr\nsafe_mode =\nopen_basedir =\ndisable_functions =\ndisable_classes =" >> drush.ini;echo 'export PATH="$HOME/bin/drush:$PATH"' >> ~/.bash_profile;source ~/.bash_profile

Updated 2/14/2013: Dreamhost made some config changes that required a different PATH variable for this method to work. The instructions above have been updated to reflect this. Editing drush.ini is now suggested to be done with nano, since some folks were having issues with the quad-chevron approach.

[box type=”note”]

Robin Monks has a passion for openness and freedom in technology and he’s spent the last 8 years of his life developing, supporting and maintaining open source software. He’s part of the panel of Drupal community experts who authored The Definitive Guide to Drupal 7 and currently provides independent consulting services. Reach him at 1-855-PODHURL.[/box]


Feb 25 2012

[box type=”note”]If you have issues with these directions, please read the alternate directions for installing on a shared host manually without using Pear. MediaTemple is one provider where you must use the alternate directions to get Drush working.[/box]

One of the easiest and most convenient ways to get your hands on Drupal is on a shared hosting environment. It’s pretty much where everyone is going to start from unless you happen to have a giant long-term budget for your own hardware.

You have a number of options when it comes to hosting and at one time or another I’ve tried a lot of them. Three companies that have managed to keep me the happiest though are A Small Orange (use coupon 15-off to get 15% off; no affiliate), DreamHost (no affiliate) and MediaTemple. I’ve found DreamHost in particular to be pretty impressive when it comes to hosting a lot of one-off sites on my very inexpensive shared plan. I even use it when rapid prototyping for clients.

The best part though is that with DreamHost I can use SSH and Drush the way I normally would on a dedicated or virtual server. Set up is a bit more complex than normal, but once it’s set up for your user it’s smooth sailing.

Before you can do any of this, you will need to make sure you enable SSH for the user you assigned to the domain you want to use Drush with. You can find these options under the Manage Users section of the DreamHost panel. From there open up your favorite SSH client and log into your site (PuTTY works well on Windows; you can just run ssh from the command line on Mac or Linux).

Since you’re in a locked-down shared hosting environment, you obviously can’t make system-wide changes. The easiest way to install Drush is through PEAR. PEAR, by default, wants to make system-wide changes. Since we can’t do that in a shared environment, our first order of business will be to set up a new instance of pear just for our user.

Once logged in, and in your user’s home folder, run:
pear config-create ${HOME} ${HOME}/.pearrc
pear install -o PEAR

Now you have your own instance of pear, but you still can’t use it yet. To fix this, we will edit our .bash_profile to add the path to the most recent PHP version DreamHost provides, and to put our pear directory early in this user’s PATH.

Open ~/.bash_profile in your favorite editor:
nano ~/.bash_profile

And add these lines at the bottom of the file:
export PHP_PEAR_PHP_BIN=/usr/local/php53/bin/php
export PATH=${HOME}/pear:/usr/local/php53/bin:${PATH}

Finally, re-load the file so the changes will take effect immediately:
. ~/.bash_profile
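The reason our pear directory goes at the front of PATH is precedence: the shell resolves a command to the first matching directory, scanning left to right. Here's a tiny self-contained sketch of that behaviour you can try anywhere (demo-tool is a made-up name; ~/pear matches the directory created above):

```shell
# A user-local directory placed first on PATH shadows any system-wide tool
# of the same name -- this is what lets our private pear and php win.
mkdir -p "$HOME/pear"
printf '#!/bin/sh\necho "user-local copy"\n' > "$HOME/pear/demo-tool"
chmod +x "$HOME/pear/demo-tool"
export PATH="$HOME/pear:$PATH"
command -v demo-tool   # resolves to $HOME/pear/demo-tool
demo-tool              # prints: user-local copy
```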

Now we have the newest version of PHP and our custom version of pear available to us. To check this out, you can run these commands:
which php
which pear

Pretty cool, eh? Lastly, just run the usual commands to install Drush:
pear channel-discover
pear install drush/drush

Now you can use drush from any site under this user as you would anywhere else (try running `drush help` for some cool examples). You can repeat this process for any other SSH users on your account. A one line tl;dr version of this process is included below for your automated and repetitive pleasure. Enjoy!

tl;dr: Get drush running on DreamHost by running this line in SSH:
pear config-create ${HOME} ${HOME}/.pearrc;pear install -o PEAR;echo "export PHP_PEAR_PHP_BIN=/usr/local/php53/bin/php" >> ~/.bash_profile;echo 'export PATH=${HOME}/pear:/usr/local/php53/bin:${PATH}' >> ~/.bash_profile;. ~/.bash_profile;pear channel-discover;pear install drush/drush



Jan 03 2012
Jan 03

Tips for Acquia Hosting Development

Posted on: Tuesday, January 3rd 2012 by Brandon Tate

Here at Appnovation, we frequently use the Acquia hosting platform for our clients. The Dev Cloud and Managed Cloud are impressive platforms that fit well for many Drupal sites being built today. I’ve listed some items below that have helped with the overall build quality and ease of use for these platforms.

Utilities - Drush aliases

Anyone who develops with Drupal should know about Drush by now. Acquia has gone ahead and installed Drush and the Drush aliases on your Acquia Hosting account automatically. For example, you can run the command “drush @sitename.stg cc all” from your local computer to clear all the caches within the Drupal site without having to log into the server for it to work! You can find the aliases under Sites > Managed (or Dev) Cloud > Utilities. After downloading the file to your $HOME/.drush/ directory and setting up an SSH key, you're good to go.

Enabling Git

Acquia uses SVN as its primary repository system. However, some of us like to use Git for its enhanced speed over SVN. Not a problem since Acquia supports both. Switching to Git is easy and can be found under Sites > Dev Cloud > Workflow. At the top of the page you’ll find an SVN URL by default with a gear icon located next to it. If you click that icon a drop down will appear and the option to switch to Git will be displayed. Simply click this link and your repository will switch from SVN to Git. For Managed Cloud platforms, you just have to request Git by filing a support ticket through Acquia and they will switch it for you.

Varnish and Pressflow
Acquia Dev Cloud and Managed Cloud implement Varnish to speed things up. To take advantage of this I recommend you use Pressflow. The installation and setup are exactly the same as Drupal’s, and Pressflow is 100% compatible with Drupal core modules. There are some issues with contrib modules, so make sure you check the functionality of these modules before assuming they work. Once Pressflow is installed and set up on the Acquia server, you’ll notice you have a new option under Performance called “external”. This allows Varnish to handle caching instead of Drupal’s stock functionality.

Mobile sites and Varnish

I recently worked on a site that was using Varnish to speed up performance. The site was also configured to use a different theme when a mobile device was detected. I was using the global variable $custom_theme to switch the theme when a mobile device was found. However, when this solution was implemented with Varnish we noticed that sometimes the mobile site would get served to a desktop browser and vice versa. The problem was that Varnish was caching the page and not hitting the code that determines whether the device is mobile. To correct this, we needed to switch from using the global $custom_theme variable to using Drupal’s multi-site folder architecture. To do this, you need to add a mobile site folder to the sites directory within Drupal. Then within that site’s settings.php file add the following line.

$conf = array('theme_default' => 'mobile_theme');

This will force the site to use the mobile theme. Next, you’ll have to add the domain to the Acquia Hosting platform which is located under Sites > Managed (or Dev Cloud) > Domains. After that, you’ll have to get in contact with Acquia support so that they can configure Varnish to detect mobile devices and redirect to the correct sites folder.

Memcache setup

Memcache is another technology that speeds up the performance of Drupal. To enable this, you must first download and install the Memcache module. After this, you can add the following lines into your settings.php file.

if (!empty($conf['acquia_hosting_site_info']) && !empty($conf['memcache_servers'])) {
  $conf['cache_inc'] = './sites/all/modules/contrib/memcache/';
  $conf['memcache_key_prefix'] = 'domainname';
  $conf['memcache_bins'] = array(
    'cache' => 'default',
  );
}

First, I’d like to point out the memcache_key_prefix, which must be unique across the different environments, such as development and production. This prevents the cache tables from overwriting each other. Next, the memcache_bins key allows you to cache specific cache tables from Drupal. In the example above, I’ve left it at "default", but you could cache specific tables such as views, blocks, etc.

Workflow - Lock out the files folder

Acquia has a drag and drop interface that allows the developer to drag the codebase, files and database between development, staging and production environments. This allows you to easily migrate your site between any of the servers. One thing I want to point out: once your site is on the production server, it’s a good idea to lock the environment by setting it to Production Mode. This prevents the files directory and database from being overwritten, and also allows you to pre-release the site to the client, letting them add content while you update the code base. To do this, go to Sites > Managed (or Dev) Cloud > Workflow. There will be a “Prod” section with a gear beside it; clicking that gear will provide a link to switch the site to Production Mode. This can be easily reverted as well by clicking the gear and reverting the site to Pre-Launch mode.

Those are some tips that I’ve discovered over my use of Acquia that have eased the development and deployment of a Drupal site on their hosting platforms. If you have any questions or tips of your own, feel free to comment.

Nov 02 2011
Nov 02

How many times have the following issues happened on a project you've worked on?

  • Notices (or worse) appeared on production because of a PHP version mismatch between a developer's machine and the production web servers.
  • A new PHP extension or PECL extension had to be installed on production because it had been installed in WAMP or MAMP.
  • A team member ran into difficulty setting up their local environment and spent many hours stuck on something.
  • Team members didn't set up SSL or Varnish on their local machines and issues had to be caught on a dev server.
  • A team member would like to switch to Homebrew, but can't set aside the many hours to redo their setup until a project is done.

Tools like MAMP, XAMPP, the Acquia Dev Desktop, MacPorts and Homebrew all make it easy to get an *AMP stack up and running on your computer, and tools like MacPorts and Homebrew even make it pretty easy to install tools like Varnish and memcached.

While these tools make it easy to run a very close approximation of the production hosting stack on your local machine (arguably closer if you use a Mac or Linux), it will still have some key differences which will ultimately contribute at some point to a "Works on My Machine!" situation in your project.


Luckily, virtualization has advanced to such a degree that there are cross-platform virtualization solutions such as VirtualBox, but just using a VM inside of VirtualBox doesn't solve the whole problem. It makes acquiring the correct versions of software easy, but keeping configuration in sync can still be a challenge for users who are not deeply familiar with Linux.

Vagrant is a Ruby gem that makes working with Linux virtual machines easy. You distribute a Vagrantfile to your team, and it does the following things for you:

  • Downloads and sets up virtual machines from a single .box file which it will download over HTTP or FTP.
  • Provisions the software and configuration on the VM using your choice of Chef, Puppet, or simple shell scripts
  • Automatically shares the directory with the Vagrantfile (and any subdirectories) to the virtual machine with Virtualbox's built-in file sharing
  • Forwards the SSH port (and optionally other ports) to your localhost and avoids collisions so you can always directly SSH to the machine
  • Optionally sets up a dedicated host-only IP address that you can use to connect to all services on the VM without port forwarding
  • Optionally shares directories to the VM over NFS from a Macintosh or Linux host, which enables acceptable performance for a Drupal docroot

Since Vagrant handles the file sharing with the VM, you and your team don't have to mess around with setting up FUSE or the like, and you can continue to use the tools that you're used to using locally, such as your text editor or graphical source control program.

In addition, so long as you have a single developer skilled in ops who can encapsulate the production configuration into a system like Chef or Puppet, these changes can be pushed down to the whole team. Once your ops team has a working Varnish configuration, for example, they can push that into the Vagrant repository, and then a working Varnish setup on all your developers' VMs is just a git pull and a vagrant provision away.

We've been working with Vagrant over the last few months and think it offers a number of advantages. All it takes to get started is installing VirtualBox and the vagrant Ruby gem. Detailed information on how to get started is available in the excellent Vagrant Quickstart guide.

I've put together a screencast that's just over 10 minutes long and shows the whole process of bringing up a CentOS 5.6 VM with the site shared from my local machine.

We'll be posting more example code over the coming weeks that will allow you to try out Drupal from your local machine on a Linux VM.

Jun 28 2011
Jun 28

Cadre Web Hosting is sending someone to Drupalcon London 2011. Will it be you?

That's right, Cadre is sending one lucky winner to London to attend Drupalcon all on our dime. Since we are Silver Sponsors this year, we decided that we'd give away a conference registration along with airfare and lodging for the week of Drupalcon. All you have to do is be from the US, Canada or Europe and gain the most points during the contest. Simple, right?

Here's how it works:

Step one: Submit the contest entry form

Step two: Activate your contest page by following the activation link we'll send you after submitting the entry form

Step three: Run your contest! Send your contest page around to everyone you know. For each tweet or facebook post someone does through your page you get one point. Folks can do this once per day, so be sure to get your crew to come back so you can stay in the lead. We'll be sending out additional updates with other ways to win points as the contest continues, so stay tuned for those announcements!

Step four: The contestant with the most points wins!

In addition to the grand prize, we will also be giving runner-up prizes including a year's worth of web hosting and fancy Cadre t-shirts, how about that? Are you so excited you just can't stand it? We are too, so get over there, sign up and start rallying your supporters!

We'll see one of you in London, good luck!

The Cadre Web Hosting Team

Apr 19 2011
Apr 19

Managing a properly configured server stack is one of the pain points in developing small client sites. Shared hosting is usually sub-par, setting up a VPS from scratch is overkill, and automation/simplification of the server configuration and deployment is always welcome. So I've been very interested in seeing how DotCloud might work for Drupal sites.

DotCloud is in the same space as Heroku, an automated server/deployment platform for Rails applications. DotCloud is trying to cater to much more than Rails, however: they currently support PHP (for websites and "worker" daemons), Rails, Ruby, Python, MySQL, PostgreSQL, and have Node.js, MongoDB and a whole bunch of other components on their roadmap.

The basic idea is to automate the creation of pre-configured EC2 instances using a shell-based API. So you create a web and database setup and push your code with four commands:

dotcloud create mysite
dotcloud deploy -t php mysite.www
dotcloud deploy -t mysql mysite.db
dotcloud push mysite.www ~/code

Each "deployment" is its own EC2 server instance, and you can SSH into each (but without root). The "push" command for deployment can use a Git repository, and files are deployed to the server Capistrano-style, with symlinked releases and rollbacks. (This feature alone, of pushing your code and having it automatically deploy, is invaluable.)

Getting it to work with Drupal is a little tricky. First of all, the PHP instances run on nginx, not Apache. So the usual .htaccess file in core doesn't apply. Drupal can be deployed on nginx with some contortions, and there is a drupal-for-nginx project on GitHub. However, I write this post after putting in several hours trying to adapt that code to work, and failing. (I've never used nginx before, which is probably the main problem.) I'll update this post if I figure it out, but in the meantime, this is only a partial success.

The basic process is this:

  • Set up an account (currently needs a beta invitation which you can request)
  • Install the dotcloud client using python's easy_install app
  • Set up a web (nginx) instance with dotcloud deploy
  • Set up a database (mysql or postgres) instance
  • Set up a local Git repo, download Drupal, and configure settings.php (as shown with dotcloud info)
  • Push the repository using dotcloud push
  • Navigate to your web instance's URL and install Drupal.
  • To use your own domain, set up a CNAME record and run dotcloud alias. ("Naked" domains, i.e. without a prefix like www, don't work, however, so you have to rely on DNS-level redirecting.)
  • For added utility, SSH in with dotcloud ssh and install Drush. (Easiest way I found was to put a symlink to the executable in ~/bin.)

The main outstanding issue is that friendly URLs don't work, because of the nginx configuration. I hope to figure this out soon.

Some other issues and considerations:

  • The platform is still in beta, so I experienced a number of API timeouts yesterday. I mentioned this on Twitter and they said they're working on it; today I had fewer timeouts.
  • The server instances don't give you root access. They come fully configured but you're locked into your home directory, like shared hosting. I understand the point here - if you changed the server stack, their API and scaling methodologies wouldn't work - but it means if something in the core server config is wrong, tough luck.
  • The shell (bash in Ubuntu 10.04 in my examples) is missing a Git client, vim, and nano, and some of its configuration (for vi for instance) is wonky out of the box.
  • The intended deployment direction is one-way, from a local dev environment to the servers, so if you change files on the server, you need to rsync them down. (You can SSH in with the dotcloud app and put on a normal SSH key for rsync.)
  • Because the webroot is a symlink, any uploaded files have to be outside the webroot (as a symlink as well). This is normal on Capistrano setups, but unusual for most Drupal sites (running on shared or VPS hosting).
  • It's free now, but only because it's in beta and they haven't announced pricing. It is yet to be seen if this will be cost-effective when it goes out of beta.
  • They promise automated scaling, but it's not clear how that actually works. (Nowhere in the process is there a choice of RAM capacity, for instance.) Does scaling always involve horizontally adding small instances, and if so, does that make sense for high-performance applications?

Conclusion so far

The promise of automated server creation and code deployment is very powerful, and this kind of platform could be perfect for static sites, daemons, or some custom apps. If/when I get it working in Drupal, it could be as simple as a shell script to create a whole live Drupal site from nothing in a few seconds.

Try it out! I'm very curious what others' experience or thoughts are on this approach. I'd especially love for someone to post a solution to the nginx-rewrite issue in the comments.

Update 4/22:

Their support staff recommended reducing the nginx.conf file to one line:

try_files $uri $uri/ /index.php?q=$uri;

And that worked. However, it leaves out all the other recommended rules for caching time, excluding private directories, etc. I asked about these and am still waiting for a reply.
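For context, that try_files line lives inside a location block of the nginx vhost. A fuller sketch of what the omitted rules might look like follows; the extra directives here are my own assumptions about typical Drupal-on-nginx configs, not DotCloud's actual recommendations:

```nginx
location / {
    # Clean URLs: try the file, then the directory, then hand off to Drupal.
    try_files $uri $uri/ /index.php?q=$uri;
}

# Illustrative extras of the kind a stock config usually carries:
location ~ ^/sites/.*/private/ {
    deny all;          # keep private files unreachable from the web
}
location ~* \.(css|js|png|jpg|gif|ico)$ {
    expires 30d;       # long cache lifetime for static assets
}
```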

Also, to get file uploads to work properly, you'll need to put your files directory outside of the webroot, and symlink sites/default/files (or equivalent) to that directory, using a postinstall script. Mine looks like this (after creating a ~/drupal-files directory):

chmod g+w /home/dotcloud/current/sites/default && \
ln -s /home/dotcloud/drupal-files /home/dotcloud/current/sites/default/files && \
echo "Symlink for files created."

That runs whenever you run dotcloud push, and is similar to sites deployed with Capistrano, where the same kind of symlink approach is generally used.
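If you want to see the mechanics of this pattern without a DotCloud account, here's a throwaway dry run using made-up paths under /tmp (the real paths are the /home/dotcloud ones above):

```shell
# Dry run of the shared-files symlink pattern with illustrative paths:
# the "release" docroot is disposable; the files directory persists beside it.
mkdir -p /tmp/deploy-demo/current/sites/default /tmp/deploy-demo/drupal-files
ln -sfn /tmp/deploy-demo/drupal-files /tmp/deploy-demo/current/sites/default/files

# A write through the docroot path lands in the persistent directory:
echo "uploaded" > /tmp/deploy-demo/current/sites/default/files/upload.txt
cat /tmp/deploy-demo/drupal-files/upload.txt   # prints: uploaded
```

Because the symlink target lives outside the release directory, replacing `current` on the next push leaves every uploaded file intact.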

Sep 09 2010
Sep 09

I’ve been tossing about the idea of moving some of the sites I host to Amazon Web Services, specifically EC2 (server), S3 and EBS (storage) and CloudFront (content delivery network).  Price was my excuse for not jumping (OK, the real reason is the inevitable pain that comes with moving hosting providers) but with today’s introduction of “Micro” instances, I no longer have that excuse.

AWS now can host my sites for roughly the same price I pay for shared hosting!

(24 hours * 365 days) / 12 months = 730 hours/month

  • EC2 Micro pricing: $.02/hour = $14.60/month
  • EC2 Micro 1-year reserved instance pricing: $.007/hour (plus $54/year) = $9.61/month
  • EC2 Micro 3-year reserved instance pricing: $.007/hour (plus $82/3-years) = $7.39/month
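Those figures are easy to re-derive yourself; here's the arithmetic spelled out, using awk as a calculator (rates as quoted above):

```shell
# Re-derive the monthly EC2 Micro costs from the hourly rates.
awk 'BEGIN {
  hours = 24 * 365 / 12                                    # 730 hours/month
  printf "on-demand:     $%.2f/month\n", hours * 0.02
  printf "1-yr reserved: $%.2f/month\n", hours * 0.007 + 54 / 12
  printf "3-yr reserved: $%.2f/month\n", hours * 0.007 + 82 / 36
}'
```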

These prices are starting to overlap the range of shared hosting prices.

What’s so great about AWS, especially as it relates to Drupal?  A few things to consider:

  1. You have access to Amazon’s pipe. This is the same connection to the Internet backbone that Amazon itself relies on.  Sure, you could pick a shared hosting provider that’s well located and (according to their splashy promos) has high-throughput connections.
  2. Amazon’s services integrate well with each other.  Sure, you could sign up for a third-party CDN and, yes, some hosting providers offer daily backup.
  3. Database is on localhost – great for DB-intensive apps such as Drupal.  Sure, some shared hosting providers do that but most don’t put such geeky details on their splashy promo page.  A call to tech support will suss that out – in fact, if tech support knows what you’re talking about that bodes well for that provider.
  4. Lots of memory for PHP.  Sure, shared hosting, um...  never mind.

AWS hosting is unmanaged hosting – meaning there’s no hosting provider artificially capping your CPU usage, PHP memory or bandwidth.  Plus you’ve got full root access, meaning you can fiddle with more config files than you knew existed.  “Great,” you may exclaim, now I can tweak Apache/MySQL/PHP to my heart’s desire to squeeze every last optimized bit out of my server.  Yes, I can!

Yes, I can also take down my site (and that of my clients) because my LAMP know-how is just enough to get me in trouble.  Damn double-edged swords.

But I’ve been bumping up against limitations with shared hosting more and more frequently.  In other words:

if (shared_hosting.pain() > (server_migration.pain() + sys_admin.pain())) {
  AWS = new HostingProvider();
}
How AWS works

Pretend you run a 24-hour mini-mart. You’re open all the time so you have to have someone behind the counter at 3am when a random drunk needs a munchie fix.  You add more staff during the off-to-work rush (coffee, donuts and newspapers) and the afternoon rush (six-packs and heat-and-eat dinners) and scale back staff when it’s less crowded.  But all this time you pray you don’t get caught understaffed by some odd rush at 10pm on a Thursday night because you make your money by selling stuff to people as fast as they come in the door.

A site such as this, which got about 2000 page-hits last month (despite the fact I haven’t posted since May...  um, sorry about that...) spends a great deal of time twiddling fingers waiting for page requests.  But if this article gets linked to by the likes of Slashdot or Digg, the one high-school stoner manning the checkout counter ain’t going to meet the needs of all those visitors.

Like the mini-mart, you want to add resources during busy times.  With AWS, you can do this automatically – swap the stoner for a half dozen smiling, uniformed (sober) staff when 20 people walk in the door.  Swap them out when they leave.

All this mini-mart talk is making me hungry.  Maybe I’ll start moving servers after lunch...

Jun 08 2010
Jun 08

This week Rackspace Europe announced Ixis IT as their Partner of the Month for June 2010 to promote our dedicated Drupal managed hosting service and our accompanying monthly Drupal support package.

Ixis have used Rackspace for over 5 years due to their strong reputation in the hosting industry backed by fanatical support and an enterprise service level agreement.

We've been able to provide friendly and knowledgeable advice for all our customers' hosting specifications - including load balancing, branch offices and VPNs, intrusion detection systems (IDS), dedicated Apache Solr & Tika indexing servers, along with deploying sites using Pressflow, Memcache and Varnish for improved performance in high traffic or complex site setups.

Contact us for your next project's hosting needs.

You can see the original press release on the Rackspace Europe site.

May 18 2010
May 18

     In the last few years, the term “cloud computing” has become the ubiquitous new buzz-word to toss around when talking about flexible, scalable hosting solutions. Unfortunately, as often happens when a new concept is introduced to a large market, it is often misunderstood; the benefits and drawbacks are misrepresented (intentionally or not).

A recent schematic of 'The Cloud'

     The original concept of cloud computing was a utility-like service that would provide a pool of shared resources for clients to use on-demand – thereby allowing customers to leverage a large pool of resources without having to bear the burden of the capital cost of building or maintaining the entire infrastructure themselves. In the web hosting world what we refer to as “cloud computing” sometimes aligns with this original concept, sometimes not so much, depending on the service in question.

     In this post, we hope to remove some of the mystery around cloud computing as it relates to web hosting services; pull back the curtain a bit and explain what is actually happening inside that misty and illusory “cloud”.

Virtualization and The Cloud
     All cloud products that are offered as a web hosting service have one thing in common: they are virtualized. "Virtualized" means that your “server” (whether called an “instance”, “slice” or something else by your service provider) is actually a virtualized operating system (kernel) that is referred to as a “virtual machine”, or VM. The VM is running inside a “container” system, which is in charge of managing the resources of multiple VMs running alongside yours. There are several products that do this, some of which you may be familiar with – some of the more well known products out there are VMware, VirtualBox, and XEN.

Although all cloud products are virtualized, not all virtualized hosting services are cloud services. What distinguishes the two:

  • the ability to scale dynamically
  • resources that can be added or removed by the customer through an API
  • utilized resources billed by some small increment, such as by the hour or day (although some providers do have larger billable increments, such as weeks or months)

This flexibility is generally considered to be the largest advantage of a cloud product over your more standard virtualized private server: it allows you to dial resources up and down, on-demand and as-needed, without having to commit to a lot of hardware that comes with a bunch of unused overhead during non-peak hours.

The team said my post needed more pictures, so, eh, here is a guy 'Virtualizing'

     Once you are on a virtualized platform, your “server” becomes platform agnostic. It is now a VM. This means that it will run anywhere the “container” will run. It doesn't matter what the hardware is. This means that (compared to an operating system installed directly on some hardware) it's trivially easy to move your VM from one container to another – from server to server, or even from datacenter to datacenter. The various virtualization technologies have different ways to do this, with different requirements, but they almost all have some way to “migrate” a VM from one host to another. Here is where an important distinction between virtualized private server (or “VPS”) services and cloud services comes into play.

Under the Hood
     In order for cloud services to be able to scale in an elastic fashion, they must use some sort of network attached storage. Network attached storage is pretty much what it sounds like - a physical disk storage system that connects to a server over a network. In the case of the cloud, your VM runs on a server which provides the "brain", the RAM/CPUs and other hardware necessary for the operating system to run and connect to the Internet. But instead of having hard drives inside that same server, the storage is on a networked storage solution that is connected to the server running your VM (in many cases just using your standard Ethernet interface and switching gear).

     Having network based storage is generally a requirement for any type of “hot migration”, that is, moving a VM from one server to another very seamlessly. In this configuration, using XEN for instance, you can migrate a running VM from one server to another without even a reboot. This is useful when you need to allocate more RAM or CPU cores to a VM that is on a server that is fully utilized (by migrating to a server with more resources available), or if a server has some hardware failure and needs maintenance, in which case all the VMs on it can be migrated with minimal interruption.

More Ins and Outs

Image of an actual I/O bottleneck in the cloud, taken by electron microscope and artificially colorized by photoshop.

     There are several drawbacks to using network attached storage, the primary one being I/O. I/O, very simply, is the rate at which data can be transferred “In” and “Out” of a device. There are various kinds of I/O that affect performance in various ways, but in this case we are talking about disk I/O. There are several types of network attached storage products out there, but the most popular is iSCSI, which is basically SCSI over Ethernet. This means that if you are running Gigabit Ethernet, you have 1,000 Mbit/s of I/O. Serial attached SCSI (SAS) drives put you at 4,800 Mbit/s - in a RAID configuration, you can get much more than that. If you are running 10 Gigabit Ethernet, you have 10,000 Mbit/s.

     Comparing I/O rates between devices and standards is never an apples to apples comparison, because it completely depends on the hardware and network configuration - for example it doesn't matter if you've spent all your money on a 10GbE network if you have connected 8 servers, each with 8 CPU cores, to one 10GbE network connection - you are now sharing that connection with potentially 64 CPU cores - that many cores could easily saturate your expensive 10GbE network several times over. Now, of course, even if each server had a dedicated direct attached SCSI array you could still saturate your disk I/O, but you would have much more overhead to play with, and you could more easily dedicate resources to a specific VM or set of VMs.
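To put a number on that saturation point, divide the uplink across the cores that share it (same figures as above, and assuming all cores contend evenly, which real workloads won't):

```shell
# 8 servers x 8 cores behind a single 10GbE (10,000 Mbit/s) uplink:
awk 'BEGIN { printf "%.2f Mbit/s of uplink per core\n", 10000 / (8 * 8) }'
# prints: 156.25 Mbit/s of uplink per core
```

That per-core share is well under what even a single SAS spindle can stream, which is why the shared link, not the disks, becomes the bottleneck.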

     As always, there are some exceptions to the rule, but as of right now, most cloud products will not give you as much I/O as you can get out of dedicated direct attached storage. I/O is not everything, but this is one potential bottleneck.

Keep it down over there! - or - Noisy Neighbors
These crazy kids up all night saturating my network

     Perhaps more important than raw I/O is the fact that cloud products (and shared VPS services for that matter) are based on shared resources. This is great when you don't want to pay for all the resources you have access to all the time, but what happens when someone sharing the same resources as you maxes them out? Your service suffers. If they saturate the network your storage is on, your service can suffer *a lot*. For disk I/O intensive applications (like Drupal) this can mean a slow, frustrating death. This is true not only for shared network resources, but for shared CPU cores (you can't share RAM for technical reasons, so at least you are safe there). If you have someone sharing the same resources as you, and they use more than their "fair share", your service will be impacted. This is called having “Noisy Neighbors”. Some people mitigate this by rebooting their VM, hoping they boot back up on a less resource-bound server. Some just wait it out. Many just watch the low load numbers on their VM, and puzzle over the lackluster performance.

What should I do?!?
     Decisions about what type of hosting service to use depend on your particular situation. In most cases, we always say “the proof is in the pudding” - if the service you are on is making you happy, and the price is sustainable, then stay there for goodness sake! But if you experience performance problems on a budget hosting or cloud service that are causing you to lose money or traffic, maybe it's time to look at a dedicated server environment.

     Drupal, especially with the default database-based cache enabled, can be particularly I/O heavy on the database. Many cloud providers offer a hybrid model, where you can have virtualized cloud servers that connect to external non-virtualized hardware with direct attached storage. This is a great solution if you can afford it – it allows you to put your database server on dedicated hardware, but keep your web servers on a more flexible solution.

     If you can't afford that, then look at Drupal performance solutions like Pressflow or Mercury. If that's not an option, the next stop may be a VPS with direct-attached disks and dedicated resources.

     If you feel like you are having I/O or "noisy neighbor" issues, check whether you are on a "shared" resource plan, and ask your provider what type of infrastructure they are using and whether it is saturated. If they won't tell you, and you continue to have problems, it's probably time to start looking for another provider.

Next up: Caching and Drupal – examining apc, varnish, boost, memcache, and content delivery networks, and how they work together.


Mar 27 2010

This short video shows you how to create a cron job using the Dreamhost web panel that will execute cron on a Drupal site. As I have noted previously, executing the cron file is an important task for any Drupal site. Many modules rely on cron for periodic processing of data, including the core search and aggregator modules. Running cron also prompts the system to check for module updates.

Here's what I entered in the video (with my site's URL removed; be sure to substitute the URL of your own site): wget -qO /dev/null
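For anyone managing cron from a plain crontab rather than the Dreamhost panel, the equivalent entry might look like the line below. This is a hypothetical example: www.example.com stands in for your own domain, and cron.php is the standard cron entry point for Drupal sites of this era.

```shell
# Hypothetical crontab entry: hit Drupal's cron.php once an hour,
# discarding wget's output. Replace www.example.com with your domain.
echo '0 * * * * wget -qO /dev/null http://www.example.com/cron.php'
```

The five leading fields (minute, hour, day of month, month, day of week) mean "at minute 0 of every hour"; adjust them to whatever schedule your site needs.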

This video is one in a series of videos I created called Getting The Most Out Of Dreamhost. If you're a current Dreamhost customer, you probably already know how to do things like set up a domain or create a MySQL database, so it may not be relevant to you. If you're considering Dreamhost, I think the series offers a very good preview of how the web panel makes it easy to launch new Drupal sites. There's also a discount code (LEARNDRUPAL2010) on that page that will save you $50 on a new Dreamhost web hosting account.

Note: Click the 'full screen' icon (to the left of the volume control) in order to watch online at full 1280x720 resolution.


Oct 29 2009

So you're a small startup company, ready to go live with your product, which you intend to distribute under an Open Source license. Congratulations, you made a wise decision! Your developers have been hacking away frantically, getting the code in good shape for the initial launch. Now it's time to look into what else needs to be built and set up, so you're ready to welcome the first members of your new community and to ensure they keep coming back!

Keep the following saying in mind, which holds especially true in the Open Source world: "You never get a second chance to make a first impression!" While the most important thing is of course to have a compelling and useful product, this blog post is an attempt to highlight some other aspects of community building and providing the adequate infrastructure. This insight is based on my own experiences and my observations from talking with many people involved in OSS startups and projects.

First of all, realize that your community is diverse. They have different expectations, skills and needs. Pamper your early adopters. They are the multipliers that help you spread the word, if they are convinced and excited about what you provide. Put some faith and trust in them and listen to their input. In the beginning, you might want to focus on your developer community and the tech-savvy early adopters, but this of course depends on the type of product you provide and on what your target audience looks like. In any case, make sure that you provide the necessary infrastructure to cater to the respective needs of these different user bases.

Also remember that you cannot overcommunicate with your community. Blog heavily, write documentation/FAQs/HOWTOs, build up Wiki content and structure, create screencasts. Don't rely on the community to create any of this in the early stages. But be prepared to embrace and support any such activities if they arise. Solicit input, provide opportunities and guidelines for participation!

While it's tempting to do: don't establish too many communication channels in the beginning. Keep it simple and reduce the different venues of communication to an absolute minimum at this point. A new forum with many different topics but no comments looks like an art gallery with a lot of rooms that are either empty or have just a single picture hanging on the wall. Nobody wants to visit that; they'd feel lost in the void. At the early stage of a project, I think it's essential to keep the discussions in as few places as possible. This helps you to identify your key community contributors (the "regulars" aka the "alpha geeks") and to build up personal relationships with them (and among themselves).

Consider establishing a forum with only a few topics, and start with one or two mailing lists. Also make sure that these are actively being followed (e.g. by yourself or your developers) and that questions are being answered! I personally prefer mailing lists over forums, but I'm probably not representative. Ideally, there would be a unified communication hub that supports both posting via the web site, like a forum, and via email or NNTP (similar to Google Groups). This keeps the discussions in one central place (which eases searching for specific keywords/topics) and still allows users to choose their preferred means of communication. Unfortunately, I haven't really found any suitable platform for this approach yet — suggestions are welcome! And once your community grows and people start complaining about too many or off-topic discussions, you can think about further separation of the discussion topics.

Allow your users to submit and comment on issues and feature requests by providing a public bug/feature tracking system. Use this system for your release tracking and planning as well, to give your users a better insight into what they can expect from upcoming versions. Also, make it very clear to your users where bug reports and feature requests should be sent! Should one use the forums or the bug tracker for that? A mailing list or forum makes it easier for users to participate in these discussions, but makes it more difficult to keep track of them and to ensure they are being followed up on. For the sake of simplicity, I would actually suggest removing any separate forums on these topics. Instead, educate your community early about which is the right tool and venue to use for such requests. This saves time and resources on your side and helps to build up an initial core of community members that can then educate others about "the ropes". Otherwise you end up with the burden of keeping track of every feature request or bug report that was posted somewhere, ensuring it has been added to the bug tracker...

If your community infrastructure consists of separate building blocks providing the required functionality (e.g. forums, bug tracking, wiki), consider setting up a single sign-on (SSO) technology and establish a unified look and feel across these applications. Your users should not be required to log in with more than one username and password, and every application should share the same login and profile data. However, only require a login if absolutely necessary! Many users feel alienated by having to enter their personal data, even if they only want to lurk around or browse through existing discussions or documentation. As an additional benefit, it helps you to quickly identify your "community stars" in the various sections of your site: Who reports the most bugs? Who is the most helpful person on the forums? This information could also be published on your community site, giving users the opportunity to build up reputation and karma. Community infrastructure platforms like Drupal or Joomla provide an excellent foundation to get you started, while offering enough room for improvement and additional functionality at a later point.

Lower the entrance barrier and make it as easy as possible for people to get started with your application. Don't just throw a source archive at them, hoping that someone else will take care of doing the binary builds. Put some effort into building and providing binary, ready-to-install packages for the most popular platforms that your target audience is likely to use. The three most important platforms to cover are Microsoft Windows, Mac OS X and Linux. While users of the latter usually have the required tools and experience to build from source, Windows and Mac users are usually "spoiled" and don't want to be bothered with installing a full-fledged development environment before they can even evaluate your application.

When it comes to Linux distributions, you should look into building distribution-specific packages. This heavily depends on the requirements for external libraries that your application is using, which might differ on the various flavours of Linux. Depending on the purpose of your application, you may either focus on the more desktop/developer-centric distributions like Mandriva, openSUSE, Ubuntu, or on the distributions commonly used in server environments, e.g. Debian, CentOS, Fedora, RHEL, SLES (Yes, I am aware that most distributions are multi-purpose and serve both tasks equally well, and it's of course possible to use each of them to get the job done — it's a matter of taste and preference). If possible, make use of existing build infrastructure like Fedora's Koji build system, Launchpad's Personal Package Archives (PPA) or the openSUSE Build Service (which even allows you to build RPMs and DEBs for non-SUSE distributions) to automate the building and provisioning of distribution-specific packages for a wide range of Linux flavours. If your application is slightly complicated to install or set up, consider providing a live demo server that people can access via the Internet to give it a try. Alternatively, create ready-to-run images for virtual machines like Parallels, VirtualBox or VMWare. Everything that makes it easier to access, install and test your software should be explored.

In closing, make community involvement a part of your company culture and make sure that you preserve enough time to take care of it. Community engagement has so many different aspects, you don't necessarily have to be a developer or a very technical person to get involved. I'm aware that doing community work can be seen as a distraction and definitely takes away time from other tasks. But community involvement should become a habit and a well-accepted part of everyone's job — this is much easier to establish while you're still small and growing.

Jul 14 2005

Drupal Donations Thanks Poster

Lots of good news for the Drupal project:

The future looks bright...

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
