Dec 15 2014

Earlier this year we undertook a project to upgrade a client's infrastructure to all new servers, including a migration from old Puppet scripts that were starting to show their age after many years of server and service changes. During this process, we created a new set of Puppet scripts using Hiera to separate configuration data from modules. The servers in question were all deployed with CentOS, and it soon became obvious that we needed a modular way in Puppet to install and configure yum package repositories from within our various Puppet modules.

Searching through the Puppet Forge uncovered a handful of modules created to deal with yum repos; however, most were designed to implement a single repository or were not easily configurable via Hiera, which was one of our main goals for the new Puppet scripts. So we decided to create our own module, and the yumrepos Puppet module was born.

The idea behind the module is to provide a clean and easy way to pull in common CentOS/RHEL yum repos from within Puppet. By wrapping each repo with its own class (e.g. yumrepos::epel and yumrepos::ius), we gain the ability to override default class parameters with Hiera configuration, making the module easy to use in almost any environment. For example, if you have your own local mirror of a repo, you can override the default URL parameter either in Hiera or from the class declaration, without having to directly edit any files within the yumrepos module. This is as easy as:

  1. In your calling class, declare the yumrepo class you need. In this example, we'll use EPEL: class { 'yumrepos::epel': }
  2. In your Hiera configuration, you can configure the repo URL with: yumrepos::epel::epel_url: http://your.local.mirror/path/to/epel/
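For example, the same override expressed as a class declaration (a minimal sketch; epel_url is the parameter shown in the Hiera example above):

    # Minimal sketch: point yumrepos::epel at a local mirror from the
    # class declaration instead of Hiera.
    class { 'yumrepos::epel':
      epel_url => 'http://your.local.mirror/path/to/epel/',
    }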

Currently the yumrepos module provides classes for the following yum repos:

  • Drupal Drush 5
  • Drupal Drush 6
  • EPEL
  • IUS Community (Optionally: IUS Community Archive)
  • Jenkins
  • Jpackage
  • Percona
  • PuppetLabs
  • RepoForge
  • Varnish 3
  • Zabbix 2.4

Each repo contains the GPG key for the repo (where available) and defaults to enabling GPG checks. Have another repo you'd like to see enabled? Feel free to file an issue or pull request.

Additionally, yumrepos classes accept parameters for package includes or excludes so that you can limit packages on a per-repo basis. These translate to the includepkgs and exclude options within a yum repo configuration. Similar to overriding the default repo URL, these options can be overridden by passing parameters within the class declaration or by setting the appropriate variables within Hiera.
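A rough sketch of what that looks like in a class declaration (the includepkgs and exclude parameter names are assumed to mirror the yum options mentioned above, and the package patterns are purely illustrative):

    # Minimal sketch: limit which packages a repo may provide.
    # Parameter names assumed from the yum options described above;
    # package patterns are only examples.
    class { 'yumrepos::percona':
      includepkgs => 'Percona-Server* percona-toolkit',
      exclude     => 'Percona-XtraDB-Cluster*',
    }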

Once we had the yumrepos module in place, we had an easy way to configure yum repos from within our other modules. Stay tuned to the blog; we'll have more information about the overall Puppet scripts coming soon.

Jun 04 2014

This is a repost of an article I wrote for the Acquia Blog some time ago.

As mentioned before, devops can be summarized by talking about culture, automation, measurement (metrics) and sharing. Although devops is not about tooling, there are a number of open source tools out there that can help you achieve your goals. Some of those tools will also enable better communication between your development and operations teams.

When we talk about Continuous Integration and Continuous Deployment we need a number of tools to help us there. We need to be able to build reproducible artifacts which we can test. And we need a reproducible infrastructure which we can manage in a fast and sane way. To do that we need a Continuous Integration framework like Jenkins.

Formerly known as Hudson, Jenkins has been around for a while. The open source project was initially very popular in the Java community but has since gained popularity in different environments. Jenkins allows you to create reproducible Build and Test scenarios and perform reporting on those. It provides you with a uniform and managed way to Build, Test, Release and Trigger the deployment of new Artifacts, both for traditional software and for infrastructure-as-code-based projects. Jenkins has a vibrant community that builds new plugins for the tool in different kinds of languages. People use it to build their deployment pipelines, automatically check out new versions of the source code, syntax test it and style test it. If needed, users can compile the software, trigger unit tests, and upload a tested artifact into a repository so it is ready to be deployed to a new platform level.

Jenkins can then trigger an automated way to deploy the tested software on its new target platform. Whether that is development, testing, user acceptance or production is just a parameter. Deployment should not be something we try first in production; it should be done the same way on all platforms. The deltas between these platforms should be managed using a configuration management tool such as Puppet, Chef or friends.
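As a rough illustration of keeping those deltas in data rather than in logic (the class and parameter names below are made up), the same manifest is applied on every platform and only its parameters change:

    # Minimal sketch with made-up names: one class for every platform;
    # only the data (environment name, worker count) differs per platform.
    class profile::appserver (
      $app_env = 'development',
      $workers = 2,
    ) {
      file { '/etc/myapp/settings.conf':
        ensure  => file,
        content => "environment=${app_env}\nworkers=${workers}\n",
      }
    }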

In a way this means that infrastructure as code is a testing dependency: you also want to be able to deploy a platform to exactly the same state it was in before you ran your tests, so that you can compare the results of your test runs and make sure they are correct. This means you need to be able to control the starting point of your tests, and tools like Puppet and Chef can help you here. Which tool you use is the least important part of the discussion; the important part is that you adopt one of them and start treating your infrastructure the same way you treat your code base: as a tested, stable, reproducible piece of software that you can deploy over and over in a predictable fashion.

Configuration management tools such as Puppet, Chef and CFEngine are just a part of the ecosystem, and integration with orchestration and monitoring tools is needed, as you want feedback on how your platform is behaving after the changes have been introduced. Lots of people measure the impact of a new deploy, and then we obviously move to the M part of CAMS.

There, Graphite is one of the most popular tools to store metrics. Plenty of other tools in the same area have tried to go where Graphite is going, but in terms of flexibility, scalability and ease of use, not many tools allow developers and operations people to build dashboards for any metric they can think of in a matter of seconds.

Just sending a keyword, a timestamp and a value to the Graphite platform gives you a large choice of actions that can be done with that metric. You can graph it, transform it, or even set an alert on it. Graphite takes away the complexity of similar tools and offers an easy-to-use API for developers, so they can integrate their own self-service metrics into dashboards to be used by everyone.

One last tool that deserves our attention is Logstash. Initially just a tool to aggregate, index and search the log files of our platform, which are often a hugely missed source of relevant information about how our applications behave, Logstash and its Kibana + ElasticSearch ecosystem are now quickly evolving into a real-time analytics platform, implementing the Collect, Ship+Transform, Store and Display pattern we see emerge a lot in the #monitoringlove community. Logstash now allows us to turn boring old logfiles, which people only started searching upon failure, into valuable information that product owners and business managers use to learn about the behavior of their users.

Together with the Graphite-based dashboards we mentioned above, these tools help people start sharing their information and communicating better. When thinking about these tools, think about what you are doing, what goals you are trying to reach and where you need to improve. Because after all, devops is not about solving a technical problem; it's about solving a business problem and bringing better value to the end user at a more sustainable pace. And in that way the biggest tool we need to use is YOU, as the person who enables communication.

Feb 22 2013


In this week's episode Addison Berry is joined by Lullabot's Joe Shindelar, Andrew Berry, and Ben Chavet to talk about automating server setup with tools like Puppet and Jenkins. Building a Drupal site is only part of the equation for a complete project; setting up the servers you will need, and making sure that you can reproduce the site setup consistently, is hugely important. It becomes even more important as your projects and teams grow in size. We cover the basics of what the goal is, what tools we use at Lullabot, and why we use them. This is a great overview of the topic for people who keep hearing these terms thrown around and would like to understand more of what it's all about.

Podcast notes

  • New videos out on Views Bulk Operations and Entity Views Attachment

Ask away!

If you want to suggest your own ideas for podcasts, or have questions for us to answer on a podcast, let us know:
Contact us page

Release Date: February 22, 2013 - 10:00am


Length: 38:50 minutes (22.4 MB)

Format: mono 44kHz 80Kbps (vbr)

Jan 13 2013

It has been a while since my last post, in which I announced that there would be another event right before FOSDEM. I totally forgot to announce it here, but I'm sure that most of you already know: yes, PuppetCamp Europe is coming back to its roots... it's coming back to the city where we hosted it for the first time on this side of the ocean, Gent (that's 31/1 and 1/2).

There is still time to register for the event. The schedule will be published soonish (given that the selection was done on Friday evening and the speakers have already received their feedback).

Co-located with PuppetCamp there will be another Build a Cloud day, with interesting topics such as CloudStack, Ceph, devops and a really interesting talk on how the Spotify crowd is using CloudStack.

So after those 2 days in Ghent, a lot of people will be warmed up for the open source event of the year FOSDEM.

And right after FOSDEM a bunch of people will gather at the Inuits office for 2 days of discussing, hacking and evangelizing around #monitoringlove (see previous post)

I almost forgot, but even before the FOSDEM week there is the 2013 PHP Benelux Conference, where I'll be running a fresh version of the 7 Tools for your devops stack talk.

There is a ****load of #DevopsDays events being planned this year. The 2012 New York edition will be taking place next week.
Austin and London have been announced and have opened up their CFP and registration, and different groups are organizing themselves to host events in Berlin, Mountain View, Tokyo, Barcelona, Paris, Amsterdam, Australia, Atlanta and many more.

And there's even more to come: April 6 and 7 will be the dates for the Linux Open Administration Days (Loadays 2013) in Antwerp again, a nice small conference where people gather to discuss different interesting Linux topics. The Call For Presentations is still open. Submit here

On the other side of the ocean there's DrupalCon Portland, which once again is featuring a #devops track, and the folks over at Agile 2013 (Nashville) have a #devops track now. Both events are still looking for speakers.

So if by the end of this year you still don't know what devops is all about .. you probably don't care and shouldn't be in the IT industry anyhow.

And those are only the events I'm somehow involved in for the next couple of months.

Aug 25 2012

While heading back home from DrupalCon Munich after 4 days of good interaction with lots of Drupal folks,
I realized to my big surprise that a lot of people are using Vagrant to make sure that developers are not working on platforms they invented themselves. Lots of people have realized that "It works on my computer" is not something they want to hear from a developer, and are reaching out to give them viable ways to work on shared and reproducible environments.

There were 2 talks proposing solutions to the problem:

The first one was "Fearless development with Drush, Vagrant and Aegir" by Christopher Gervais. He talked about Drush Vagrant integration and how extensions to Drush allow for easy Vagrant integration; bridging this gap allows Drupal developers to use a tool they are already familiar with.

The second one was by Jochen Lillich, who explained how he is using Vagrant and Chef for this purpose. His talk, titled "Use datacenter tools to make your dev life easier", has already been posted.

During the Vagrant BOF, I briefly ran over @patrickdebois' old slides on Vagrant, after which people started discussing their use cases. Two other projects came up.

The first is Project Oscar, which aims at providing developers with a default Drupal development environment in a jiffy. They do this by providing a bunch of Puppet manifests that set up a working environment.

The second one is Ariadne, a standardized virtual machine development environment for easily developing Drupal sites in a local sandbox that is essentially identical to a fully-configured hosted solution. It attempts to emulate a dedicated Acquia/Pantheon server as closely as possible, with added development tools. Project Ariadne, just like the examples from Jochen Lillich, is based on Chef.

With all of these tools and examples around, there should be no excuse anymore for Drupal developers to hack on their own machine and tell the systems people "It works on my machine" (let alone to hack in production).

Aug 25 2012

With 2 of the bigger open source projects I care about talking about certification programs, the questions pop up again...

Should we certify ourselves?

So let me tell you about my experiences in getting open source related certifications.

Over a decade ago (2001), when RedHat was still RedHat and not yet Fedora, the company I was working for was about to partner with RedHat and needed to get a number of people certified for that.

So I took the challenge: I bored myself to death during a 4-day RedHat fast-track training and set out to do the exam the next day. Obviously I scored pretty well given my years of experience with the subject. Back then I was told that I had scored the second-best European record on the exam; the record itself was held by another colleague (hey Ico). Our CTO, however, was not amused when I told him that I could have scored better, but that I didn't bother running a chkconfig smb on since I didn't see the use of Windows file shares in a Unix environment (yes, I was young; we're all allowed to make stupid mistakes :)).

So I was certified, and we were expecting the requests to flow in en masse... nothing happened... not a single customer request. If I recall correctly we got 2 requests for certified engineers over the course of the following years. One was from a customer that wanted us to do some junior-level sysadmin work on their systems, which we didn't care for; we proposed a more junior profile, but they insisted on having someone who was certified. The other one was from a large institution that wanted certified people for their RedHat support, only for us to quickly learn that the budget they had planned for the project was about half the rate we usually charged.

When RedHat introduced their Certified Architect program my answer was: sure, if you bring us the customer that will make the investment worthwhile. Guess what...

My second experience with open source certification came a couple of years later with MySQL; same story, partnering etc. Only this time our trainer had put some focus on a couple of slides during the training (hi Tobias), and during the exam one of those questions indeed popped up. The correct answer to "What are the core values of MySQL AB" was "We reply to email". I stood up and left the exam...
I ranted about this to a number of people, including Roland Bouman, who back then was just starting on the MySQL (NDBD) Cluster certification track, and I assisted him in making the book to study for that exam better.
Once again, pretty much no one asked for MySQL certification in Europe back in those days (2007?).

I won't go deeper into discussing the Xen certification I got from Citrix, but it involved correcting slides from the presenters at the first European training.

Based on my experience with these certifications in Belgium/Europe, you can see that I'm not a big fan of certifications; I have not yet seen a reason for me to get certified.

I actually think that no one within the open source community should be looking for certification; we should be looking for people that are active in the community and that are contributing to projects.
Unlike in the proprietary world, where you have to cough up tons of money in order to get a license to play with a tool and learn it, in the open source world, with projects such as Drupal and Puppet, there are absolutely no excuses for junior people not to engage and prove themselves. They have full access to anything they need; the only thing they need to do is want to get involved.

Sadly, this world is still full of incompetent recruiters and middle-market agencies that will never understand this and will keep asking for certifications of some kind. My fear is indeed that there will be a group of mediocre but certified developers swarming these growing markets at dumping rates, and that the people with real experience who have been involved in the communities for ages will be the ones drawing the short straw.

Anyhow... in just a short couple of years everything will be fine again, as by then my RHCE will be current again and the incompetent recruiters that need people who are RedHat 7 certified will start calling me by the dozen.

Aug 06 2012

3+ months is probably the biggest timeout I've taken from blogging in a while.
Not that I didn't have anything to write about, but more that I was prioritizing writing different content over writing blog posts.

Blogging tech snippets and contributing documentation used to be one and the same; now all of that has evolved.
Anyhow...

So to get things going here's my preliminary Conference schedule for the next couple of months.

Next up: content... on how monitoring tools still suck... and I'm still not sure whether a certification program is relevant for open source consultants.

May 01 2012

Devopsdays Mountain View sold out in a short 3 hours, but there are other events that will breathe devops this summer.
DrupalCon in Munich will be one of them.

Some of you might have noticed that I'm co-chairing the devops track for DrupalCon Munich.
The CFP is open till the 11th of this month and we are still actively looking for speakers.

We're trying to bridge the gap between Drupal developers and the people that put their code into production, at scale, but also to enhance the knowledge of the infrastructure components Drupal developers depend on.

We're looking for talks on culture (both success stories and failures) and automation, and we are specifically looking for people talking about Drupal deployments, e.g. using tools like Capistrano, Chef or Puppet.
We want to hear where Continuous Integration fits in your deployment: do you do Continuous Delivery of a Drupal environment?
And how do you test? Yes, we'd like to hear a lot about testing: performance tests, security tests, application tests and so on.
Or have you solved the content vs code vs config deployment problem yet?

How are you measuring and monitoring these deployments and adding metrics to them so you can get good visibility into both system and user actions on your platform? Have you built fancy dashboards showing your whole organisation the current state of your deployment?

We're also looking for people talking about introducing different data backends (NoSQL), scaling different search backends, building your own CDN using smart filesystem setups, or making smart use of existing backends, such as tuning and scaling MySQL, memcached and others.

So let's make it clear to the community that Drupal people do care about their code after they've committed it to source control!

Please submit your talks here

Nov 02 2011

How many times have the following issues happened on a project you've worked on?

  • Notices (or worse) appeared on production because of a PHP version mismatch between a developer's machine and the production web servers.
  • A new PHP or PECL extension had to be installed on production because it was already installed in WAMP or MAMP.
  • A team member ran into difficulty setting up their local environment and spent many hours stuck on something.
  • Team members didn't set up SSL or Varnish on their local machines and issues had to be caught on a dev server.
  • A team member would like to switch to Homebrew, but can't set aside the many hours to redo their setup until a project is done.

Tools like MAMP, XAMPP, the Acquia Dev Desktop, MacPorts and Homebrew all make it easy to get an *AMP stack up and running on your computer, and tools like MacPorts and Homebrew even make it pretty easy to install tools like Varnish and memcached.

While these tools make it easy to run a very close approximation of the production hosting stack on your local machine (arguably closer if you use a Mac or Linux), it will still have some key differences, which will ultimately contribute at some point to a "Works on My Machine!" situation in your project.

Works On My Machine Badge

Luckily, virtualization has advanced to such a degree that there are cross-platform virtualization solutions such as VirtualBox, but just using a VM inside of VirtualBox doesn't solve the whole problem. It makes acquiring the correct versions of software easy, but keeping configuration in sync can still be a challenge for users who are not deeply familiar with Linux.

Vagrant is a Ruby gem that makes working with Linux virtual machines easy. You distribute a Vagrantfile to your team, and it does the following things for you:

  • Downloads and sets up virtual machines from a single .box file which it will download over HTTP or FTP.
  • Provisions the software and configuration on the VM using your choice of Chef, Puppet, or simple shell scripts
  • Automatically shares the directory containing the Vagrantfile (and any subdirectories) to the virtual machine with VirtualBox's built-in file sharing
  • Forwards the SSH port (and optionally other ports) to your localhost and avoids collisions so you can always directly SSH to the machine
  • Optionally sets up a dedicated host-only IP address that you can use to connect to all services on the VM without port forwarding
  • Optionally shares directories to the VM over NFS from a Macintosh or Linux host, which enables acceptable performance for a Drupal docroot

Since Vagrant handles the file sharing with the VM, you and your team don't have to mess around with setting up FUSE or the like, and you can continue to use the tools that you're used to using locally, such as your text editor or graphical source control program.

In addition, so long as you have a single developer skilled in ops who can encapsulate the production configuration into a system like Chef or Puppet, these changes can be pushed down to the whole team. Once your ops team has a working Varnish configuration, for example, they can push that into the Vagrant repository, and then a working Varnish setup on all your developers' VMs is just a git pull and a vagrant provision away.
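As a rough sketch of what that encapsulated configuration could look like in Puppet (module layout, file paths and the config source below are made up for illustration, not any particular team's module):

    # Minimal sketch with made-up paths: ops keep the production Varnish
    # config in the module; every developer VM picks it up on provision.
    class varnish {
      package { 'varnish':
        ensure => installed,
      }

      file { '/etc/varnish/default.vcl':
        ensure  => file,
        source  => 'puppet:///modules/varnish/default.vcl',
        require => Package['varnish'],
        notify  => Service['varnish'],
      }

      service { 'varnish':
        ensure => running,
        enable => true,
      }
    }

Once ops commit a change to a class like this, a git pull and a vagrant provision is all a developer needs to pick up the updated setup, exactly as described above.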

We've been working with Vagrant over the last few months and think it offers a number of advantages. All it takes to get started is installing VirtualBox and the vagrant Ruby gem. Detailed information on how to get started is available in the excellent Vagrant Quickstart guide.

I've put together a screencast that's just over 10 minutes long and shows the whole process of bringing up a CentOS 5.6 VM with the site shared from my local machine.

We'll be posting more example code over the coming weeks that will allow you to try out Drupal from your local machine on a Linux VM.

Jul 17 2011

For those who haven't noticed yet: I'm into devops. I'm also a little bit into Drupal (blame my last name), so one of the frustrations I've been having with Drupal (and much other software) is the automation of deployment and upgrades of Drupal sites.

So for the past couple of days I've been trying to catch up on the ongoing discussion regarding the results of the configuration management sprint. I've been looking at it mainly from a systems point of view, with the use of Puppet, Chef or similar tools in mind. I know I'm late to the discussion, but hey, some people take holidays in this season :) So below you can read a bunch of my comments and thoughts on the topic.

First of all, to me JSON looks like a valid option.
Initially there was a plan to wrap the JSON in a PHP header for "security" reasons, but that seems to be gone, even though nobody mentioned the problems it would have caused for external configuration management tools.
When thinking about external tools that should be capable of mangling the file: plenty of them support JSON but won't be able to recognize a JSON file with a weird header (I'm thinking e.g. about Augeas). I'm not talking about IDEs, GUIs etc. here; I'm talking about system-level tools and libraries that are designed to mangle standard files. For Augeas we could create a separate lens to manage these files, but other tools might have bigger problems with the concept.

As catch suggests, a clean .htaccess should be capable of preventing people from accessing the .json files. There are other methods to figure out whether files have been tampered with; I'm not sure this even fits within Drupal (I'm thinking about reusing existing CA setups rather than having yet another security setup to manage).

In general, to me, tools such as Puppet should be capable of modifying config files and then activating that config with no human interaction required. Obviously drush is a good candidate here to trigger the system after the config files have been changed, but unlike what some people think, having to browse to a web page to confirm the changes is not an acceptable solution. Just think about having to do this on multiple environments... manual actions are error prone.

Apart from that, I also think the storing of the certificates should not be part of the file. What about a meta file with the appropriate checksums? (Also, if I'm using Puppet or any other tool to manage my config files, then the security, i.e. preventing tampering with these files, is already covered by the configuration management tools.) I do understand that people want to build Drupal in the most secure way possible, but I don't think this belongs in any web application.

When I look at other similar discussions that wanted to provide a similarly secure setup, they ran into a lot of end-user problems with these kinds of setups. An alternative approach is to make this configurable and/or pluggable. The default should be to have it enabled, but more experienced users should have the opportunity to disable it, or replace it with another framework. Making it pluggable upfront saves a lot of hassle later.

Someone in the discussion noted:
"One simple suggestion for enhancing security might be to make it possible to omit the secret key file and require the user to enter the key into the UI or drush in order to load configuration from disk."

Requiring the user to enter a key in the UI or drush would be counterproductive to the goal one wants to achieve; the last thing you want as a requirement is manual/human interaction when automating setups. Therefore a feature like this should never be implemented.

Luckily there seems to be a new idea around that doesn't plan on using a mangled JSON file:
instead of storing the config files in a standard place, we store them in a directory that is named using a hash of your site's private key, like sites/default/config_723fd490de3fb7203c3a408abee8c0bf3c2d302392. The files in this directory would still be protected via .htaccess/web.config, but if that protection failed then the files would still be essentially impossible to find. This means we could store pure, native .json files everywhere instead, to still bring the benefits of JSON (human editable, syntax checkable, interoperability with external configuration management tools, native + speedy encoding/decoding functions), without the confusing and controversial PHP wrapper.

Figuring out the directory name for the configs from a configuration mgmt tool then could be done by something similar to

  1. cd sites/default/conf/$(ls sites/default/conf|head -1)

In general I think the proposed setup looks acceptable; it definitely goes in the right direction of providing systems people with a way to automate the deployment of Drupal sites and applications at scale.

I'll be keeping an eye on both the direction they are heading in and the evolution of the code!

Dec 27 2010

So far in this series we have covered a potential target market and business plan, resources and infrastructure, and the tools required to deliver Drupal sites with a sale price of $100 per site. In this post I'll be covering some of the considerations when building Drupal platforms or distributions.

The sites which customers deploy will need to be based on a custom Drupal distribution or "distro". The distro should be modular and primarily driven by Features.

Customers shouldn't have to know anything about administering Drupal when they first buy their site. A customer should be able to turn functionality on and off as they want, through a simple user interface.


The platform should contain a good collection of Features. The following list is an example of what you might offer customers:

  • Contact Form
  • Image Gallery
  • Products
  • Services
  • "Static" Pages
  • Blog
  • News
  • Mailing Lists
  • Social Network Integration
  • Office / Store Locations
  • Staff Profiles

When developing your list of things to include in the site, think in terms of functionality a small business would want, not what modules you should be using. The list of modules should be derived from the functionality, not the other way around.

As the features included in the platform will be modular and generically useful, you should consider releasing them publicly, via your own features server or as full modules.

On top of the features listed above you will probably need to include some custom glue code to enhance the user experience. In my first post in this series I discussed the target audience not having high level computer skills, so the user interface should take this into account. Some of the language might need to be changed or form options modified to use sane defaults and some might even be hidden from the user.


As each server may have hundreds or even thousands of sites running on it, security will be an important consideration. As with all servers, you should ensure it is properly locked down and only running the services you need. Apache should be configured to block access to most things except index.php, relevant client-side files (images, css, js) and the files directory. At the Drupal level you should make sure that things like the PHP module aren't enabled and that secure coding practices are adhered to. The user account given to your customer shouldn't be user 1; it should be user 2, with restricted permissions that only give them access to what they need.

I strongly recommend that you read Cracking Drupal by Greg Knaddison.

Sales and Support

In order to attract customers you will need a site to promote the service and allow customers to sign up and hand over their credit card details. Drupal now offers 3 ecommerce projects: Drupal e-Commerce, Drupal Commerce and Ubercart; you should investigate which of these best suits your needs. The sales system will need some custom code to hook into Aegir, which will be managing the actual site deployments. The sales and support platform(s) should be managed in a similar manner to the customer sites.

Once you have paying customers, you will also need to provide them with some resources such as detailed documentation, video walk throughs, forums and possibly a ticketing system. This site can either be part of the sales site or a separate site. In the next instalment I'll cover support in more detail.

Deploying Platforms

We need to keep the whole process very automated, CPU cycles are a lot cheaper than workers. Building and deploying platforms should involve a few clicks or tweaking a configuration file. For example platforms could be built as Debian (or Ubuntu) packages (aka debs) using an automated build process that runs a test suite against the code base before building a deb. The debs could then be deployed using puppet and a post installation script can notify Aegir that it has been installed successfully. The whole process could involve very little human interaction. Migrating client sites to upgraded platforms could also be automated using a simple script which adds a migrate task for each site.
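A rough sketch of that last step in Puppet (the package name and the notification script below are hypothetical; the real hook into Aegir would depend on your setup):

    # Minimal sketch with made-up names: install the platform package and
    # run a (hypothetical) post-install script that tells Aegir about it.
    package { 'drupal-platform-smallbiz':
      ensure => '1.2.0',
    }

    exec { 'register-platform-with-aegir':
      command     => '/usr/local/bin/notify-aegir-new-platform.sh smallbiz-1.2.0',
      refreshonly => true,
      subscribe   => Package['drupal-platform-smallbiz'],
    }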

What's Next?

Now that we have the service almost ready to go, we should look into how we are going to get customers to part with their cash and how we will support them once they have paid.

Feb 16 2010

So John wrote down his experiences on deploying Drupal sites with Puppet.

It's not a secret that I've been thinking about similar stuff and how I could get to the best possible setup.

John starts off with using Puppet to download Drush... while I want to use rpm for that.

I want my core infrastructure to be fully packaged... not downloaded and untarred. I want to be able to reproduce my platform in a couple of months, with the exact same versions I'm using now, not with whatever version happens to be available at that point in time, or with the download site being down.

Now the next question, of course, is what the core infrastructure is.
Where does the infrastructure end and where does the application start? There's little discussion about having a Puppet-created vhost, an Apache conf.d file, a matching .htaccess file if wanted, and the appropriate settings.php for a multisite Drupal config.

There's also little doubt to me about using drush to run the updates, manage the Drupal site, etc. Reading John's article made me think some more about what I want packaged, and when.

John's post led to a discussion on #infra-talk with Karan and some others about getting all Drupal modules packaged for CentOS.

In a development environment I probably want periodic drush updates getting the latest modules from the interwebs and potentially breaking my devs' code, while making sure that when you put a site into production it will be on a fairly up-to-date platform, and not on the platform you started developing on 24 months ago.

In a production environment, however, you only want tested updates of your modules, as untested updates will indeed break code.

It's probably going to be a mix-and-match setup: having a local rpm/deb repo with packaged modules that have been tested and validated in your setup, and using drush to enable or configure them for that production setup.
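A rough sketch of that mix in Puppet (the repo URL, module name and site path below are made up, and the drush guard is only illustrative):

    # Minimal sketch with made-up names: tested module packages come from a
    # local repo, and drush enables them on the production site.
    yumrepo { 'local-drupal-modules':
      baseurl  => 'http://repo.example.com/drupal-modules/',
      enabled  => 1,
      gpgcheck => 1,
    }

    package { 'drupal-module-views':
      ensure  => installed,
      require => Yumrepo['local-drupal-modules'],
    }

    exec { 'drush-enable-views':
      command => '/usr/bin/drush --root=/var/www/mysite pm-enable -y views',
      unless  => '/usr/bin/drush --root=/var/www/mysite pm-list --status=enabled --pipe | /bin/grep -qx views',
      require => Package['drupal-module-views'],
    }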

But also having a CI environment where drush will get the new modules from the interwebs when needed and package them for you.

To me that sounds better than taking all the available Drupal modules and packaging them, even in an automated fashion, and preparing a repository of modules of which only a small percentage will actually be used by people.

But I need to think about it some more :)
