Mar 06 2019
Platform.sh, like any good PaaS, exposes a lot of useful information to applications via environment variables. The obvious parts, of course, are database credentials, but there's far more that we make available to allow an application to introspect its environment.

Sometimes those environment variables aren't as obvious to use as we'd like. Environment variables have their limits, such as only being able to store strings. For that reason, many of the most important environment variables are offered as JSON values, which are then base64-encoded so they fit nicely into environment variables. Those are not always the easiest to read.
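
To see why that's awkward, here's what decoding one of those variables by hand looks like. This is a self-contained sketch: the variable name and the payload structure are illustrative, not an exact Platform.sh payload.

```python
import base64
import json
import os

# Simulate a base64-encoded JSON environment variable, the way a PaaS
# might expose relationship data. (Name and structure are illustrative.)
payload = {'database': [{'host': 'db.internal', 'port': 3306}]}
os.environ['PLATFORM_RELATIONSHIPS'] = base64.b64encode(
    json.dumps(payload).encode()).decode()

# Decoding by hand: base64-decode, then JSON-parse.
relationships = json.loads(
    base64.b64decode(os.environ['PLATFORM_RELATIONSHIPS']))
print(relationships['database'][0]['host'])  # db.internal
```

Workable, but hardly pleasant to repeat in every application.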

That's why we're happy to announce all new, completely revamped client libraries for PHP, Python, and Node.js to make inspecting the environment as dead-simple as possible.


All of the libraries are available through their respective language package managers:


composer require platformsh/config-reader


pip install platformshconfig


npm install platformsh-config --save

That's it, you're done.


All three libraries work the same way, but are flavored for their own language. All of them start by instantiating a "config" object. That object then offers methods to introspect the environment in intelligent ways.

For instance, it's easy to tell if a project is running on Platform.sh at all, whether it's in the build hook, or whether it's in an Enterprise environment. In PHP:

$config = new \Platformsh\ConfigReader\Config();

$config->isValidPlatform(); // True if env vars are available at all.

// Individual environment variables are available as their own properties, too.
// ...

The onProduction() method already takes care of the differences between Professional and Enterprise and will return true in either case.

What about the common case of accessing relationships to get credentials for connecting to a database? Currently, that requires deserializing and introspecting the environment blob yourself. But with the new libraries, it's reduced to a single method call. In Python:

config = platformshconfig.Config()

creds = config.credentials('database')

This will return the access credentials to connect to the database relationship. Any relationship listed in your application configuration is valid there.

What if you need the credentials formatted a particular way for a third-party library? Fortunately, the new clients are extensible. They support "credential formatters", which are simply functions (or callables, or lambdas, or whatever the language of your choice calls them) that take a relationship definition and format it for a particular service library.
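
In spirit, a credential formatter is nothing more than a function from a relationship definition to whatever shape a client library wants. Here's a language-agnostic sketch in Python; the registry functions and the relationship field names are illustrative, not the libraries' actual API.

```python
# A registry of credential formatters: plain functions that turn a
# relationship definition into whatever a client library expects.
# (Sketch only; names and fields are illustrative.)
formatters = {}

def register_formatter(name, func):
    formatters[name] = func

def formatted_credentials(creds, formatter_name):
    return formatters[formatter_name](creds)

# A formatter in the spirit of the solr-node one described below: the
# relationship stores a path such as 'solr/collection1', but the
# library wants just the collection name.
register_formatter('solr-node', lambda creds: {
    'host': creds['host'],
    'port': creds['port'],
    'core': creds['path'].split('/')[-1],
})

creds = {'host': 'solr.internal', 'port': 8080, 'path': 'solr/collection1'}
print(formatted_credentials(creds, 'solr-node'))
# {'host': 'solr.internal', 'port': 8080, 'core': 'collection1'}
```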

For example, one of the most popular Node.js libraries for connecting to Apache Solr, solr-node, wants the name of a collection as its own string. The relationship provides a path, since there are other libraries that use a path to connect. Rather than reformat that string inline, the Node.js library includes a formatter specifically for solr-node:

const solr = require('solr-node');
const config = require("platformsh-config").config();

let client = new solr(config.formattedCredentials('solr-relationship-name', 'solr-node'));

Et voilà. client is now a solr-node client and is ready to be used. It's entirely possible to register your own formatters, too, and third-party libraries can include them as well:

config.registerFormatter('my-client-library', (creds) => {
  // Do something here to return a string, struct, dictionary, array, or whatever.
});

We've included a few common formatters in each library to cover some common libraries. We'll be adding more as time goes by, and, of course, PRs are always extremely welcome to add more!

But what about my language?

We wanted to get these three client libraries out the door and into your hands as soon as possible. But don't worry; Go and Ruby versions are already in the works and will be released soon.

We'll continue to evolve these new libraries, keeping the API roughly in sync between all languages, but allowing each to feel as natural as possible for each language.

Aug 23 2018

PHP continues its steady march forward, and today marks the release of the latest version, PHP 7.3.

It also marks its release on Platform.sh, and our first holiday gift to you this season.

So what's new?

PHP 7.3 brings continued incremental performance improvements to the language. It's not as big as the jump to 7.0 was (few things can be), but the same code running under PHP 7.3 should be a bit faster than on 7.2.

While there are no earth-shattering additions in this version, there are a few nice pluses, like a handful of new utility functions such as is_countable(), array_key_first(), and array_key_last() (all of which are fairly self-explanatory). What's most exciting for language nerds who follow PHP's development (like yours truly)? Trailing commas in function calls are now legal, just as they have been in array definitions for years. Heredoc and Nowdoc syntax are now also more forgiving, allowing for more nicely formatted multiline strings. And the JSON utilities can now be set to throw exceptions, like most newer functionality, allowing error handling to be more consistent throughout the application.

OK, it's not going to change the world, but it's still nice.

There are also a number of deprecations around edge cases to flag behaviors that are expected to go away in PHP 8.0. (Yes, we did just say that number.) See the Changelog for the full list of changes and fixes.

Cool, so how do I do it?

As always on Platform.sh, it's just a YAML tweak away. In your application configuration file (not on the master branch), change your application type to php:7.3, like so:

type: php:7.3

That's it. When next you push on that branch, you'll get the PHP 7.3 container. Test it out, and make sure everything is working (it should be fine), then merge back to master when you're ready.

Enjoy the latest and greatest PHP has to offer—any day of the week!

Aug 09 2018

The cicada is a flying insect found worldwide. It's loud but not particularly threatening. Its most famous attribute, though, is that many species of cicada (particularly in North America) are periodic, only emerging every 13 or 17 years depending on the species. When it does emerge, a huge brood reaches maturity all at once, mates, lays eggs, and then dies. The eggs hatch and the offspring spend the next 13 or 17 years living deep underground and burrowing before repeating the cycle again.

But why 13 and 17 years? That's a rather odd set of numbers... And that's actually the point. Those lifespans are both prime numbers, that is, they are divisible only by themselves and one. Many cicada predators also have multi-year life cycles rather than emerging every year. So what are the odds of a large number of cicada predators emerging in the same year as a large number of cicadas?

Very low, in fact. That's the point. Because a prime number is only divisible by 1 and itself, a smaller number sequence will overlap with it only when those two are multiplied. That is, a 4 year cycle predator and a 13 year cicada will only emerge at the same time every 4 * 13 = 52 years. If the cicada emerged every 12 years, however, the 4 year predator would have a veritable buffet every third generation and the cicadas would have a bad time every time.

Over time, evolutionary pressure weeded out the many-common-divisor periodic species of cicada, leaving only those that have overlapping generations every year and those that have a huge all-at-once generation on a prime-number schedule.

What can we learn from the little cicada? If you have two repeating events, and you want them to happen at the same time as rarely as possible, have them repeat on prime numbers.

But what does that have to do with web development?

A website frequently has background tasks that it needs to run from time to time; sometimes every few minutes, sometimes every few hours, sometimes every few days. Most often these are run using a cron task.

Generally speaking it's a bad idea to run more than one cron job at once. Even if they don't interfere with each other, they may use a lot of CPU, and you don't want them to slam the system all at once. In fact, on Platform.sh we don't allow that to happen: if a cron task tries to start while another is already running, we force the new one to pause and wait for the first to complete.

That can sometimes cause issues if, say, a nightly backup process wants to start while a routine every-few-minutes cron task is running. The snapshot will start but block waiting for the other cron task to finish, which, if it's a long-running task, could result in a brief period of site outage while the snapshot waits its turn.

Avoiding predatory cron jobs

So how do we make sure one cron job runs at the same time as another as little as possible? The same way cicadas avoid predators: Prime numbers!

More specifically, say we have a cron task that runs normal system maintenance every 20 minutes. Then we have an import process that periodically reads data from an external system every 10 minutes, and another that runs every 5 minutes to send out pending emails.

The result will be that every 10 minutes we have two cron tasks competing to run at the same time, and every 20 minutes we have three cron tasks competing. That's no good at all!

Instead, let's set the system maintenance to run every 23 minutes, the import to run every 11 minutes, and the email runner every 7 minutes. It's almost the same schedule, but because the numbers are prime they will only very rarely overlap. (Every 77 minutes in the shortest case.) That spreads the load out far better and avoids any process blocking on another.
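
The arithmetic here is just least common multiples. A quick sketch (plain Python, nothing Platform.sh-specific) shows how much the prime schedule helps:

```python
from math import gcd

def overlap_minutes(a, b):
    """Minutes between simultaneous runs of two tasks with periods a and b:
    the least common multiple of the two periods."""
    return a * b // gcd(a, b)

# Round-number schedule: constant collisions.
print(overlap_minutes(5, 10))    # 10  (email vs. import, every 10 minutes)
print(overlap_minutes(10, 20))   # 20  (import vs. maintenance)

# Prime schedule: collisions become rare.
print(overlap_minutes(7, 11))    # 77  (email vs. import)
print(overlap_minutes(7, 23))    # 161 (email vs. maintenance)
print(overlap_minutes(11, 23))   # 253 (import vs. maintenance)
```

Because every pair of prime periods shares no common factor, the overlap is always the full product of the two periods, which is as rare as it can possibly be.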

Now if we want to add a nightly backup, we can have it run at, say, 17 minutes past 4:00 am. It will be extremely rare for the other cron tasks to hit at the 17 minute mark exactly, so our snapshot will almost never need to block on another cron task and our site won't freeze while it waits.

Isn't it nice when bugs end up helping your software run faster?

Aug 07 2018

Microservices have been all the rage for the past several years. They're the new way to make applications scalable, robust, and break down the old silos that kept different layers of an application at odds with each other.

But let's not pretend they don't have costs of their own. They do. And, in fact, they are frequently, perhaps most of the time, not the right choice. There are, however, other options besides one monolith to rule them all and microservice-all-the-things.

What is a microservice?

As usual, let's start with the canonical source of human knowledge, Wikipedia:

"There is no industry consensus yet regarding the properties of microservices, and an official definition is missing as well."

Well that was helpful.

Still, there are common attributes that tend to typify a microservice design:

  • Single-purpose components
  • Linked together over a non-shared medium (usually a network with HTTP or similar, but technically inter-process communication would qualify)
  • Maintained by separate teams
  • And released (or replaced) on their own, independent schedule

The separate teams part is often overlooked, but shouldn't be. The advantages of the microservice approach make it clear why:

  • Allows the use of different languages and tools for different services (PHP/MongoDB for one and Node/MySQL for another, for instance.)
  • Allows small, interdisciplinary teams to manage targeted components (that is, the team has one coder, one UI person, and one DB monkey rather than having a team of coders, a team of UI people, and a team of DB monkeys)
  • Allows different components to evolve and scale independently
  • Encourages strong separation of concerns

Most of those benefits tie closely to Conway's Law:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

A microservice approach works best when you have discrete teams that can view each other as customers or vendors, despite being within the same organization. And if you're in an organization where that's the case then microservices are definitely an approach to consider.

However, as with any architecture, there are tradeoffs. Microservices have costs:

  • Adding network services to your system introduces the network as a point of failure.
  • Points of failure should always be plural, as a network, even a virtual and containerized one, has many, many points of failure.
  • The network will always be 10x slower than calling a function, even a virtual network. If you're using a shared-nothing framework like PHP you have to factor in the process startup cost of every microservice.
  • If you need to move some logic from one microservice to another it's 10x harder than from one library to another within an application.
  • You need to staff multiple interdisciplinary teams.
  • Teams need to coordinate carefully to avoid breaking any informal APIs
  • APIs between services tend to be coarse-grained
  • Needing new information from another team involves a much longer turnaround time than just accessing a database.

Or, more simply: Microservices add complexity. A lot of complexity. That means a lot more places where things can go wrong. A common refrain from microservice skeptics (with whom I agree) is

"if one of your microservices going down means the others don't work, you don't have a microservice; you have a distributed monolith."

To be sure, that doesn't mean you shouldn't use microservices. Sometimes that is the right approach to a problem. However, the scale at which that's the case is considerably higher than most people realize.

What's the alternative?

Fortunately, there are other options than the extremes of a single monolith and a large set of separate applications that happen to talk to each other. There's no formal term for these yet, but I will refer to them as "clustered applications".

A clustered application:

  • Is maintained by a single interdisciplinary team
  • Is split into discrete components that run as their own processes, possibly in separate containers
  • Deploys as a single unit
  • May be in multiple languages but usually uses a single language
  • May share its datastore(s) between processes

This "in between" model has been with us for a very long time. The simplest example is also the oldest: cron tasks. Especially in the PHP world, many applications have had a separate cron process from their web request/response process for literally decades. The web process exists as, essentially, a monolith, but any tasks that can be pushed off to "later" get saved for later. The cron process, which could share some, all, or none of the same code, takes care of the "later". That could include sending emails, maintenance tasks, refreshing 3rd party data, and anything else that doesn't have to happen immediately upon a user request for the response to be generated.

Moving up a level from cron are queue workers. Again, the idea is to split off any tasks that do not absolutely need to be completed before a response can be generated and push them to "later". In the case of a queue worker "later" is generally sooner than with a cron job but that's not guaranteed. The workers could be part and parcel of the application, or they could be a stand-alone application in the same language, or they could be in an entirely different language. A PHP application with a Node.js worker is one common pattern, but it could really be any combination.

Another variant is to make an "Admin" area of a site a separate application from the front-end. It would still be working on the same database, but it's possible then to have two entirely separate user pools, two different sets of access control, two different caching configurations, etc. Often the admin could be built as just an API with a single-page-app frontend (since all users will be authenticated with a known set of browser characteristics and no need for SEO) while the public-facing application produces straight HTML for better performance, scalability, cacheability, accessibility, and SEO.

Similarly, one could make a website in Django but build a partner REST API in a separate application, possibly in Go to squeeze the last drop of performance out of your system.

There's an important commonality to all of these examples: Any given web request runs through exactly one of them at a time. That helps to avoid the main pitfall of microservices, which is adding network requests to every web request. The fewer internal IO calls you have the better; just ask anyone who's complained about an application making too many SQL queries per request. The boundaries where it's reasonable to "cut" an application into multiple clustered services are anywhere there is, or can be, an asynchronous boundary.

There is still additional complexity overhead beyond a traditional monolith: while an individual request only needs one working service and there's only one team to coordinate, there are still multiple services to manage. The communication paths between them are still points of failure, even if they're much more performance tolerant. There could also be an unpredictable delay between actions; an hourly cron could run 1 minute or 59 minutes after the web request that gave it an email to send. A queue could fill up with lots of traffic. Queues are not always perfectly reliable.

Still, that cost is lower than the overhead of full separate-team microservices while offering many (but not all) of the benefits in terms of separation of concerns and allowing different parts of the system to scale and evolve mostly independently. (You can always throw more worker processes at the queue even if you don't need more resources for web requests.) It's a model well worth considering before diving into microservices.

How do I do either of these on Platform.sh?

I'm so glad you asked! Platform.sh is quite capable of supporting both models. While our CPO might yell at me for this, I would say that if you want to do "microservices" you need multiple projects.

Each microservice is supposed to have its own team, its own datastore, its own release cycle, etc. Doing that in a single project, with a single Git repository, is rather counter to that design. If your system is to be built with 4 microservices, then that's 4 projects; however, bear in mind that's a logical separation. Since they're all on Platform.sh and presumably in the same region, they're still physically located in the same data center. The latency between them shouldn't be noticeably different than if they were in the same project.

Clustered applications, though, are where Platform.sh especially shines. Every project can have multiple applications in a single project/Git repository, either in the same language or in different languages. They can share the same data store or not.

To use the same codebase for both the web front-end and a background worker (which is very common), we support the ability to spin up the same built application image as a separate worker container. Each container is the same codebase but can have different disk configuration, different environment variables, and start a different process. However, because they all run the same code base it's only a single code base to maintain, a single set of unit tests to write, etc.
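
As a sketch, a worker definition in the application's YAML configuration looks something like this. The worker name ("queue") and its start command are hypothetical; the keys follow the shape of Platform.sh's worker configuration:

```yaml
# Illustrative fragment of an application configuration file.
# The worker name and start command are hypothetical.
workers:
    queue:
        commands:
            start: php worker.php
```

The worker container is spun up from the same built image as the web container; only the command it runs (and any per-worker overrides) differs.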

And of course cron tasks are available on every app container for all the things cron tasks are good for.

Within a clustered application processes will usually communicate either by sharing a database (be it MariaDB, PostgreSQL, or MongoDB) or through a queue server, for which we offer RabbitMQ.

Mixing and matching is also entirely possible. In a past life (in the bad old days before Platform.sh existed) I built a customer site that consisted of an admin curation tool built in Drupal 7 that pulled data in from a 3rd party, allowed users to process it, and then exported pre-formatted JSON to Elasticsearch. That exporting was done via a cron job, however, to avoid blocking the UI. A Silex application then served a read-only API off of the data in Elasticsearch, far faster than a Drupal request could possibly have done.

Were I building that system today it would make a perfect case for a multi-app project: A Drupal app container, a MySQL service, an Elasticsearch service, and a Silex app container.

Please code responsibly

There are always tradeoffs in different software design decisions. Sometimes the extra management, performance, and complexity overhead of microservices is worth it. Sometimes it's... not, and a tried-and-true monolith is the most effective solution.

Or maybe there's an in-between that will get you a better balance between complexity, performance, and scalability. Sometimes all you need is "just" a clustered application.

Pick the approach that fits your needs best, not the one that fits the marketing zeitgeist best. Don't worry, we can handle all of them.

Jun 25 2018

PHP is a rapidly-evolving language. Recent years have seen the adoption of a very clear and mostly consistent release process with a new version coming out every fall, with previous versions falling to maintenance support, then security-only support, then unsupported on a regular, predictable schedule.

That's great for those who appreciate the new language features that continually come down the pipeline, and the performance improvements they bring. It does mean it's important to stay up to date, though. PHP 5.6, the last release of the PHP 5 series, got an extra year of security support for people that would have a hard time updating to the new PHP 7, but even that expires at the end of this year.

In fact, as of the end of this year the oldest supported version of PHP will be PHP 7.1. Yeah, really.

Which begs the question... Do you know what your PHP version is?

How to check

On Platform.sh it's easy. Just check your application configuration file for the type key. If it looks like this:

type: php:7.1

or this:

type: php:7.2

Then you're good! If it says php:7.0 then you should really start planning your update to 7.2. If it says anything older... well, you're missing out.

What happens on 31 December 2018?

Aside from uncorking some champagne, nothing. Platform.sh still offers container images all the way back to PHP 5.4, and we have no immediate plans to drop those images any time soon. However, they are completely unsupported. If there's a bug in them, no one is going to fix it. In some cases they're still built using older versions of Debian, so other related software is out of date as well. We won't be updating those.

If security vulnerabilities are found in PHP versions older than 7.1 no one is going to be fixing them. There are, in fact, known security holes in older versions of PHP that are no longer supported, and thus have never been fixed. That's normal and it's what unsupported means. Over time no doubt other issues will be found in PHP 5.6 and 7.0 that will also not be fixed as they are no longer supported.

If you want to keep your site secure, it's time to upgrade.

Why else should I upgrade?

Glad you asked! Security is a good reason to keep your software up to date, but it's far from the only reason. If you're still running PHP 5.x, then the even bigger reason is speed.

PHP 7.anything blows PHP 5 out of the water on performance. Benchmarks from dozens of companies have shown over and over again that twice the requests/second and half the memory usage is a normal improvement; some code bases can see even more. Rasmus Lerdorf (creator of PHP) publishes benchmarks periodically. His most recent, from earlier this year, shows PHP 7 smoking PHP 5 on WordPress performance, specifically:

WordPress is twice as fast on PHP 7 as on PHP 5.

WordPress uses a tiny fraction of the memory on PHP 7 that it does on PHP 5.

Other benchmarks show similar (although not quite as dramatic) impact on Drupal, Magento, Symfony, Moodle, and various other systems.

It's rare in tech that any "silver bullet" appears, but upgrading from PHP 5 to PHP 7 is about as close to a performance silver bullet as you're ever going to see.

Of course, there's ample new functionality available for developers, too:

PHP 7.0 brought scalar type hints, return types, anonymous classes, and vastly improved error handling.

PHP 7.1 brought the void and iterable types, nullable types, and multi-catch exceptions.

PHP 7.2 brought the strongest security and encryption library available to the core language, along with even more improvements to the type system.

For a full rundown of the best parts of PHP 7, see my presentation from this year's php[tek] conference.

And of course PHP 7.3 is coming out this fall. (We'll support it when it comes out, too.)

OK, so how do I upgrade?

It's Platform.sh, which means upgrading is easy. Just change your type key in your application configuration to php:7.2, and push. You're done.

Well, you really should test it in a branch first. Push that change to a branch and give it a whirl. Assuming everything is good, click Merge.

There's tooling available to help audit your code for future compatibility, too. For instance, the PHPCompatibility extension for PHP_CodeSniffer can flag most places where you may want to tweak your code to keep it compatible with newer versions. You can run it locally over your code base to ensure you're ready to update, then update.

If you're using Drupal, WordPress, Symfony, Zend, Laravel, or most other major systems, their latest versions are already tested and stable on PHP 7.2. That makes upgrading even easier. In fact, several systems have already made PHP 7.1 a minimum requirement for their latest version, which gives both their developers and you more power in your PHP.

Enjoy your faster, more robust PHP! Don't get left behind on unsupported versions, especially when the benefits of upgrading are so high and the cost is so low. And don't forget to plan for upgrading to PHP 7.3 later this year. It should be just as easy then, too.

Apr 25 2018

The Drupal project today released another security update to Drupal 7 and 8 core, SA-CORE-2018-004. It is largely a refinement of the previous fix released for SA-CORE-2018-002 a few weeks ago, which introduced a Drupal-specific firewall to filter incoming requests. The new patch tightens the firewall further, preventing newly-discovered ways of getting around the filters, as well as correcting some deeper issues in Drupal itself.

We previously added the same logic to our own network-wide WAF to address SA-CORE-2018-002. With the latest release we've updated our WAF rules to match Drupal's updates, and the new code is rolling out to all projects and regions as we speak.

The upshot?

  1. You really need to update Drupal to 7.59 or 8.5.3 as soon as possible. We believe that some of the attack vectors fixed in the latest patch cannot be blocked by a WAF. See our earlier post for quick and easy instructions to update your Drupal 7 or 8 sites on Platform.sh in just a few minutes.

  2. Still, most of the attack vectors fixed in the latest release are covered by the WAF. That should help keep your site safe from most attacks until you can update. But please, update early and often.

Stay safe out there on the Internet!

Apr 04 2018

A key part of Platform.sh's benefit comes from its integrated build and deploy hooks. Deploying a new version of your site or application rarely means simply dumping what's in Git onto a server anymore. Platform.sh was built from the ground up to let you execute whatever commands you need to "build" your application — turning what's in Git into what's actually on the server — and then to "deploy" the application — cleanup tasks like database migrations that should be run before the site is opened to visitors again.

There's a caveat there, however. Some deploy tasks need to block the site from new visitors until they complete; think updating the database schema, for instance. Others may not really need exclusive access to the site, but they still get it. That keeps the site unresponsive for critical seconds until those tasks complete.

So, let's fix that. We've now added a third hook, post_deploy. It works pretty much as you'd expect. You can do all the same things in it that you can do with a deploy hook, but it runs after the site is reopened to the world to accept new requests. Any tasks that don't need exclusive access to the database can be moved there, keeping the site up and responsive as much as possible while allowing for more robust and flexible automated tasks.

For example, the following configuration would run any pending database updates as part of the deploy hook but then import new content in the post_deploy hook. The new content will become available as soon as possible but the site will still be up and running while it's being updated. Once the import is done we'll also clear the cache a second time to ensure the new content is visible to the next request.

    deploy: |
        set -e
        # blocking tasks, such as database schema updates, go here
    post_deploy: |
        set -e
        migrate_content.php import/
        # clear caches again here so the new content is visible

What's "safe" to move to the post_deploy hook? That's up to you. What does or does not need an exclusive database lock will vary by site. Sometimes a cache clear is safe to do post-open, other times not. You get to make that determination for your application.

See the hook documentation for more information, and enjoy faster deploy tasks.

Apr 04 2018

We've offered customers the ability to subscribe to updates and notices about our service for a long time via our status page. That's great if you want to know when we have maintenance planned, but as we've grown and added new regions to our network it's become apparent that not all customers want to know what's happening on every part of it. (Who knew?)

For that reason we've now added support for separate notification channels on our status service. When creating a new subscription or editing your existing one you should see a screen something like this:

A list of available regions to select (Screenshot).

That will let you select just the regions and message types you care about.  That way, you can safely ignore maintenance windows in the Netherlands you don't care about for your Australian site.  (Of course, if you really do care what's happening to servers on the other side of the world that's fine by us. We don't judge.)

If you aren't already subscribed to get status notifications, this is a good time to do it. And while you're at it, make sure you have health notifications set up for your specific project, too.

Mar 28 2018
Mar 28

Platform.sh customers should visit Safe from DrupalGeddon II aka SA-CORE-2018-002 for the specific steps we took to protect all our Drupal instances.

Earlier today, a critical remote code execution vulnerability in Drupal 6, 7, and 8 was disclosed. This highly-critical issue affects all Drupal 7.x and 8.x sites and most Drupal 6.x sites. You should immediately update any Drupal site you have to version 8.5.1, 8.4.6, or 7.58, as appropriate.

How do I know if I am affected?

We are currently not aware of exploits of this vulnerability in the wild but this will undoubtedly change in the next few hours. Writing an exploit for this is trivial and you should expect automated internet-wide attacks before the day is out.

You should take immediate steps to protect yourself. This is as bad as or worse than the previous highly-critical vulnerability SA-CORE-2014-005, which wreaked havoc three and a half years ago, affecting more than 12 million websites.

(Like, seriously, if you are reading this and you are not on Platform.sh or another provider that has put a platform-level mitigation in place, go update your sites and then come back and finish reading. Please. Platform.sh customers, see below for how to quickly update your site.)

Where does the vulnerability come from?

The issue is in Drupal's handling of HTTP request parameters that contain certain special characters. These characters have special meaning in various places in Drupal, which if misinterpreted could lead to unexpected code paths being executed. The solution in the latest patch is to filter out such values before passing them off to application code.
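
As a simplified sketch of the idea (not Drupal's actual implementation), the sanitizer recursively drops any incoming parameter whose key begins with a special marker character before the application ever sees it:

```python
def sanitize(data):
    """Recursively drop input keys that start with '#', Drupal's render
    array marker. A simplified sketch of the filtering strategy, not
    Drupal's actual request sanitizer."""
    if not isinstance(data, dict):
        return data
    return {
        key: sanitize(value)
        for key, value in data.items()
        if not str(key).startswith('#')
    }

params = {'name': 'alice', '#post_render': ['exec'],
          'nested': {'#markup': 'x', 'ok': 1}}
print(sanitize(params))  # {'name': 'alice', 'nested': {'ok': 1}}
```

Because the filtering depends only on the shape of the request, not on application state, the same check can run in a firewall in front of the application.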

Fortunately, that same strategy can be implemented at the network layer. We have therefore applied the same logic to our Web Application Firewall to reject requests containing such values, and deployed it across all projects in all regions, both Professional and Enterprise. That should protect all Drupal and Backdrop installations running anywhere on Platform.sh until they are upgraded.
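The filtering idea is simple enough to sketch. The following Python snippet is a loose illustration of the concept (not the actual Drupal patch or our WAF rule): it recursively drops any request key beginning with '#', the character Drupal uses to mark internal render-array properties.

```python
def sanitize_input(data):
    """Return a copy of `data` with any key starting with '#' removed.

    Only nested dicts are handled in this sketch; other values pass through.
    """
    if not isinstance(data, dict):
        return data
    return {
        key: sanitize_input(value)
        for key, value in data.items()
        if not (isinstance(key, str) and key.startswith('#'))
    }
```

A real mitigation must of course cover every input channel (query string, POST body, cookies), which is exactly why doing it once at the network layer is attractive.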

What to do?

You must update any and all Drupal 6.x, 7.x, and 8.x or Backdrop CMS instances, or verify that your hosting provider has put in place an automated mitigation strategy for this vulnerability. (All Platform.sh clients are safe; our new WAF now detects and blocks all variants of this attack.) Even if your hosting provider has a mitigation strategy in place, you should update immediately anyway.

Drupal 6.x is no longer maintained and unlike Drupal 7.x and 8.x it does not support automated updates. Third-party support providers may provide a patch but you should make plans to upgrade from Drupal 6 to Drupal 8 as soon as possible.

Hopefully you are using Composer for your Drupal 7.x and 8.x, or Drush make for Drupal 7.x, as is the default with Platform.sh installations.

To upgrade Drupal via Composer

To update your Drupal instance and test that nothing breaks, follow this simple procedure:

Verify that your composer.json file does not lock down Drupal core to a specific minor version; it should be something like "drupal/core": "~8.0". Then run:

git checkout -b security_update
composer update

Make sure that Drupal core was updated to 8.5.1 or higher (check composer.lock using git diff). Commit and push your changes:

git commit -am 'fix for SA-CORE-2018-02' && git push

On Platform.sh you can test that everything is fine on your automatically-generated staging environment, then merge to master to put it into production.

If you do not use Platform.sh, you should test this either locally or on your testing server, and follow your normal procedure to update your live sites.

To upgrade Drupal using Drush Make

If you are using the "Drush Make" style of dependency management, again, make sure you are not locked down to a vulnerable version such as:

projects[drupal][version] = 7.57

If it is, bump it up to 7.58. Then make a branch and update it:

git checkout -b security_update
drush pm-update

Commit the changes and push the result to Platform.sh for testing. Once you're satisfied nothing is broken, merge back to master and deploy.

To upgrade Drupal if you're checking Drupal core into your repository

If you're running a "vanilla" Drupal setup, with all of Drupal checked into Git, the easiest way to upgrade is using drush.

In your local environment, go to your Drupal document root and run:

git checkout -b security_update
drush pm-update drupal

Commit the changes and push the result to Platform.sh for testing. Once you're satisfied nothing is broken, merge back to master and deploy. Afterward, look into migrating your site to a dependency-managed configuration, preferably Composer. It will make maintenance far easier and more robust in the future.

As a reminder, your Platform.sh instances are not vulnerable, as they are protected by our WAF. You should still apply the fixes ASAP.

Mar 28 2018
Mar 28

An hour ago, the SA-CORE-2018-002 critical Drupal vulnerability was disclosed. It was announced a week in advance in PSA-2018-001, which allowed us to gather our technical team and make sure we could develop and deploy a mitigation to all our clients immediately as the issue was made known.

If you're not running on Platform.sh, please stop reading this post and go update your Drupal site to version 8.5.1 / 8.4.6 / 8.3.9 / 7.58 right now. We're serious; upgrade first and ask questions later.

If you are running on Platform.sh? You're safe and can continue reading... then upgrade.

The vulnerability (also referred to as CVE-2018-7600) affects the vast majority of Drupal 6.x, 7.x, and 8.x sites and allows arbitrary remote code execution, letting anonymous remote users take full control of any affected Drupal site prior to 8.5.1 / 8.4.6 / 8.3.9 / 7.58.

The same issue is present in Backdrop CMS installations prior to 1.9.3.

If your Drupal site is not hosted on Platform.sh, we encourage you to immediately update all your Drupal sites to 8.5.1 / 7.58 or to take your site offline. This is serious and trivially exploitable. You can expect automated attacks to appear within hours at most. If you are not on Platform.sh or another provider that has implemented a mitigation, your site will be hacked. This is as critical as the notorious "DrupalGeddon" episode from three and a half years ago.

If you are hosting on Platform.sh, we are pleased to announce that all Drupal sites on all our regions and all our plans are automatically safe from this attack. Platform.sh has many security layers that make attacks such as this much harder than on comparable services, starting from our read-only hosts and read-only containers, through our auditable and reproducible build chain, to our static-analysis-based protective block.

In response to this latest vulnerability, we've taken two important steps:

  1. We've added a new rule to our Web Application Firewall (WAF) on all regions and on all Enterprise clusters that detects and blocks requests trying to exploit this latest attack vector, even if your site hasn't been updated. (But still, please update.)

  2. We are adding a check to our protective block to prevent deployment of affected Drupal versions. If you try to push an insecure Drupal version our system will flag it for you and warn you that you are pushing known-insecure code. Please update your code base as soon as possible.

As a Platform.sh client, if you need any further assistance or want more information about the vulnerability, how it may affect you, and our mitigation strategy, don't hesitate to contact support. We have set our WAF to an especially aggressive stance for now, and this may result in some users seeing a "400 Bad Request" message in some edge cases for legitimate traffic. If you experience this, please contact our support immediately; they will be able to help.

Jan 25 2018
Jan 25

We always aim to offer our customers the best experience possible, with the tools they want to use. Usually that means expanding the platforms and languages we support (which now stands at six languages and counting), but occasionally it means dropping tools that are not being used so that we can focus resources on those that are.

For that reason, we will be dropping support for the HHVM runtime on 1 March 2018.

HHVM began life at Facebook as a faster, more robust PHP runtime. Although it never quite reached 100% PHP compatibility it got extremely close, and did see some success and buy-in outside of Facebook itself. Its most notable achievement, however, was providing PHP itself with much-needed competition, which in turn spurred the work that resulted in the massive performance improvements of PHP 7.

Similarly, Facebook's "PHP extended" language, Hack (which ran on HHVM), has seen only limited use outside of Facebook itself but served as a test bed and proving ground for many improvements and features that have since made their way into PHP itself. Like HHVM itself, though, Hack never achieved critical mass in the marketplace outside of Facebook.

Back in September, Facebook announced that they would be continuing development of Hack as its own language, no longer aiming for PHP compatibility. Essentially, Hack/HHVM will be a "full fork" of the PHP language, going its own way rather than trying to be a drop-in replacement for PHP.

Platform.sh has offered HHVM support as a PHP alternative for several years, although, as in the broader market, it didn't see much use, and with the release of PHP 7 the performance advantage of HHVM basically disappeared, leading people to migrate back to vanilla PHP 7. Looking at our own statistics, in fact, we recently found that HHVM was virtually unused on our system.

"Give the people what they want" also means not giving them what they clearly don't want, and the PHP market clearly doesn't want HHVM at this point. We will therefore be dropping support for it on 1 March. If Hack/HHVM develops its own market in the future and there's demand for it we may look into re-adding it at that time, but we'll wait and see.

Good night, sweet HHVM, and may a herd of ElePHPants sing thee to thy REST!

Dec 28 2017
Dec 28

Platform.sh allows users to create a byte-for-byte snapshot of any running environment, production or otherwise, at any time, with a single button click or command line directive.

That's great for one-off use, like preparing to deploy a new version or running a large batch process, but what about routine disaster-recovery backups? Can we do those?

Believe it or not, it's possible to automate them yourself! And it's only a three-step process.

The basic idea is that the Platform.sh CLI can be triggered from any automation tool you'd like... including cron from within the application container. It just needs an authentication token available in the environment.

Step 1: Get a token

Create an authentication token for your user or a dedicated automation user. That's easily done through the UI.

Set that token as a variable on your project, like so:

platform project:variable:set env:PLATFORMSH_CLI_TOKEN your_token_value

Step 2: Install the CLI

The CLI can be installed as part of a build hook within your project. Simply add the following line to your build hook:

curl -sS | php

Now the CLI will be available in cron hooks, the deploy hook, or when logging in via SSH. It will use the token you provided a moment ago, and will automatically pick up the project and environment name from the existing environment variables.

Step 3: Snapshot on cron

You can now add a new cron entry to your .platform.app.yaml file, like so:

crons:
    snapshot:
        spec: '0 5 * * *'
        cmd: |
            if [ "$PLATFORM_BRANCH" = master ]; then
                platform snapshot:create --yes --no-wait
            fi

That will run the cmd once a day at 5 am UTC. (Adjust for whenever your site's low-traffic time is.) Then, if and only if it's running on the master environment (production), the platform snapshot:create command will run and trigger a snapshot, just as if you'd run the command yourself. Poof, done.

Of note, though, are the --yes and --no-wait flags. The first skips any user interaction, since the command is running from cron. The second is extra important: it tells the CLI not to block on the snapshot being created. If you forget it, cron will block on the snapshot, which means so will any deploy you happen to trigger. That can result in extra-long deploys and site downtime. You don't want that, we don't want that, so make sure to include --no-wait.

That's it, that's all, you're done! Rejoice in daily automated backups of your production environment.

Dec 20 2017
Dec 20

Time flies. It's been quite a year for Platform.sh as our product continues to improve. One of the great things about a managed services product is that it can continually improve without you even realizing it. The sign of a successful product feature is that it feels like it's always been there. Who can imagine life without it?

Let's take a look at what we've improved just in the last 12 months...

January opened with support for HTTP/2 on all projects. HTTP/2 changes the way browsers and servers communicate, making it faster, more streamlined, and better tailored to modern, asset-heavy web sites. HTTP/2 "just works" automatically as long as you're using a reasonably modern browser and HTTPS.

And as of April, you're using HTTPS. Courtesy of Let's Encrypt, you now get a free, automatic SSL certificate provisioned for every environment. No one should have to think about HTTPS in 2017. It's just a part of the package.

April also saw the launch of our tiered CDN for Enterprise. The Global CDN combines a flexible, high-feature CDN for dynamic pages with a low-cost, high-bandwidth CDN for static assets. That offers the best of both worlds for sites that want the best performance for the least cost.

We've also continued to expand our available services. We kicked off the year with support for persistent Redis as a key/value store rather than just as a cache server. March saw the addition of InfluxDB, a popular time-series data service for recording time-based data. In June, we added support for Memcached in case Redis doesn't do it for you.

We also beefed up the functionality of our existing services, adding support for multiple-core Solr configurations and multi-database MySQL configurations. We even now support regular expressions in the router for more fine-grained cookie control.

And of course we've kept up with the latest releases of your favorite languages, be that Python, Ruby, NodeJS, or perennial favorite PHP 7.2. We even added preliminary support for Go and Java, both of which are in beta now. (Interested in trying them out? Please reach out to us!)

August included support for arbitrary worker processes in their own container. That allows an application to easily spin up a background task to handle queue processing, image generation, or other out-of-band tasks with just a few lines of YAML with no impact on production responsiveness.

As of October, we've added health notification support for all projects. At the moment they only cover disk usage, but in time will expand to other health notices. (If you haven't configured them on your project yet we strongly recommend you do so.)

We're also closing out the year with new support for GitLab, as well as more flexible control over TLS and Client TLS, plus a few other changes that line us up for even bigger things in the year to come.

Last but not least, all of that goodness is available down under as of July with our new Sydney region for Professional.

And that's all been just this year! What do we have coming in 2018 to further redefine "modern hosting"?

You'll just have to join us in 2018 to find out...

Dec 11 2017
Dec 11

PHP 7.2 introduced a neat new feature called "type widening". In short, it allows methods that inherit from a parent class or interface to be more liberal in what they accept (parameters) and more strict in what they return (return values) than their parent. In practice they can only do so by removing a type hint (for parameters) or adding one where one didn't exist before (return values), not for subclasses of a parameter. (The reasons for that are largely implementation details far too nerdy for us to go into here.) Still, it's a nice enhancement and in many ways makes PHP 7.2 more compatible with earlier, less-typed versions of PHP than 7.0 or 7.1 were.

There's a catch, though: because the PHP engine is paying more attention to parameter types than it used to, it now rejects more invalid uses than it used to. That's historically been one of the main sources of incompatibility between PHP versions: code that was technically wrong, but that the engine didn't care about, stops working when the engine starts caring in a new version. Type widening is PHP 7.2's instance of that change.

Consider this code:

interface StuffDoer {
  public function doStuff();
}

class A implements StuffDoer {
  public function doStuff(StuffDoer $x = null) {}
}

This is nominally valid, since A allows zero parameters in doStuff(), which is thus compatible with the StuffDoer interface.

Now consider this code:

class A {
  public function doStuff(StuffDoer $x = null) {}
}

class B extends A {
  public function doStuff() {}
}

While it seems at first like it makes sense, it's still invalid. We know that B is not going to do anything with the optional $x parameter, so why bother defining it? While that intuitively seems logical, the PHP engine disagrees and insists on the parameter being defined in the child class, even though you and I know it will never be used. The reason is that a further child class, say C extends B, could try to re-add an optional parameter of another type; that would technically be compatible with B, but could never be compatible with A. So, yeah, let's not do that.

But what happens if you combine them?

interface StuffDoer {
  public function doStuff();
}

class A implements StuffDoer {
  public function doStuff(StuffDoer $x = null) {}
}

class B extends A {
  public function doStuff() {}
}

There are two possible ways to think about this code.

  1. B::doStuff() implements StuffDoer::doStuff(), which has no parameters, so everything is fine.
  2. B::doStuff() extends A::doStuff(), which has a parameter. You can't leave off a parameter, so that is not cool.

Prior to PHP 7.2, the engine implicitly went with interpretation 1. The code ran fine. As of PHP 7.2.0, the engine now uses interpretation 2. It has to, because it's now being more careful about when you're allowed to drop a type on a parameter in order to support type widening. So this wrong-but-working code now causes a fatal error. Oopsies.

Fortunately, the quick fix is super easy: just be explicit with the parameter, even if you know you're not going to use it:

interface StuffDoer {
  public function doStuff();
}

class A implements StuffDoer {
  public function doStuff(StuffDoer $x = null) {}
}

class B extends A {
  public function doStuff(StuffDoer $x = null) {}
}

The more robust fix is conceptually simpler: Don't do that. While adding optional parameters to a method technically doesn't violate the letter of an interface, it does violate the spirit of the interface. The method is now behaving differently, at least sometimes, and so is not a true drop-in implementation of the interface.

If you find your code is doing that sort of stealth interface extension, it's probably time to think about refactoring it. As a stopgap, though, you should be able to just be more explicit about the parameters in child classes to work around the fatal error.

Enjoy your PHP 7.2!

Nov 03 2017
Nov 03

Transport Layer Security (TLS) is the encryption protocol used by all secure websites today. It's the "S" in "HTTPS", which you'll see on virtually all Platform.sh projects (thank you, Let's Encrypt!), and has replaced SSL for that task. For most sites, simply enabling it by default is sufficient to keep a site secure, and that happens automatically in Platform.sh's case. However, in some cases it's helpful to tweak even further.

That's why we're happy to announce that as of today we're rolling out several new TLS-related features for all sites.

TLS version restriction

Like any protocol, TLS is periodically updated with new versions that address security weaknesses in older ones. Almost all browsers today support TLS 1.2, which is the latest version, as well as all earlier versions, including SSL. That means when a browser connects to your site, it will use the most up-to-date version that both the server and browser support. In most cases that's perfectly fine.

If you want to really lock down your site, however, at the cost of blocking a few really old web browsers, you can now set a minimum TLS version that a browser must use; that's a requirement of some security compliance programs, too. If a browser tries to use an older, insecure version of TLS, it will be blocked. Just add the following snippet to a particular route in your routes.yaml file:

    tls:
        min_version: TLSv1.2

And now that domain will reject any HTTPS connection that isn't using at least TLS 1.2.
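To make the negotiation described above concrete, here's a small Python sketch of the logic: both sides use the newest protocol version they share, and a configured minimum simply rejects anything older. The version list and function are illustrative only, not how our edge routers are implemented.

```python
# Oldest to newest; SSLv3 included only to illustrate legacy clients.
PROTOCOL_ORDER = ['SSLv3', 'TLSv1', 'TLSv1.1', 'TLSv1.2']

def negotiate(server_versions, client_versions, min_version=None):
    """Return the newest mutually supported version at or above min_version,
    or None if the handshake would be rejected."""
    shared = set(server_versions) & set(client_versions)
    floor = PROTOCOL_ORDER.index(min_version) if min_version else 0
    candidates = [v for v in PROTOCOL_ORDER[floor:] if v in shared]
    return candidates[-1] if candidates else None
```

So a client that tops out at TLS 1.0 connecting to a route with min_version: TLSv1.2 gets no candidate version at all, which is exactly the rejection behavior the setting provides.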

HSTS support

HTTP Strict Transport Security (HSTS) lets you tell browsers that they should use HTTPS for all requests to a site, even if a stray link happens to use HTTP. You can now enable it by simply adding the following block to a route in routes.yaml:

    tls:
        strict_transport_security:
            enabled: true
Now, that site will send an HSTS header with all requests, telling browsers to enforce HTTPS usage.

Client-authenticated TLS

Often when a site is being used as an API backend for an IoT device or mobile client application, it's necessary to restrict access to selected clients using TLS. This process is called "client-authenticated TLS", and it requires loading custom root TLS certificates on the server, which determine whether or not a particular client request is authorized.

Starting today, it's also possible to provide those custom certificates as part of your route. Once again, it's just a few lines in a route definition:

    tls:
        client_authentication: "require"
        client_certificate_authorities:
            - !include
                type: string
                path: file1.key
            - !include
                type: string
                path: file2.key

More information on all three options is available in our documentation.

Enjoy your more-secure sites!

Oct 12 2017
Oct 12

Platform.sh aims to be a complete solution for web development and hosting, while at the same time offering the flexibility to slot into your own development tools and methodologies. That's a tricky balance to strike at times: providing a complete solution while offering maximum flexibility.

One area where we generally favor flexibility is in your local development environment. You can use whatever local development tools you're most comfortable with: MAMP, WAMP, VirtualBox, Docker, or just a native local install of your needed tools.

For those who fear analysis-paralysis from so many choices, though, we've decided to start reviewing and green-lighting recommended tools that we've found work well. And the first local development tool we can recommend is Lando.

Lando is a Docker-based local development environment that grew out of Kalabox, a VirtualBox-based local dev tool for Drupal. Lando is much more flexible and lighter-weight than a virtual machine-based solution, and has direct support for a variety of systems including Drupal, Laravel, Backdrop, and WordPress. It even goes beyond PHP with support for Node.js, Python, and Ruby as well, just as we do.

Like Platform.sh, Lando is controlled by a YAML configuration file. Although, being Docker-based, it cannot directly mimic how a Platform.sh project works, it can approximate it reasonably well.

We've included a recommended Lando configuration file in our documentation. It's fairly straightforward and easy to adapt for your particular application. It's also possible to synchronize data from a Platform.sh environment to your local Lando instance in just a few short commands. Lando's own documentation provides more details on how to trick out your local system with whatever you may need.

We still believe in allowing you to pick your own development workflow, so you don't have to change anything if you already have a workflow that works for you; if you want our advice, though, Lando is a solid option that should get you up and running locally in minutes, while Platform.sh handles all of your staging and production needs.

Sep 08 2017
Sep 08

One of our most requested features is better built-in monitoring and notification support for user projects. We've just made it easy to monitor your projects' health.

Many of the applications we host have a tendency to use up disk space faster than expected with cache and temp data, and when a disk gets full applications tend to misbehave. And really, no one wants misbehaving applications.

We are therefore happy to report that we now offer health notification checks on all Platform.sh Professional projects, at no extra cost.

Health notifications can be sent via email, Slack, or PagerDuty. Any time available disk space drops below 20% or 10%, or goes back up past those thresholds (because you cleared up space or increased your disk size), a notification is sent to whatever destinations you have configured. For example, to get email notifications you can simply run the following command using the Platform CLI tool:

platform integration:add --type health.email --from-address [email protected] --recipients [email protected] --recipients [email protected]

Then, any time one of those thresholds is crossed, both [email protected] and [email protected] will be emailed. (As a side note, your email address is still webmaster? Neat! That's so retro...)
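The trigger logic amounts to watching for a threshold crossing in either direction. Here's a hypothetical Python sketch of that behavior (the function name and shape are illustrative only, not our actual implementation; the 20% and 10% thresholds come from the description above):

```python
def crossed_threshold(previous_free_pct, current_free_pct):
    """Return the first notification threshold crossed between two disk
    readings (in either direction), or None if nothing changed."""
    for threshold in (20.0, 10.0):
        was_above = previous_free_pct >= threshold
        is_above = current_free_pct >= threshold
        if was_above != is_above:
            # Crossing either way fires a notification: dropping below
            # warns you, climbing back above gives the all-clear.
            return threshold
    return None
```

Checking both directions is what lets you get the "all clear" message after you free up space, not just the warning.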

Slack and Pager Duty can be configured in the same way. See the documentation for all the details.

For now, disk space is the only notification that gets generated, but we plan to add more health checks in the future. Until then, sleep tight knowing that your disk won't start misbehaving without your knowledge.

Jun 07 2017
Jun 07

One of the requests we've gotten in the past few months is the ability to customize the HTTP headers sent with static assets. For requests coming from a PHP or Ruby application, it's easy enough to send any headers you want, but for static files there was no way to customize the headers. While that may seem like an obscure and nerdy feature, it's actually quite important: custom headers are necessary for supporting atypical file types, for CORS security measures, or for "Same-Origin" restrictions to prevent click-jacking.

So we said to ourselves, "selves, we try to be a flexible host, we should just add that feature." And ourselves responded "OK, let's do that."

And it's now available on all new projects, too.

On all new projects you can now specify additional headers to send in your .platform.app.yaml file. Those can apply to all files (say, for a Same-Origin or CORS header) or selectively, by file extension or any other regular expression. For instance, the following lines will add an X-Frame-Options header to every static file.

web:
    locations:
        "/":
            # ...
            headers:
              X-Frame-Options: SAMEORIGIN

Again, though, that applies only to static files; for responses from your application you can still set whatever headers you need directly in code. See the documentation for more details, and the provided example.

For now this feature is only available for newly created projects. We'll be rolling out updates to existing projects over time. If you want to use it before that just file a support ticket and we'll bump your project to the head of the line.

May 25 2017
May 25

Platform.sh has always prided itself on offering our customers as much flexibility to control their own projects as we can. What language to use, what services to use, how the server should be configured, what applications to run: all of these are under the user's control. We even allow users to set various control variables and environment variables per environment.

And now there's yet another way to set them: via your application configuration file.

Platform.sh's variable support is designed to allow users to set per-environment configuration (such as API keys for 3rd-party services) as well as to control aspects of the environment. Some applications, though, have their own environment variables they rely on for various reasons, such as a dev/prod toggle or controlling a build process. Those generally shouldn't vary by environment.

For that reason, it's now possible to set variables from your application configuration file. Those values will be tracked in Git just like the rest of your code base, keeping all of the important bits in the same place.

If you're using PHP, you can even use this system to set php.ini values. Need to change your memory limit? Set a prepend file? Control the error reporting level? That can all be done now, directly from that file.

For environment variables that should change per environment or that contain sensitive information, nothing changes about the current mechanism (setting variables through the UI or using the CLI tool). Your current workflow is fine.

Apr 18 2017
Apr 18

At Platform.sh, we believe that all websites deserve to be secure, fast, and feature-rich, and that it should be easy to have all three. Secure has always meant that a site is encrypted using SSL, which is why we’ve never charged for an SSL certificate. Fast means using HTTP/2, which we added support for earlier this year, but most browsers only support HTTP/2 over SSL. And feature-rich means allowing the full range of newer web functionality, such as geolocation, access to media devices, or notifications, much of which browsers now permit only over SSL connections. You know what? The modern web only works properly with SSL, so let’s cut out the middleman. Let’s Encrypt everything.

We’re happy to announce automatic support for Let’s Encrypt SSL certificates on every production site on Platform.sh Professional, at no charge.

Starting today for all new projects, on every deploy we will automatically provision and install an SSL certificate for you using the free Let’s Encrypt service. You don’t have to do anything. It will just be there.

For existing projects, we're bringing that functionality online in batches to avoid overwhelming the Let's Encrypt servers. We expect to finish getting through them all within the next week or two. If you're about to bring a site live and want to make sure you get Let's Encrypt functionality before that, just file a support ticket and we'll bump you to the front of the line.

Wait, what does this mean for my site?

If you currently just have HTTP routes defined in your routes.yaml file, then as of your next deploy HTTPS requests will be served as HTTPS requests rather than being redirected to HTTP. Both will “just work”.

If you want to serve your entire site over HTTPS all the time (and yes, you do), simply change all http:// routes in your routing file to be https://. That will automatically redirect HTTP requests to HTTPS going forward.

See the Routes section of the documentation for more details, but really, there’s not many details beyond that. It just works.

What about Enterprise?

Most Enterprise sites are served through a Content Delivery Network already, in which case the SSL certificate is handled by the CDN. This change has no impact on Enterprise customers.

Neat! So what should I do?

You don’t have to do anything. HTTPS just works now. As above, you can configure your site to use HTTPS exclusively by adding the letter "s" to your routes.yaml file in a few places. (We told you it was easy.)

Of course, now that you know your site will use SSL, you also know it will be using HTTP/2. All SSL-protected sites on Platform.sh use HTTP/2, and HTTP/2 is supported by nearly 80% of web browsers in the world. That makes it safe, and a good investment, to start optimizing your site for HTTP/2, layering in HTTP/2-specific capabilities like server push, and so forth.

Secure, fast, feature-rich, and easy. Welcome to Platform.sh!

Mar 28 2017
Mar 28

Elasticsearch is one of the most popular Free Software search engines available. It’s fast, flexible, and easy to work with.

It’s also now fully up to date on Platform.sh.

We’ve offered Elasticsearch for a while, but only the older 1.7 version. Newer versions of Elasticsearch offer better functionality and speed, though, so we’re happy to report that we now have versions 2.4 and 5.2 available.

Switching is, as you’d expect from Platform.sh, super easy. Just change the 1.7 in your services.yaml file to 5.2 and poof, your next deploy will be on Elasticsearch 5.2. Like so:

# The service name ("search" here) is up to you.
search:
    type: elasticsearch:5.2
    disk: 1024

Note, however, that we do not support Elasticsearch’s rolling upgrade process, so you will likely need to rebuild your index. You can also run both a new and old service in parallel and switch your application over to the new one whenever you’re ready.

But wait, there’s more!

By popular demand, we’ve also added support for a number of Elasticsearch plugins to both the 2.4 and 5.2 services. The full list is available in the documentation, and they’re just a few lines of YAML to enable. If there’s another Free Software plugin you need that’s not supported let our support team know. We may be able to add it to the list.

Happy searching!

Larry Garfield Mar 28, 2017
Mar 27 2017
Mar 27

(With due apologies to Jim Croce.)

Our efforts to expand the power and capability available to users of Platform.sh continue! The latest addition to our services suite is InfluxDB 1.2. InfluxDB is a time-series database well suited to recording large volumes of data over time. That makes it a good choice for high-volume logging, data collection, and metrics.

In fact, we are planning to use it ourselves for that last point. Stay tuned…

How can you use it? Same as any other service, really. Add the following to your services.yaml file:

influx:
    type: influxdb:1.2
    disk: 1024

And on next deploy you have an InfluxDB service running, named influx, with 1 GB of space reserved for it. (Add gigs as needed…)

Have a look at our documentation for more, and at the InfluxDB documentation for what cool things you can do with it. (And let us know what you do with it! We’re always curious to see what neat things our customers come up with.)

Larry Garfield Mar 27, 2017
Feb 10 2017

(Because alliterations are always appropriate.)

As we hinted at previously when rolling out Apache Solr 6.3, we’re rolling out new functionality for many of our containers to support multiple databases on a single service. We’re happy to report that next up is the big one: MySQL now supports multiple databases, and restricted users.

If all you need is a single database, as is often the case, nothing changes. If you want multiple databases or want to allow only read-only access in some cases then this is for you. When declaring a MySQL/MariaDB service in your services.yaml file, you can now specify additional configuration. That includes multiple databases, like so:

    mysqldb:  # your service's name
        type: mysql:10.0
        disk: 2048
        configuration:
            schemas:
                - main
                - legacy

That creates two separate databases, main and legacy. Of course, to access them you still need an endpoint, which is a set of credentials. You can define those, too, giving each endpoint access to one or more databases with read-only, read-and-write, or admin (do everything) permissions. Each endpoint can then be referenced separately in your application’s relationships block to give different applications different access to different databases. See our newly-updated documentation for step-by-step instructions.
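As a sketch of the endpoint configuration (service and endpoint names are placeholders, following the documentation's format of the era):

```yaml
mysqldb:
    type: mysql:10.0
    disk: 2048
    configuration:
        schemas:
            - main
            - legacy
        endpoints:
            admin:
                default_schema: main
                privileges:
                    main: admin
                    legacy: admin
            reporter:
                privileges:
                    legacy: ro
```

Here the admin endpoint gets full access to both databases, while reporter can only read from legacy.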

What can I do with it?

As suggested by the example above, one main use is to host a legacy database for content migration, or perhaps one that’s being staged from an external legacy system. In those cases it’s common to want your application to have only read-only or read-write-no-schema access to it. You can now do that. Your application will have two different MySQL connections, each with the appropriate permissions.

Another use case is multiple applications. Platform.sh fully supports multi-app projects, where a single project is composed of multiple microservices, possibly even written in different languages. Now each of those mini-applications can have its own database without the added overhead and cost of another service instance. Plus, because they use different endpoints, the applications can’t read from each other’s databases… unless you want them to.

It’s also useful for Drupal’s multi-site functionality. Drupal has the ability to run multiple instances off of a single code base, each with its own database. That’s generally only useful in cases where the sites are virtually identical in code and configuration and differ only in content (and maybe theme). Now each instance can have its own database without spinning up a dozen extra service instances, and they can all be updated and backed up together. (If the sites are truly distinct, though, it’s still far less work to make them separate projects so that they can be developed and maintained separately.)

Of course, those are just the use cases we envision. What other uses do you have? We’d love to hear about what cool stuff you’re doing with Platform.sh.

Larry Garfield Feb 10, 2017
Jan 25 2017

Redis is a popular key-value database, well-regarded for its speed and simplicity. Platform.sh has offered a Redis service for quite some time, configured not to store data on disk but to keep it only in memory. That makes it an excellent choice for a cache server, and we recommend that configuration for most projects.

Of course, Redis can do far more than caching. And we’re therefore happy to report we now offer a persistent configuration of Redis, too.

Available only for Redis 3, the new service is called redis-persistent. (It seemed self-descriptive.) The only difference from the redis service is that it is configured to store data permanently rather than toss data out when it runs out of memory (as a cache configuration would do). That also means data stored in Redis is replicated when an environment is branched, just like for MySQL, Elasticsearch, or MongoDB.

The configuration for redis-persistent is essentially the same as any other service we offer. Simply add the following to your services.yaml file:

    redisdata:
        type: "redis-persistent:3.0"
        disk: 2048

That will give you a new service named redisdata that will permanently store up to 2 GB of data. (Make sure your plan has the space available.) You can then expose that service to your applications in your relationships block and access it exactly as you would an ephemeral redis instance. Have a look at the updated documentation for more details and examples.
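Exposing the service to an application is the usual one-liner in the relationships block; a sketch (the local name on the left of the colon is your choice):

```yaml
relationships:
    redisdata: "redisdata:redis"
```

Your application then reads the host and port from the environment, exactly as it would for the ephemeral redis service.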

What can you do with a super-fast persistent key-value store? Anything you want. Let us know what you did with it.

Larry Garfield Jan 25, 2017
Jan 17 2017

After ending 2016 with a new PHP version and starting 2017 with a new HTTP version, we’re happy to report that there’s still plenty of new left for us to launch. This time around it’s a new Apache Solr version, 6.3.

Support for a more modern version of Apache Solr has been one of our most requested features for a while. Unfortunately some packaging issues made it more difficult than we expected, but I’m happy to report that we’ve managed to work around them. Starting today, you can launch a Solr 6 container simply by specifying the appropriate version in your services.yaml file, like so:

    solrsearch:  # your service's name
        type: 'solr:6.3'
        disk: 1024

However, you can also go a lot farther than that.

The Solr 6 container is also our first container to support multiple databases (“cores” in Solr-speak) on a single service. Each core can have its own schema and custom configuration, and each can be mapped to a different “endpoint”, which can then be accessed by your application container. See our documentation for more details. Over time we intend to add similar functionality to other services, too.
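As a sketch of the multi-core format (core, endpoint, and configuration-directory names here are placeholders):

```yaml
solrsearch:
    type: solr:6.3
    disk: 1024
    configuration:
        cores:
            mainindex:
                conf_dir: !archive "core1-conf"
            otherindex:
                conf_dir: !archive "core2-conf"
        endpoints:
            main:
                core: mainindex
            extra:
                core: otherindex
```

Each endpoint can then be wired to an application through its relationships block, just like a standalone service.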

Please note that multi-core support is only available on Solr 6.3, not on older Solr versions. Also, Solr doesn’t support direct upgrades from one major version to another. If you want to upgrade your existing Solr service, create a new, additional Solr service with a new name, populate it from your application, then remove the old one.

Larry Garfield Jan 17, 2017
Jan 10 2017

You might have heard about the MongoDB scare with titles like: MongoDB Apocalypse Is Here as Ransom Attacks Hit 10,000 Servers!

Rest assured, your MongoDB instances are safe and sound if they are running on Platform.sh. And this is a very strong argument for why our architecture is superior to that of other PaaS providers.

Unlike other providers, with Platform.sh all the services you use live inside the managed cluster and are included in the plan’s price. These are not outside services that expose application ports on the internet. This is what allows us to clone entire clusters, and what allows us to offer a 99.99% SLA on the entire stack for our enterprise offering, but it is also a security feature.

Each cluster has only two ways in: HTTP or SSH. Our entrypoints simply will not answer anything else.

Your application containers in the cluster have direct connectivity to the service containers, but this happens on a non-routable IP range. There is simply no way for the outside world to access a service directly. And if you are running multiple services in the cluster (micro-service style), you can even control which application has access to which services through the relationships key in your application configuration file. Because secure by default makes sense to us.

If you want to connect to a MongoDB instance from the outside (to run an admin interface, for example) you can still do it! But the only way in is through an SSH tunnel that relies on your private SSH key (platform tunnel:open on the command line will do the trick). You get all the benefits and ease of use of running a modern stack, but none of the hassle and risk of running a patchwork of services.
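In practice that workflow looks something like the following (the forwarded port is assigned by the CLI at runtime, so the one shown is illustrative):

```shell
# Open SSH tunnels to all services on the current environment.
platform tunnel:open

# The CLI prints one local forwarding per relationship, e.g.:
#   mongodb: 127.0.0.1:30000

# Point your admin tool at that local port; traffic travels through
# the SSH tunnel, authenticated by your private key.
```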

With Platform.sh you can be WebScale and secure!

Ori Pekelman Jan 10, 2017
Jan 09 2017

Platform.sh in 2017

First, a joyous and productive 2017 to you all. 2016 was really great for us as a growing company and the new year is a great time to look back and share with you, our dear clients and community, our journey.

The title of the post is audacious, very possibly a hyperbole. There are bigger players than us out there. We don’t claim the highest market share. We claim we have become an obvious choice for ambitious projects. Let me make the case.

Over the course of last year, the leading vendors in the PHP enterprise space (Magento, eZ Platform, TYPO3, and most recently Symfony, the PHP framework of frameworks) announced that their cloud platforms would be built on Platform.sh. Since its inception two and a half years ago, Platform.sh has become a leader in the whole PHP space. How did this come about?

Some technologies were born to be great, some have had greatness thrust upon them.

We set out working on Platform.sh with humble ambitions. As a company we were going to solve eCommerce. We believed that Open Source was the way, and that the best Open Source platform we could leverage for an eCommerce solution was Drupal, with its right mix of widespread adoption, code quality, and extensibility. This was how Drupal Commerce was born.

We originally built Platform.sh to be the hosted version of this project, with a bunch of unique features that would make it the killer eCommerce service: built-in high availability and an unmatched development-to-production workflow. We had to go deep and low for that (when we started the project no one was talking about containers, micro-services, or hybrid cloud infrastructures, but we knew it was the way to go).

To cut a long story short, a few short months after presenting Platform.sh to the world, the reaction was tremendous. Our clients loved it. But they also quickly asked us… “why can’t we use this for our non-eCommerce Drupal site? What about our Symfony-based projects, and Wordpress? And Magento? We use the Akeneo PIM alongside the Magento, and there is a NodeJS-based notification service…”.

The 2016 pivot

So, like startups do, we pivoted. Commerce Guys became its own company, and Platform.sh set out as an independent entity to conquer the PaaS market. This happened at the beginning of 2016. We have more than doubled our team since then; we now have people in 10 time zones, from the West Coast to the East Coast, from Europe to Asia.

Why keep the PHP focus?

The technology we built was runtime-agnostic. Setting out as an independent company we could very well have shifted our focus from Drupal and PHP. We chose not to.

First a couple of words on the PHP space. There was a moment three or four years ago when there was a widespread perception that PHP was faltering. That it belonged to the realm of legacy, soon to be replaced. Of course, that was before the likes of Slack were born. Before PHP 7.0 went out of the gate. Before Composer took hold. Before Drupal 8.0 was finally released. Before this world started standardizing on Symfony. Today, we know PHP is here to stay, with both its great advantages and its weaknesses. It is powering much of the internet, from Facebook and Wikipedia to the millions and millions of sites running Wordpress and Drupal. It is powering most of online commerce. It is chosen by startups and enterprises alike.

We understood this from the beginning. We understood its importance.

Of course this does not mean we dislike other programming languages and environments. Our team is composed of true polyglots, and within it you will find many people who love functional programming, from Lisp addicts to Elixir fans. Both Python and Ruby are loved. Rust is a passion. GoLang is highly regarded for what it does best. Then there is the herd of C nerds. We even have people that like NodeJS. We really do.

But at the time when PHP seemed to lose its lustre, everybody in the new shiny tools department started building for the new shiny languages. This happened for probably two reasons:

  1. Shiny people like shiny stuff (and who cares if 80% of the web works with something else).
  2. Doing PHP right is hard. Harder than the other stuff.

Why is PHP hard? Because of its immense popularity, PHP is more diverse. It is diverse in the number of frameworks, in the number of versions people run, in the quality of code. And because of its execution model, the topologies in which PHP applications may run can vary wildly.

As such, we built a lot of flexibility into Platform.sh. We made it build-oriented so we can support any form of project. Unlike all other PaaS providers, we added support for non-ephemeral local storage, so you can run even legacy PHP applications and still benefit from the cloud.

We also built it for highly-available micro-services architectures. You can get RabbitMQ, MongoDB, Redis, Postgres, Elasticsearch, Solr, and of course MySQL in every cluster. Doing PHP right meant that we also built it so that you can easily migrate from the “Legacy PHP” world to the “Modern PHP” one. A world where no one has root access to a production server. A world of automated, consistent deployments.

PHP Leadership

It was our mission to make it easy to do PHP right. That is why we built Platform.sh for “Modern PHP” from the beginning. It is also why early on we added NodeJS, Python, Ruby, and Java (modern PHP is no longer an island). And we will be adding ever more services and runtimes, which won’t make us less of a PHP platform; on the contrary, it makes us a better one. Those that have specifically built their systems to run Drupal 7.0 with PHP 5.6 find themselves with an aging platform that is ill-equipped for new requirements, less performant, and less agile. By going wide, we have better and more up-to-date support not only for legacy Drupal and PHP, but also for everything new that is coming. Count on us to be the best Drupal 9.0 hosting service; the best Symfony 4.0 one. The coolest Magento 3.0.

Appreciating this mindset and impressed by our technology, major PHP figures have also joined us. We announced the arrival of Larry Garfield, AKA Crell, as our DevEx director, and Sara Golemon, of HHVM fame, left Facebook in San Francisco to join our R&D team. Sandro Groganz, a true PHP community veteran, joined us just last week to shore up our marketing team. These people complement our founding team, which includes people like Robert Douglass and Damien Tournoud. That is how serious we are about investing in PHP: recruiting the best talent.

In return, we saw how seriously the PHP world is taking us. As early as February 2016, Magento announced their flagship product, Magento Enterprise Cloud Edition, as a white-label of Platform.sh. In early December, it was announced that the Symfony cloud platform Sensio.Cloud is built on Platform.sh as well. In between, we signed deals with the TYPO3 community and eZ Systems.

All the while, hundreds of Drupal, Drupal Commerce, Wordpress, and custom PHP sites launch every week on Platform.sh. And we are seeing more and more people deploy multi-app and micro-services oriented architectures (with more and more NodeJS, Python, and Ruby apps in there as well).

PHP is here to stay, and we are here to make it run

Over the last days of 2016 and the first days of 2017 we announced PHP 7.1 support as well as Private Packagist support, and today we can announce HTTP/2, active by default on all projects. Making all the fastness even faster. You can fully expect even more incredible features to be coming your way. We mean to keep on being the best Drupal and the best PHP hosting platform. Stay posted.

Ori Pekelman Jan 9, 2017
Jan 08 2017


All the fastness just got faster. A couple of days ago we finished 2016 in style, announcing PHP 7.1, which lets you build some incredibly fast PHP apps with tools like ReactPHP.

Let’s start 2017 with some more fastness juice flowing. HTTP/2 is now supported on all public regions.

What do you need to do to benefit from the incredible performance gains this will give your site?

Nothing. It just works (as long as you have HTTPS enabled, which you should anyway).

HTTP/2 is a replacement for how HTTP is expressed “on the wire.” It is not a ground-up rewrite of the protocol; HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol.

The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage.


Basically it makes any site load faster. Much faster. To see a cute demo of the speed difference, just visit

As I said, you don’t need to do anything to benefit from it; it is already active for all clients. But you can play around with specific HTTP/2 features in your preferred framework (look, for example, at Drupal Assets Preloading).

Enjoy! 2017 is going to be so fast.

Ori Pekelman Jan 8, 2017
Dec 29 2016

We were hoping to have this announcement out in time for Christmas, but it was not to be. Instead it’s an early New Year’s gift. Nonetheless, we’re happy to announce a whole slew of new options to make PHP projects faster and more robust on Platform.sh: PHP 7.1 support, async support, and Pthreads support.

PHP 7.1

Big ones first: As of today we offer support for PHP 7.1. It’s just like PHP 7.0, only better in every way. To use it, simply swap out the type line in your application configuration file:

type: 'php:7.1'

And now when deployed your application will use PHP 7.1. That’s it, you’re done. We recommend doing so if you can. PHP 7.1 includes a number of improvements, including a small performance boost and a number of new syntactic features. See the PHP announcement and Migration Guide for all the shiny goodness.
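For a taste of those new syntactic features, here is a small, illustrative sample that only runs on 7.1 or later (names are made up for the example):

```php
<?php

// Nullable types and nullable return types.
function findName(?int $id): ?string
{
    return $id === null ? null : "user{$id}";
}

// Symmetric array destructuring, with keys.
['id' => $id, 'name' => $name] = ['id' => 7, 'name' => 'Ada'];

// Class constant visibility modifiers.
class AppInfo
{
    private const VERSION = '7.1';

    public static function version(): string
    {
        return self::VERSION;
    }
}

// Catching multiple exception types in one block.
try {
    throw new RuntimeException('oops');
} catch (RuntimeException | LogicException $e) {
    echo $e->getMessage(); // oops
}
```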

This is also a good time to remind everyone that if you’re running a version of PHP older than 5.6 you are running unsupported software, often with known security vulnerabilities, and after this week even 5.6 will get only critical security fixes. See the PHP Supported Versions page for details. We strongly urge you to upgrade to PHP 7.0 or 7.1 as quickly as possible, for the continued support, vastly improved speed, and plethora of new features. It’s a one-line, possibly one-character, change to update your site on Platform.sh. Why not make that your New Year’s resolution?

Async PHP support

PHP usually runs as a CGI process, and at Platform.sh we use PHP-FPM behind Nginx: a fairly standard and robust configuration. However, CGI-style PHP has some limitations, such as no support for long-running processes, WebSockets, or other new fanciness.

A handful of projects have sought to address that by implementing their own non-blocking IO libraries, allowing PHP to run as a single asynchronous process in much the same way as Node.js. The most notable of these are ReactPHP and AmPHP. Both run as their own command-line application and listen for incoming requests through Nginx, allowing them to handle WebSockets, run as queue workers, or perform other such tasks.

Thanks to our new support for start commands on PHP containers, it’s now very straightforward to spin up a ReactPHP- or AmPHP-based container on Platform.sh. We’ve provided a basic Hello World ReactPHP example as well as a WebSocket chat server in both ReactPHP and AmPHP to show how it works, but of course your WebSocket or queue-processing ambitions are limited only by your imagination.
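The start-command wiring lives in your application configuration; a minimal sketch might look like the following (the script name server.php is an assumption):

```yaml
web:
    commands:
        # Run a long-lived ReactPHP/AmPHP process instead of PHP-FPM.
        start: php server.php
    upstream:
        socket_family: tcp
        protocol: http
```

The server script listens on the port the container provides (exposed via the PORT environment variable), and Nginx proxies requests to it.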

What’s more, if you’re running on the new-and-shiny PHP 7.1 container then ext/event is available as well. ext/event moves the core event loop of supporting async libraries down into optimized C code for even more speed. It’s also trivial to enable, so we recommend using it whenever possible. Just add the following to your application configuration file and ReactPHP will start using it automatically:

    runtime:
        extensions:
            - event

Pthreads: Multithreaded PHP

Ya, rly!

The Pthreads extension offers support for real, honest-to-goodness POSIX threads in PHP. It’s not always easy to set up, though. For one, it only works with a special Zend Thread Safe (ZTS) build of PHP, which is not the default on most distributions. It then requires an extra extension to work. To top it all off, pthreads is only compatible with the CLI version of PHP, not the CGI version. Like async support, though, that makes it a very good choice for queue workers, WebSockets, and other stand-alone persistent daemons.

For that reason, we’re happy to announce that our new PHP 7.1 containers run PHP 7.1 ZTS and include the Pthreads extension. In our testing we found no performance penalty to using the ZTS version of PHP (in fact it was very slightly faster in general, which is quite curious!), and enabling Pthreads via your application configuration file is dead simple:

    runtime:
        extensions:
            - pthreads

Poof. You can now use the web.commands.start directive to start a script that uses Pthreads and do… whatever you want it to do. Queue workers, WebSockets, something else weird-but-cool? Let us know if you find something especially cool to do with it.
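As a sketch of what a Pthreads script can look like (the class and values are illustrative; it must run under the CLI SAPI on a ZTS build with the pthreads extension loaded):

```php
<?php
// Requires the pthreads extension on a ZTS build of PHP (CLI only).

class SquareTask extends Thread
{
    public $n;
    public $result;

    public function __construct(int $n)
    {
        $this->n = $n;
    }

    public function run()
    {
        // This method body executes in its own POSIX thread.
        $this->result = $this->n * $this->n;
    }
}

$task = new SquareTask(6);
$task->start(); // spawn the thread
$task->join();  // block until it finishes
echo $task->result, "\n"; // 36
```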

Happy New Year, PHP! Let’s build an amazing (and fast!) 2017.

Larry Garfield Dec 29, 2016
Dec 23 2016

In my former life I was a Drupal consultant and architect. One of my former colleagues, Michelle Krejci, had a saying she liked to repeat: “Production is an artifact of development.” She even presented on the topic a few times at conferences. What she was saying made sense, but I didn’t fully grok it at the time.

Now I do.

As a PHP developer of 15 years, I, like many developers, had gotten into the habit of deployment being a question of “put code on server, run”. That’s PHP’s sales pitch; it’s the sales pitch of any scripting language, really. The same could be said just as easily of Python or Ruby. What is an artifact of development other than code?

Quite a lot, of course. Developers in compiled languages are used to that; source code needs to be compiled to produce something that can actually run. And as long as you have a build process to compile code, throwing other tasks in there to pre-process the source code, or generate other files, is a pretty short leap. For scripting language users, though, that is a weird concept. Isn’t the whole point of a scripting language that I don’t need to compile it?

Well, yes and no. While PHP or Python execute “on the fly” as needed, that doesn’t mean other parts of the system do the same. Sass files need to be converted to CSS. Javascript is frequently written in a non-runnable form these days (TypeScript, ES6, CoffeeScript, JSX, or a dozen other forms) and compiled into ES5-compatible Javascript. Even if you’re not using a code generator or transpiler or whatever is in vogue these days, almost any serious project is (or should be) using CSS and JS compression to reduce download size.

And let’s not forget 3rd party dependencies. Any mainstream PHP project these days is built with Composer. Python uses pip, but the effect is the same. 3rd party dependencies are not part of your code base, and do not belong in your Git repository, but are pulled in externally.

On top of that, many systems today do have a PHP code generation step, too. Symfony’s Dependency Injection Container and Routing systems, Doctrine ORM’s generated classes, and various others all entail turning easy-to-edit-but-slow code into uneditable-but-fast code.

For years I’ve been largely avoiding such tools, because I worked mostly with either heavily-managed hosts that had no support for such steps (or their support was far too hard-coded) or client-hosted servers that still believed in hand-crafted artisanal server management. Short of checking the generated CSS, JS, and PHP code into my repository (which we did with Sass/CSS for years), there wasn’t much way to square the clear shift toward even scripting languages having a compile step with the 2005-era thinking of most of the servers I used.

And then I found Platform.sh.

From the very beginning, Platform.sh has been built on the “production is an artifact of development” model. Your application doesn’t consist of just your code. It’s your code, plus 3rd party code, plus your server configuration, plus some CI scripts to generate CSS, compressed JS, optimized PHP, and so forth. Platform.sh was built specifically to address that modern reality. Your git repository may only contain your code, but that gets turned, repeatably and reliably, into a set of application containers, service containers, and “output code” that will actually get run. What that process looks like is up to you and your application; it could involve Sass, or not; compiling TypeScript, or not; dumping a dependency container or routes, or not.

The compiled artifact of development isn’t just your code; in fact it’s not even an application container that includes your code. It’s the entire constellation of tools that form your application: your code, your database, your cache server, your search index, etc. That’s exactly how Platform.sh works, and why it offers far better support for modern web applications than any other managed host I’ve used. (Give it a spin.) And no, I’m not just saying that because I work here. :-)

So thank you, Michelle, for convincing me of what modern web hosting should be. And thank you for making it a reality.

Larry Garfield Dec 23, 2016
Dec 20 2016

As we wrote about recently, all of the inputs to an application running on Platform.sh are clearly defined, predictable, and, with one exception, entirely under your control as a user. We’re happy to announce that we’ve added one more input, and it’s also under user control: project-level variables available at build time.

Project variables are, as the name implies, just like the other variables we already support but bound to a project rather than to a specific environment. That is, they become available for all environments in a project unconditionally rather than just selected environments/branches. If a given environment has a variable of the same name it will override the project variable, but otherwise a project-level variable will be available to all environments.

“So what?” you may say. Variables defined on the master environment already inherit that way, so what’s the big deal? The big deal is that project variables are available at build time, too.

Environment variables are only available on a running container. That is, they’re available to the running application and can be used in a deploy hook, but not in a build hook. Project variables, by contrast, are also available during a build hook.

Why would you want build-time variables? Shouldn’t the build be controlled entirely by what’s in Git? Generally speaking yes, but just as you may need runtime values that are not stored in Git, such as API keys, you may need secret values for the build process, too. The most common use is downloading non-public 3rd party dependencies. In other words, and at the risk of burying the lede:

We now support private Composer repositories on Platform.sh.

That includes the newly-announced Private Packagist.

We’ve written up a tutorial to show how to use a private Composer repository. Give it a try.
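On the Composer side, nothing exotic is needed; the private repository is declared in composer.json as usual (the URL and package name below are made up):

```json
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://repo.example.com/private/"
        }
    ],
    "require": {
        "acme/secret-widgets": "^1.0"
    }
}
```

The credentials themselves stay out of Git: Composer can read them from the COMPOSER_AUTH environment variable, which is exactly the kind of secret a build-time project variable is for.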

There are of course plenty of other things you can do with build-time variables. Let us know what else you are able to do with them.

Larry Garfield Dec 20, 2016
Dec 13 2016

One of the key selling points of Platform.sh is predictable, repeatable deployments. We view the read-only production file system as a key advantage for our customers. But why is that, and how do we manage it? Just how repeatable and predictable is it?

The key to reliability is predictability. If I know my code works in testing, I should be able to predict, with a very high degree of confidence, that it will work in production, too. To do that, we need to minimize the number of variables that differ between staging and production. Zero is ideal, but anything close to zero is generally good enough.

At Platform.sh, the build process for all environments, production or not, is exactly the same. And the inputs to that build process are all consistent and largely predictable.

Overall there are four inputs to any running application:

  1. Your code, as tracked by Git. That includes any configuration directives in configuration files, your php.ini file, etc.
  2. Any 3rd party dependencies that are downloaded at build time, such as from Composer, npm, or Ruby gems.
  3. Any configuration variables defined in a project via the UI.
  4. The underlying container image we provide.

If those inputs are all the same, the output should be identical. (Functional programming DevOps?) So when can those change?

Your container

First of all, let’s clarify exactly how an application container works. There are three parts to every container: The container image, the application image, and the writeable file system.

The container image is provided by us, and contains nginx, the runtime appropriate for your application (PHP, Python, etc.), and custom tools we built to manage our orchestration. It is shipped as a read-only file system, using squashfs. It gets mounted at /.

The application image is the result of running your build hook on the code in your git repository. The resulting files are compressed into a read-only file system, using squashfs, and mounted at /app.

The writeable file system is specified by your application configuration file, and can be zero or more locations that get mounted as writeable disks at directories (under /app) that you specify.
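A sketch of how that is declared, using the mount syntax of the era (the paths are placeholders):

```yaml
mounts:
    # Local path under /app on the left, shared storage location on the right.
    "/web/uploads": "shared:files/uploads"
    "/tmp/cache": "shared:files/cache"
```

Everything outside those mount points stays read-only at runtime.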

When Platform.sh “deploys” your container, it means we:

  1. “Close” the application container so it doesn’t accept new requests
  2. Shut down the container
  3. Replace the container image with a new version, if there is one
  4. Replace the application image
  5. Mount the writeable file system
  6. Inject any configured environment variables
  7. Run user-specified deploy hooks
  8. “Open” the application container to accept requests again

Assembling the container.

Now that we know how a container works, let’s look at those inputs again.

Your code

Your code in Git, of course, only changes when you tell it to. That’s the point of Git: it changes only when you want it to, and only on the branch you want to change. You can merge to another branch, at which point that branch has changed because you wanted it to. It’s entirely under your control.

3rd party code

3rd party dependencies depend primarily on the configuration in your code, in Git. Most package managers these days support a lock file (such as composer.lock for PHP) that will ensure that the exact same code is downloaded every time, even if a newer version is available. That’s why you should always check your lock file into your Git repository for an application. That way, your 3rd party dependencies are entirely under your control.

Environment variables

Environment variables are configured by you, through either the Platform CLI or the web interface. They can be any values you want, but are mostly useful for things like API keys, login credentials, and other values that should not be in Git (either for security reasons or because they need to differ between a Platform.sh environment and your local environment). Because these are set by you, the user, they’re entirely under your control.

The container image

The container image is the only input that is controlled by Platform.sh, not by the user. We currently have over a dozen container images for various languages and language versions. We only release new versions of these when there’s a new version of our own integration software, or a bug or security fix in one of the tools we include. That means new bugfix or security releases of PHP or Python, or an update to nginx, for instance. We will never change your container type out from under you (say, switching from PHP 5.5 to 7.0), but we may update the image as needed (such as for the monthly security release of PHP 7.0.x).

The guarantee

Of particular note is that everything that goes into the application image (your code and 3rd party code) is controlled by Git, and Git is controlled by you; thus the application image is based exclusively on things within your control. It also means it’s predictable for us: if the hash of all files in a given branch matches one we’ve already built an application image for, then we know, with certainty, that the resulting image would be the same, too. That means we don’t even rebuild it, saving time and resources. (We don’t use the Git commit hash directly for that, since if you have multiple application containers we want to treat them separately.)

It also guarantees consistency for your deployment. Suppose you have a branch feature-x on which you’ve done some development and deployed it to a dev environment. That means we’ve built an application image out of it. Now you merge it back to master. As long as the merge is a fast-forward merge in Git, the files that were at the HEAD of the feature-x branch will now be identical to the master branch. Identical files means we can reuse the existing application image.

Your application image in production isn’t just like the one that was in your feature branch. It is the one that was in your feature branch. The same exact bits.

What can still vary

While the application image is reused as-is, the container image may get a new version if one is available. That will only be different from the dev branch if a new version has been released between when you last pushed to that branch and when you merged it to master. The odds of that are low, but if you want to be certain you can always add an empty commit to the dev branch to trigger a deploy, which would pick up a new container image if one exists. If all is well, merge to master with confidence.
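The empty-commit trick mentioned above can be sketched as follows; the commit message and the `platform` remote name are illustrative, and the demo runs in a throwaway repository.

```shell
# An empty commit changes the branch head without touching any files,
# which is enough to trigger a fresh build and deploy.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "Redeploy to pick up the latest container image"
git -C "$repo" log --oneline

# Against a real project you would then push the branch as usual, e.g.:
# git push platform feature-x   # remote and branch names are illustrative
```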

The writeable file system may also vary between production and a development environment. When a new environment is created its data is cloned from production, but depending on how long your dev environment lives before getting merged the two could diverge. Again, if you want to be sure just click the “Sync” button in the UI to copy production’s data to your dev environment and check it over. It only takes a minute or two, regardless of how much data you have.

Your environment variable configuration may also vary between environments, but in that case it’s generally because you want it to; you don’t want your development instance sending purchase orders with real credit cards against a real payment gateway. (At least, we assume not.)

So there you have it: Absolutely identical code between development and production, and usually absolutely identical environments (that can be guaranteed identical if you want them to be). It’s as close to functional programming as DevOps can get, and it’s just another day at the office around here.

Happy deploying!

Larry Garfield Dec 13, 2016
Dec 09 2016

PHP was Platform.sh’s first supported language, and so has had a few quirks as we’ve grown to support more programming languages. Those quirks have resulted in a few limitations to functionality that we didn’t like, and you probably didn’t like, either.

Fortunately, we’ve refactored our container support to reduce the uniqueness of PHP and added a bit of functionality along the way.

There are two main improvements for PHP containers and one small change.

.environment files

On all languages, we now support an extra configuration file named .environment, which should be at the top-level of an application (as a sibling of the .platform.app.yaml file in a multi-app setup). This file will get sourced as a bash script by the system when a container boots up, as well as on all SSH logins. The main use for it is to set extra environment variables or modify the PATH. The latter case is useful to allow tools like Drush, Drupal Console, or the Symfony Console to be installed locally by Composer but still available to execute without explicitly specifying a full path name.
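For instance, a hypothetical .environment file might look like this (the vendor path assumes a Composer-based project and is illustrative):

```shell
# .environment — sourced by the shell at container start and on SSH login.

# Put Composer-installed binaries (drush, console, etc.) on the PATH:
export PATH="/app/vendor/bin:$PATH"

# Any other environment tweaks are fair game, too:
export APP_LOG_LEVEL="warning"
```

After the next deploy, `drush` (or any other tool in that directory) can then be run from any SSH session without its full path.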

See the documentation for more details.

web.commands.start support

In Node.js, Ruby, and Python, there are several application runners available, such as uwsgi and gunicorn for Python, Unicorn for Ruby, and pm2 for Node.js. To allow users to pick their own, we support a web.commands.start directive in the application configuration file to start a process when a container boots up after deploy. For example, most Node.js applications will have a block like this in their .platform.app.yaml file:

    start: "PM2_HOME=$PLATFORM_APP_DIR/run pm2 start index.js --no-daemon"

While on Ruby, it would look something like this:

     start: "unicorn -l $SOCKET -E production"

That option wasn’t available for PHP as PHP only has one applicable application runner, PHP-FPM. With the latest changes, though, it is now available for any container, including PHP.

What can you do with it? Any command you want. Let us know what cool things you’ve done with it.

Be aware, however, that using web.commands.start will skip starting the PHP-FPM runner. If you still want it to run, you can start FPM yourself in addition to your own commands. The effective default if you don’t specify anything is:

    start: "/usr/sbin/php-fpm7.0"

(Adjust for your appropriate PHP version.)
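If you do override the start command on PHP but still want FPM, a hypothetical sketch might run a custom worker in the background before FPM (the worker script path and FPM binary path are illustrative, not defaults):

```yaml
web:
    commands:
        # Illustrative: launch a queue worker, then PHP-FPM in the foreground.
        start: |
            php /app/worker.php &
            /usr/sbin/php-fpm7.0
```

Keeping FPM as the foreground process lets the container supervise it as the main process, while the worker runs alongside it.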

Log files change

This isn’t as much a feature as it is a heads-up. Previously, PHP error messages from your application would get logged to /var/log/php.log. Every other app container would log messages from your application to /var/log/app.log. With the latest changes, all containers now use app.log.

That means as of your next deploy, your PHP error logs will be in /var/log/app.log. That likely won’t have any impact on you, unless you have an auxiliary script that is reading the PHP log file. (Of course, we know your code is perfect and never has errors so it shouldn’t matter anyway.)

Larry Garfield Dec 9, 2016
Jul 05 2016

When Platform.sh launched, the majority of our business was Drupal 7 sites running Drupal Commerce. While we still host many of those, our business has expanded to cover many application stacks and languages. Drupal 8 has been out for 8 months now, Symfony’s market is growing, and we support both PHP and Node.js with more languages on the way (stay tuned!).

As a result, some assumptions we baked into the system no longer make sense for the majority of our users. We are therefore removing the default configuration files that were previously used if your project didn’t include one.

Wait, but what about my existing sites!

If you already have an existing project with Platform.sh, it is completely unaffected. This change only affects newly created projects as of Monday 25 July 2016.

We still recommend that all projects ensure they have the appropriate configuration files committed to Git, but only new projects are technically required to do so.

Whew. OK, so what’s the problem?

There are three files that drive your entire cluster with Platform.sh:

  • .platform.app.yaml defines your application container, where your code runs.
  • .platform/routes.yaml defines your routing container, and how it maps and caches incoming requests to your application container.
  • .platform/services.yaml defines what other services should be included in each cluster, such as MySQL, Redis, or Elasticsearch.

(No, really, that’s it. That’s your entire server cluster definition. Neat, eh?)
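For illustration, a minimal trio might look like the following sketch; the application name, versions, and route syntax here are illustrative and should be checked against the documentation:

```yaml
# .platform.app.yaml
name: app
type: php:7.0
disk: 1024
web:
    document_root: /web

# .platform/routes.yaml
"http://{default}/":
    type: upstream
    upstream: "app:http"

# .platform/services.yaml (optional)
mysqldb:
    type: mysql:10.0
    disk: 1024
```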

Previously, if one of those files was missing we would create a default file automatically. Those default files were designed around a specific use case: Drupal 7 running on PHP 5.4 with Redis caching and Solr for search. However, that is increasingly not the typical case; Drupal 8 is growing fast, PHP 5.4 is no longer supported by the PHP team, the various services have new versions available, and Platform.sh offers a lot more than just Drupal and PHP. (A default PHP container with drush make makes little sense if your application is written in Node.js…) That makes those defaults less and less useful to keep around.

It also meant that entirely disabling additional services, say for a statically generated site (like the Platform.sh site itself), required adding a blank .platform/services.yaml file to the repository to override the three default services. That’s just silly.

So what changes?

We no longer add default files. No file, no behavior. That means you must provide, at least, a .platform.app.yaml file and a .platform/routes.yaml file for a site to work. If you don’t provide those, trying to push a branch to our Git repository will fail as the code cannot be deployed. (The .platform/services.yaml file is optional; if you don’t need any services, skipping this file will simply not create any.)

If you’re already in the habit of adding those files to your Git repository for a new project, congratulations, nothing changes for you. :-)

We are also dropping version-defaults for the app container and services. That is, if you ask for a mysql service you must also specify the version; we won’t magically pick one for you if not specified, for the same reason: The defaults would be old-forever. We want you to be able to move your site to the latest and greatest version of your language and services of choice on your schedule, not ours.

If you want to see the old defaults that were created, in case you want to use them yourself, they’re listed in our documentation.

For more information on those configuration files, see the documentation.

Larry Garfield Jul 5, 2016
Jun 07 2016

Drupal 8.1 has made a significant shift toward embracing Composer as the standard way to install and manage a Drupal site. Starting today, with Drupal 8.1.2, Platform.sh’s Drupal 8 templates are also Composer-based, and default to using PHP 7.

Wait, what about my existing sites?

Absolutely nothing changes for sites already hosted with Platform.sh. If you’re using Drush Make or just checking your entire Drupal site into Git, you can continue to do so. Nothing in this post applies to you (unless you want it to).

Oh good. So what actually changes?

When you create a new site with Platform.sh, you’re given the opportunity to select a “template” for a site. The template is really just a starter Git repository, complete with a recommended .platform.app.yaml file and .platform directory for a given application. Until now, the templates for Drupal 7 and Drupal 8 used Drush Make as their build script. The Drupal 8 template now uses Composer, just like our Symfony template and most other PHP applications.

The Composer template is closely based on the (excellent) Drupal-Composer project built by the Drupal community. It adds only two patches to make Drupal install cleanly on a read-only file system, both of which have already gone through the Drupal issue queues and are just waiting to be committed. Once they’ve been incorporated into a point release we’ll drop those patches from our composer.json file.

As Drupal 8 is also fully tested with PHP 7, we’ve defaulted to PHP 7.0 for all newly created Drupal 8 sites.

As Platform.sh containers are always “production ready”, the composer command we use is optimized for production. Specifically, we run:

composer install --no-progress --prefer-dist --optimize-autoloader

Neat. But wait, which Composer repository are you using for Drupal?

Drupal currently has two different Composer services: a legacy, community-run one and a new, experimental one hosted on Drupal.org itself. We’ve been in contact with the Drupal.org Infrastructure team, and they’ve given us the go-ahead to default to the new, official service.

If you want to switch back to the legacy service, be sure to check the documentation page for notes on the different way it handles module versions.

But, but, I have legacy code that doesn’t work with PHP 7 yet!

Not to worry! If you need to start a new Drupal 8 site but want to run it on PHP 5.6 instead, simply edit your .platform.app.yaml file and change the line

type: "php:7.0"

to

type: "php:5.6"

Then git push. Yes, it really is that easy.

(PHP versions before 5.6 are not supported by the PHP development team. We only provide those images to support legacy projects. Please use PHP 5.6 or, preferably, PHP 7 for all new projects. Security experts around the world thank you.)

I already have a Drupal 8 project using Composer. Will it still work?

Absolutely! Simply go to the template Git repository and copy the .platform.app.yaml file and .platform directory, then stick those in your project’s Git root. If you used the Drupal-Composer project to create it initially, all of the paths should still work fine. You will also need the settings.php and settings.platformsh.php files to ensure your site gets the correct database credentials and such. You can tweak those files if needed, such as if your files are organized differently.

You can also tweak those files as needed to configure your cluster exactly how you need. See the documentation for more details.

What about Drupal 7?

We’re still investigating whether we want to switch our Drupal 7 template over to Composer. (If you have thoughts on the matter, let us know.) Currently, Drupal 7’s test suite doesn’t fully pass under PHP 7 as there are still a few edge-case bugs, and a number of contrib modules need minor tweaks. We may default Drupal 7 to PHP 7 in the future when we feel it’s safe to do so. For now, we recommend PHP 5.6 for Drupal 7 sites.

Wow, thanks, this is great!

Happy to help! See you in the Cloud…

Larry Garfield Jun 7, 2016
May 19 2016

You’re developing your site on Platform.sh and you love the fact that you get exact copies of your production site for every Git branch that you push.

But now that you think about it, you realize that all those copies used by your development team to implement new features or fixes contain production data (like user emails, user passwords…). And that all the people working on the project will have access to that sensitive data.

So you come up with the idea to write a custom script to automatically sanitize the production data every time you copy the production site or synchronize your development environments. Next you think of a way to automatically run that script. Possibly a custom Jenkins job that you will maintain yourself. But, of course, you will need to update this Jenkins job for every new project you work on. Plus, you will have to figure out the permissions for this script to give proper access to your site.

So Simple

But wait, what if I told you that all this hassle can be handled in a simple deployment hook that Platform.sh provides?

Indeed, with Platform.sh, every push triggers specific hooks where you can interact with either the build phase or the deployment phase of the process.

For example with Drupal, you can use the drush sql-sanitize command to sanitize your database and get rid of sensitive live information.

Also, you need to make sure that the sanitization only runs on the development environments and never on the master environment (you will hate me if that happens):

type: php:7.0
build:
    flavor: drupal
hooks:
    build: |
        # Whatever you want to do during the build phase.
    deploy: |
        cd /app/public
        if [ $PLATFORM_ENVIRONMENT = "master" ]; then
            # Do whatever you want on the production site.
            true
        else
            drush -y sql-sanitize --sanitize-email=user_%[email protected] --sanitize-password=custompassword
        fi
        drush -y updatedb

If you are not working with Drupal, you can even run your own sanitization script. Read more about build and deployment hooks on our public documentation.
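As a hypothetical stand-in for such a custom script, here is a self-contained sketch that scrubs real email addresses out of an exported users CSV; the file layout and column names are invented for the example.

```shell
# Illustrative sanitization: replace real emails in a users export
# (columns: id,email,name) with predictable placeholder addresses.
printf 'id,email,name\n1,alice@example.com,Alice\n2,bob@example.com,Bob\n' \
    > /tmp/users.csv

# Keep the header; rewrite the email column on every data row.
awk -F, 'NR == 1 { print; next } { printf "%s,user_%d@example.com,%s\n", $1, NR - 1, $3 }' \
    /tmp/users.csv > /tmp/users.sanitized.csv

cat /tmp/users.sanitized.csv
```

In a real deploy hook you would run the equivalent against your database instead of a CSV, guarded by the same environment check shown above.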

To access the deploy hook logs on the server:

$ platform ssh
[email protected]:~$ cat /var/log/deploy.log

[2016-05-18 10:14:13.872085] Launching hook 'cd /app/public
if [ $PLATFORM_ENVIRONMENT = "master" ]; then
    # Do whatever you want on the production site.
    true
else
    drush -y sql-sanitize --sanitize-email=user_%[email protected] --sanitize-password=custompassword
fi
drush -y updatedb'

The following operations will be done on the target database:
* Reset passwords and email addresses in users table
* Truncate Drupal's sessions table

Do you really want to sanitize the current database? (y/n): y
No database updates required                                           [success]

That’s it! Sleep well now ;)

Augustin Delaporte May 19, 2016
May 05 2016

Are you joining us at DrupalCon New Orleans next week? It’s going to be a blast!

Those who have attended a DrupalCon before know how intense they can be. For first-timers, a DrupalCon can be overwhelming. The Drupal Community is an amazing and welcoming group of people, almost unnervingly so at times. The energy around a DrupalCon is palpable, but that means it can be a shock to those used to a calmer event.

So how do you get the most out of a DrupalCon? Glad you asked…


Stay hydrated

No, really. DrupalCons are big and you’ll be walking a lot and talking to a lot of people. Have a water bottle on you. A shoulder-sling or belt-clip bottle is best because it’s easier to keep with you, but if your laptop bag or backpack has a bottle holder that works well, too.

Sure, there will be coffee breaks. But the lines may be long, you don’t want to wait in line just to get a drink, and water is healthier for you anyway (which matters, really).

Bring a notebook

It doesn’t have to be paper, of course. A tablet, phone, Chromebook, laptop, or whatever else lets you take notes is fine. You’ll be exposed to a million new ideas this week, and your odds of remembering everything you found really cool (“I need to use that!”, “this will change my life!”) are slim. Write it down! At least write down key terms, phrases, tools, and links to Google later.

Have lunch with strangers

What good is hanging out with a conference of thousands of people if you only talk to the people you know? Take advantage of the general friendliness of the Drupal community to meet new people. Break away from your usual team and talk to someone else’s team. Maybe it’s developers you don’t know. Maybe it’s a vendor you’re considering hiring and want to get to know better. (Yep, we’ll be there!) Maybe it’s the marketing director for another institution like yours. Or all of them at the same table. Spend time with new people and come away with new friends.

One caveat, though: Most Drupal developers are very friendly, but please don’t fawn. Yes, you may be casually chatting with the person who wrote the module that runs your entire business, but they’re still just a (really smart!) person hanging around, learning stuff, and eating lunch. Please treat them as such.

Mix in the Hallway Track

DrupalCon New Orleans has 130 sessions across 13 tracks, with 11 concurrent sessions. That’s a lot of content. Fortunately, it’s also all recorded. DrupalCon has one of the best session recording programs of any conference I’ve been to, so if there are too many simultaneous sessions you really want to attend, worry not! The Drupal Association has you covered. (Unless it’s my session on PHP 7 on Wednesday at 1pm. Then just go to the session.)

So well covered, in fact, that you shouldn’t try to pack a session into every time slot. Take some time to just talk and mingle with people. Get into heated (but polite) debates about technical issues with someone you just met. Stop by the expo hall to chat with the Platform.sh team (and the other sponsors, too). Stop by the Business Showcase sessions in the vendor hall, especially at 2:00 on Wednesday to see Platform.sh’s resident astronaut. :-) Collect swag from the vendor hall. (That’s why you go to a conference, right? All the free swag?)

Pace yourself

There’s so much to do at DrupalCon, including the after-parties, that it’s easy to lose track of time, or have one too many beers with those new friends you just met. Be sure to pace yourself. DrupalCon is a week-long event; don’t spend all your energy on day one. In addition to hydrating, get a good night’s sleep every night. (Note: “Good night’s sleep” is relative. A full 6 hours is generally considered a lot during DrupalCon week.)

Also, eat healthy! Although the conference lunch tries to be reasonably healthy, it’s very easy to fall into the “pizza and beer and beer” trap at the after parties with all of your new friends. Be careful to mix in plenty of protein and vegetables while you’re at it, so that you can stay upright for the next night. You want to be awake and coherent for Thursday night’s Trivia Night.

Come for the sessions, stay for the sprints

DrupalCon doesn’t end with the closing ceremony! Drupal is all about contributing and giving back. That’s how you pay for Open Source. And the best place to do that at DrupalCon is at the Sprints on Friday. You do not need to be an accomplished developer, or any developer, in order to help out. There are sprint areas for coders, for front-end devs, for documentation, for UX testing, for marketing, you name it. If there’s not yet a planned sprint for a topic you’re interested in… guess what, you’re now organizing it. (Hat tip to Cal Evans…)

Not sure what to do or where to start? There’s even a First-time Sprinter Workshop, where people will be on hand to help you get started. Even if that means starting from “So, who’s this Git I keep hearing about?” someone will be able to get you onboarded and on your way.

Go to the Prenote

Most importantly, of course, plan to get up early enough on Tuesday to attend the DrupalCon Prenote. The Prenote is a DrupalCon tradition, and a great way to break the ice, whether you’re a new attendee or a seasoned DrupalCon veteran. Past years have included sketch comedy, superheroes, sacrificing Dries for Christmas dinner, crustaceans, and musical comedy. I can’t give away too much for this year’s plans, but I will leak out… it will definitely sound better in person than on the recording. ;-)

Always start the Con with Dries’ favorite session.

We’ll see you in NOLA!

Larry Garfield May 5, 2016
