Aug 01 2018

This is part two of a two-part series.

In part one, we made the case that regardless of what your organization does and what functionality your website has, it is in your best interest to serve your website securely over HTTPS exclusively. Here we provide some guidance on how to make the move.

How to transition to HTTPS

To fully transition from HTTP to HTTPS means to serve your website's HTML and assets exclusively over HTTPS. This requires a few basic things:

  • A digital certificate from a certificate authority (CA)
  • Proper installation of the certificate on your website’s server
  • Ensuring all assets served from your website are served over HTTPS

Let’s break these down.

Acquire a digital certificate

As briefly discussed in part one, to implement HTTPS for your website you must procure a digital certificate from a certificate authority. Just as domain name registrars lease domain names, CAs lease digital certificates for a set time period. Each certificate has a public and a private component. The public component is freely shared and allows browsers to recognize that a “trusted” CA distributed the certificate. It is also used to encrypt data transmitted from the browser. The private component is shared only with the purchaser of the certificate, and can uniquely decrypt data encrypted by the public key. CAs use various methods, through email or DNS, to “prove” that the person who purchased the certificate is a rightful administrator of the domain for which they purchased it. Once you’ve received the private key associated with the certificate, you’ll need to install it on your server. Annual certificate costs can be as much as $1,000 or as little as nothing. More on that in a moment.
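If you go the traditional purchase route, the CA will typically ask you to submit a certificate signing request (CSR) generated alongside your private key. A minimal sketch with OpenSSL, using a placeholder domain (your CA's instructions may differ):

# Generate a private key; keep this secret, it never leaves your server
openssl genrsa -out example.com.key 2048

# Generate a CSR for that key to submit to the CA
openssl req -new -key example.com.key -out example.com.csr \
  -subj "/C=US/O=Example Org/CN=example.com"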

Install the certificate


Installing an HTTPS certificate manually on a server is not a trivial engineering task; we explain this at a high level in part one. It requires expertise from someone who is experienced and comfortable administering servers. There are many different installation methods unique to each permutation of server software and hosting platform, so I won’t expend much real estate here attempting to lay them all out. If you have to do a manual installation, it’s best to search your hosting provider’s documentation. However, depending on the complexity of your website architecture, there are ways to ease this process. Some hosting platforms have tools that substantially simplify the installation process. More on that in a moment as well.
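That said, to give a flavor of what a manual installation involves, here is a minimal nginx sketch; the paths and domain are placeholders, and the equivalent for Apache or other server software will differ:

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths; your CA and host determine the real locations
    ssl_certificate     /etc/ssl/certs/example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}

# Redirect all plain-HTTP traffic to its HTTPS equivalent
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}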

Serve all resources over HTTPS: avoid mixed content

Once you’ve installed your certificate, it’s time to ensure all assets served from your pages are served over HTTPS. Naturally, this entire process should be completed in a staging environment before making the switch to your production environment. Completing a full transition to HTTPS requires attention to detail and diligence. “Mixed content” (a page served over HTTPS that loads some of its assets over plain HTTP) can be tedious and insidious to rectify. The longer your site has been around, the more content there is, and the more historic hands have been in the pot of content creation, the more work there will be to make the switch. Depending on your platform (CMS or otherwise) and how it was designed, there may be many avenues by which different stakeholders have included assets within a page over time. Developers, site admins, and content editors usually all have the ability to add assets to a page. If any asset URLs start with http://, they’ll need to be updated to https:// to prevent mixed content warnings.

We recently helped a client who had been publishing articles at a high cadence for over 10 years with many different stakeholders over that period. Practices weren’t consistent, and uncovering all the ways in which HTTP resources were served from the page was a substantial undertaking. Therefore, be prepared for a time investment here; there may be many areas to audit to ensure all assets from all your pages are being served over HTTPS. Some common ways mixed content HTTP assets are served from a site whose HTML is served over HTTPS are listed below, followed by a sketch of how to begin auditing for them:

  • Hard-coding a resource: e.g. http://www.example.com/img/insecure-image.jpg
  • Using a 3rd-party library or ad network reference: http://www.example.com/js/analytics.js
    • This is common for libraries that haven’t been updated in a while. Almost all of them now serve the same assets over the same paths securely with HTTPS.
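As a starting point for an audit, simply searching your codebase and content for hard-coded HTTP references surfaces most offenders. A sketch, with the second command assuming a Drupal 8 site with a standard body field (table and column names vary by site):

# Search templates and static assets for hard-coded HTTP references
grep -rn 'src="http://' --include='*.html' --include='*.twig' --include='*.js' .

# Surface content entities embedding HTTP assets in a Drupal 8 database
drush sql-query "SELECT entity_id FROM node__body WHERE body_value LIKE '%src=\"http://%'"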

Even practices that were previously encouraged by Paul Irish, a leading web architect for the Google Chrome team, such as protocol-relative URLs, may have contributed to your mixed content problem, so don’t feel bad. Just know that there will likely be work to be done.

The risk of mixed content

These “not secure” bits of mixed content expose the same risk that your HTML does when served over HTTP, so browsers rightfully show the user that the experience on your site is “not secure”.

Mixed content is categorized in two ways: active and passive. A passive asset, such as an image, doesn’t interact with the page; it is merely presented on the page. An active asset, such as a JavaScript file or stylesheet, exists to manipulate the page. Passive mixed content, albeit categorically less severe than active, still exposes opportunities for an attacker to learn a lot about an individual’s browsing patterns and even trick them into taking actions they didn’t intend to take on the page. Therefore passive mixed content still constitutes enough of a threat for the browser to issue a warning.
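As a stopgap while you track down offending references, modern browsers honor a standard Content-Security-Policy directive that transparently upgrades a page's HTTP subresource requests to HTTPS. A sketch, expressed here as an nginx directive (the header itself works the same on any server):

# Ask browsers to upgrade embedded http:// subresources to https:// on the fly
add_header Content-Security-Policy "upgrade-insecure-requests";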

[Image: mixed content browser warnings]

In the case of a compromised active asset such as a JavaScript file, an attacker can take full control of the page and any interaction you have with it, like entering passwords or credit card information. The mechanism behind this is a somewhat sophisticated man-in-the-middle attack, but suffice it to say, if the browser recognizes the vulnerability, the best scenario is the poor user experience we discussed in part one; the worst is total data compromise. Your audience, and by association your organization, will be seeing red.
[Image: Chrome warning for misconfigured HTTPS]

The good news about moving to HTTPS

Ensuring your website serves assets exclusively over HTTPS is not as hard as it used to be, and is getting easier by the day.

There are free digital certificates

There’s no such thing as a free lunch, but a free certificate from a reputable CA? It would seem so. People are just giving them away these days… Seriously: years in the making since its founding by two Mozilla employees in 2012, the Let’s Encrypt project has vowed to make the web a secure space and has successfully endeavored to become a trusted CA that literally does not charge for the certificates it provides. Its certificates are valid for a shorter period of 90 days (with renewal recommended after 60), but the project also offers tooling that automates renewing and reinstalling newly provided certificates.
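In practice, most Let’s Encrypt installations use the certbot client, which can obtain, install, and renew certificates in a couple of commands. A sketch, assuming an nginx server, sudo access, and a placeholder domain:

# Obtain a certificate and update the nginx configuration automatically
sudo certbot --nginx -d example.com -d www.example.com

# Confirm that automated renewal will succeed before certificates expire
sudo certbot renew --dry-run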

There are easier and cheaper ways to install certificates

With the advent of the aforementioned free certificates, many platform-as-a-service (PaaS) hosting options have incorporated low-cost or free installation of certificates into their hosting platforms, sometimes as easily as a few clicks. Let’s Encrypt has been adopted across a broad range of website hosting providers such as Squarespace, GitHub Pages, and DreamHost, all of which we use alongside many others.

For many of our Drupal partners, we prefer to use a PaaS hosting option like Pantheon, Acquia, or Platform.sh. Both Pantheon and Platform.sh now provide a free HTTPS upgrade for all hosting plans; Acquia Cloud, another popular Drupal PaaS, is still a bit behind in this regard. We have found that the efficiency gains of spending less time on server administration translate to more value for our clients, empowering additional effort for the strategy, design, and development for which they hired us. In addition to efficiency, the reliability and consistency provided by finely tuned PaaS offerings are, in most cases, superior to manual installation.

A good example of hosting platforms maturing into the HTTPS-everywhere world is our own Jekyll-based site, which we’ve written about and presented on before. We first set up HTTPS for GitHub Pages using Cloudflare, guided by this tutorial, since we found it necessary to serve our site over HTTPS. However, about a year later GitHub announced they would provide HTTPS support for GitHub Pages directly.

Similarly, we had previously implemented Pantheon’s workaround to make HTTPS on all of their tiers accessible to our clients on their hosting platform. Then they announced HTTPS for all sites. We’re thankful both have gotten easier.

There are tools to help with the transition to HTTPS

Through its powerful Lighthouse suite, Google has a tool to help audit and fix mixed content issues. Given the aforementioned tedium and potential difficulty of tracking down all the ways in which people have historically added content to your site, this can be an invaluable time saver.
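Lighthouse is available in Chrome’s DevTools, and also as a command-line tool if you want to script audits, say against a staging environment (URL is a placeholder):

# Requires Node.js; audits the page and opens an HTML report when finished
npx lighthouse https://staging.example.com --view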

You can also use tools like Qualys SSL Labs to verify the quality of your HTTPS installation. See how our site stacks up.
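A quick sanity check you can run yourself is confirming that plain-HTTP requests are redirected to their HTTPS equivalents (domain is a placeholder):

# Expect a 301 response with a Location header pointing at https://
curl -sI http://example.com | head -n 5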

Wrap-up

Given the much greater ease with which many modern hosting platforms enable HTTPS, the biggest remaining barrier, primarily a matter of effort, is cleaning up your content to make sure all assets are served over HTTPS from all pages within your website. So, if you haven’t already, start the transition now! Contact us by phone or email if you need assistance, and feel free to comment below.

Jun 26 2018

This is part one of a two-part series on transitioning to HTTPS

For some time, major internet players have advocated for a ubiquitous, secure internet, touting the myriad benefits for all users and service providers of “HTTPS everywhere”. The most prominent and steadfast among them is Google. In the next week, continuing a multi-year effort to shepherd more traffic to the secure web, Google will make perhaps its boldest move to date, one which will negatively impact all organizations not serving their websites securely over HTTPS.

To quote the official Google Security Blog:

Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”

[Image: Chrome “not secure” label for HTTP sites (source: Google blog)]

Given the ambiguous “in July 2018”, with no clearly communicated release date for Chrome 68, it’s wise to err on the side of caution and assume it will roll out on the 1st. We have readied our partners with this expectation.

So what does this mean for your organization if your site is not served over HTTPS? In short, it’s time to make the move. Let’s dig in.

What is HTTPS?

HTTP, or HyperText Transfer Protocol, is the internet technology used for communication between your web browser and the servers that host the websites you visit. HTTPS is the secure version (the “s” stands for secure), which is served over TLS: Transport Layer Security. What these technical acronyms equate to are tools for internet communication that verify you’re communicating with who you think you are, in the way you intended to, in a format that only the intended recipient can understand. We’ll touch on the specifics in a moment and why they’re important. Put simply, HTTPS enables secure internet communication.
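You can see these pieces at work from the command line; curl’s verbose output reports the TLS version negotiated and the certificate the server presented (domain is a placeholder):

# Prints lines like "SSL connection using TLSv1.2" plus certificate details
curl -vI https://example.com 2>&1 | grep -E 'TLS|subject:|issuer:|expire'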

Why secure browsing matters

Setting aside the technical details for a moment and taking a broader view than communication protocols reveals more nuanced benefits your organization receives by communicating securely with its audience.

HTTPS improves SEO

Since Google accounts for 75-90% of global search queries (depending on the source), SEO is understandably often synonymous with optimizing for Google. Given Google’s market domination, competitors take their cues from it, and in most cases it’s safe to assume that what’s good for SEO in Google is good for competing search engines as well.

In the summer of 2014, Google announced on their blog that they would begin to rank sites that used HTTPS more favorably than those on HTTP. It’s already been nearly four years since we’ve known HTTPS to be advantageous for SEO. Since then, Google has consistently advocated the concept of HTTPS ubiquity, frequently writing about it in blog posts and speaking about it at conferences. The extent to which serving your site over HTTPS improves your SEO is not cut and dried and can vary slightly by industry. However, the trend toward favoring HTTPS is well under way, and the scales are tipped irreversibly at this point.

HTTPS improves credibility and UX

Once a user has arrived at your site, their perceptions may be largely shaped by whether the site is served over HTTP or HTTPS. The user experience when interacting with a site served over HTTPS is demonstrably better. SEMrush summarizes well what the data clearly indicate: people care a great deal about security on the web. A couple of highlights:

You never get a second chance to make a first impression.

When engaging a member of your target audience, you have precious few moments to instill a sense of credibility with them. This is certainly true of the first time a user interacts with your site, but it is also true for returning users. You have to earn your reputation every day, and it can be lost quickly. We know credibility decisions are highly influenced by design choices and are made in well under one second. Combine these two insights with the visual updates Chrome is making to highlight the security of a user’s connection to your site, and drawing the user’s attention to a warning in the URL bar translates to a potentially costly loss in credibility. Unfortunately, it’s the sort of thing users won’t notice unless there’s a problem, and per the referenced cliché, at that point it may be too late.

Browsers drawing attention to insecure HTTP

Much like search, browser usage patterns have evolved over the last five years to heavily favor Google Chrome; what Google does therefore carries tremendous weight internet-wide. Current estimates of browser usage put Chrome between 55% and 60% of the market (again, depending on the source). Firefox has followed suit with Chrome as far as HTTP security alerts go, and there’s no indication we should expect this to change. So it’s safe to assume these changes affect a combined 60-75% of the market.

Google Chrome HTTP warning roll out

Google (with Firefox closely mirroring behind) has been getting more stringent in its display of the security implications of sites served over HTTP (in addition to sites misconfigured over HTTPS). They’ve shared details of the six-step roll-out on their general blog, as well as at a more technical, granular level on the Chrome browser blog.

In January 2017, they began marking any site served over HTTP that collects a password or credit card information as subtly (grey text) not secure.

[Image: Chrome “not secure” message for HTTP (source: Laravel News)]

Then, in October 2017, they tightened things up so that a site collecting any form information over HTTP would have the same “not secure” messaging. They also added a more action-based aspect: showing the warning in the URL bar the moment a user enters data into a form. This is an especially obtrusive experience on mobile due to space constraints, and it engages the user more deeply as to exactly what is unsafe about how they’re interacting with the site.

[Image: Chrome “not secure” warning on form input (source: Google blog)]

Next, in July 2018, all HTTP sites will be marked as not secure.

In September 2018, secure sites will be marked more neutrally, removing the green “secure” lock by default, connoting a continuing expectation that HTTPS is the norm and no longer special.

[Image: Chrome’s neutral treatment of HTTPS sites (source: Google blog)]

In October 2018, any HTTP site that accepts any form fields will be affirmatively marked not secure with a bold red label, much like a misconfigured HTTPS site is now.

[Image: Chrome red “not secure” warning (source: Google blog)]

Though they haven’t yet announced a date, Google intends to mark all HTTP sites affirmatively not secure. The drive is clearly to establish the norm that all web traffic should be served over HTTPS and that outdated HTTP is not to be trusted. This is a pretty strong message that if Google has their way (which they usually do), HTTPS will inevitably be virtually mandatory. And inevitable, in internet years, may be right around the corner.

HTTPS vastly improves security for you and your users

Returning to the technical, as mentioned previously, HTTPS helps secure communication in three basic ways.

  • Authentication: “you’re communicating with who you think you are”
  • Data integrity: “in the way you intended to”
  • Encryption: “in a format that only the intended recipient can understand”

What authentication does for you

In order for the browser to recognize and evaluate an HTTPS certificate, it must be verified by a trusted certificate authority (CA). There are a limited number of CAs entrusted to distribute HTTPS certificates. Through public-key cryptography (a fairly complex but interesting topic) and inherent trust in the CA that provided the HTTPS certificate for a given site, the browser can verify that a site visitor is positively communicating with the expected entity, with no way for anyone else to pose as that entity. No such verification is possible over HTTP, and it’s fairly simple to imagine what identity theft would be possible if you were communicating with a different website than you appeared to be. In the event any of the major browsers cannot validate the expected certificate, they will show a strong, usually red, warning that you may not be communicating with the expected website, and strongly encourage you to reconsider interacting at all.
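You can inspect the certificate a server presents, much as the browser does during verification, with OpenSSL (domain is a placeholder):

# Print who the certificate identifies, who vouches for it, and its validity window
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates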

[Image: Chrome warning for a certificate that cannot be validated]

Therefore, authentication gives your users confidence that you are who you say you are, which is important whenever you’re engaging with them, whether they’re providing an email address, entering a credit card, or simply reading articles.

How data integrity helps you

Ensuring perfect preservation of communication over the internet is another guarantee HTTPS provides. When a user communicates with a website over HTTPS, the browser takes the input of that communication and, using a one-way hash function, creates a unique “message digest”: a concise, alphanumeric string. The digest can only be reliably recreated by running the exact same input through the same hash algorithm, irrespective of where and when this is done. For each request the user makes to the website, the browser passes a message digest alongside it, and the server then runs the input it receives from the request through the hash algorithm to verify that it matches the browser-sent digest. Since it is computationally infeasible to reverse engineer these hash functions, if the digests match, it proves the message was not altered in transit. Again, no such data integrity preservation is possible over HTTP, so there is no way to tell whether a message has been altered en route from the browser to the server.
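The deterministic, one-way nature of hash functions is easy to demonstrate from the command line. An illustrative sketch with SHA-256 (TLS’s actual integrity mechanisms are more involved, but the principle is the same):

# Identical input yields an identical digest, wherever and whenever it is computed
echo -n 'transfer $100 to account 42' | openssl dgst -sha256

# Changing a single character produces a completely unrelated digest
echo -n 'transfer $900 to account 42' | openssl dgst -sha256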

What encryption does for you

Communicating over an unencrypted HTTP connection allows for some easily exploitable security risks in the case of authentication to a site. To demonstrate how easy it can be to take over someone’s account on an HTTP connection, a tool called Firesheep was developed and openly released in mid-2010. Major social media platforms Facebook and Twitter were both susceptible to this exploit for some time after Firesheep was released. The identity theft is carried out through a means called session hijacking. With Firesheep installed, a few clicks could log you in as another user who was browsing nearby over WiFi on any HTTP website. This form of session hijacking is possible when the authentication cookies, small identifying pieces of information that live in your browser while you’re logged into a site, are transmitted to the server on each request over HTTP. Over WiFi these messages are broadcast into the air in plain text and can be picked up by anyone listening. HTTPS prevents this since the communication is encrypted and unintelligible to eavesdroppers.

In the example of a CMS like Drupal, or any other system with a login, if an administrator with elevated site permissions is logged in over HTTP, they’re subject to the same risk if that traffic is monitored or “sniffed” at any point along its path from the browser to the server. This is especially easy over WiFi but is not limited to WiFi. The cookies are sent to the server upon every request, regardless of whether the user entered their password during the active session. Depending on the admin’s privileges, this access can be easily escalated to complete control of the website. Encryption is a big deal.
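Once you are serving over HTTPS, session cookies should also carry the Secure flag so browsers refuse to ever send them over plain HTTP. A quick way to eyeball this (the login path assumes a Drupal site, and whether a cookie is set on this particular response depends on the application):

# Look for "Secure" (and ideally "HttpOnly") on any session cookie returned
curl -sI https://example.com/user/login | grep -i 'set-cookie'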

HTTPS is required for the modern web

One of the more promising developments of the last few years is the pervasiveness and effectiveness of Progressive Web Apps (PWAs). PWA is the name coined for a set of technologies that provides a feature set for mobile browsing akin to native applications, yet is served entirely through the web browser. PWAs require all communication to be done over HTTPS. Some of the possibilities with PWAs that were previously relegated to native applications only are:

  • Providing content and services based on the user’s location data
  • Providing interaction with the user’s camera and microphone within the browsing experience
  • Sending push notifications
  • Serving off-line content

If you aren’t taking advantage of the features PWAs make possible, they’re something your organization should strongly consider to further engage users. Even before the ambition of feature parity with native applications is fully borne out, PWAs will continue to evolve the power of layering deeper engagement with users on top of your existing mobile experience with minimal effort. PWAs simply do not work over HTTP. HTTPS is required to open the door to their possibilities.

Barriers to HTTPS have been lifted

Historically, considering a move to HTTPS has been held back by some valid concerns for webmasters whose job it was to select where and how their websites were hosted. A few of the fundamental apprehensions could be categorized as:

  • No perceived benefit. People often assumed if they weren’t collecting financial or personal information, it wasn’t necessary. We’ve covered why holding this belief in 2018 is a misconception. Savas Labs made the move in July 2017 to serve exclusively over HTTPS for our statically-generated Jekyll website even though at the time we had no forms or logins.
  • Performance costs. We know reducing latency is crucial for optimizing conversions, and HTTPS does require additional communication and computation. However, with the broad adoption of the HTTP/2 protocol over the last few years (browsers only support HTTP/2 over secure connections), HTTPS now usually outperforms HTTP; a quick check follows this list.
  • Financial costs. HTTPS was too complex and costly to implement for some. Large strides have been made across many hosting providers who now bundle HTTPS into their hosting offerings by default, often at no additional cost. Let’s Encrypt, a relatively new and novel certificate authority, first began offering free certificates (which they still do) and then made it easy to automatically renew those certificates, helping to ease the burden and cost of implementation.
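On the performance point above, it’s easy to confirm which protocol version your site negotiates (domain is a placeholder; requires a reasonably recent curl built with HTTP/2 support):

# Prints "2" when HTTP/2 is negotiated
curl -sI -o /dev/null -w '%{http_version}\n' https://example.com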

We cover each of these in more detail in part two, which will help guide you on how to make the move to HTTPS.

Conclusion

To revisit Google’s announcement:

Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

Interpreting that and providing our perspective:

You’re not part of the modern web unless you’re exclusively using HTTPS.

A bold, if slightly controversial, statement, but for ambitious organizations like the folks we’re fortunate enough to work with each day, HTTPS-only is the standard in mid-2018 and beyond. Given the benefits, the barriers that have been lifted, and the opportunity for the future, very few organizations have a good reason not to serve their sites exclusively over HTTPS.

Have we convinced you yet? Great! Read part two for some assistance on how to make the move.


Mar 25 2018

Updating Drupal 8 core with Composer has proven to be a problematic process for many developers. For some, this is nearly as upsetting as the fact that the Composer logo is actually a conductor, and some have abandoned the platform entirely, opting to stick with Drupal 7.

The process isn’t always as simple as running composer update drupal/core and going about your day — the update from 8.3 to 8.4 was notoriously difficult and I recently experienced an issue while updating from 8.4.5 to 8.5.0. In this article, I’ve provided instructions for updating D8 core with Composer, plus some tips for dealing with common issues.

This is especially important now as we await a highly critical security update to all versions of Drupal, to be released on Wednesday, March 28, 2018. This level of security update is quite rare, but the update needs to be implemented on all sites as soon as possible after its release.

As the PSA linked to above notes, the Drupal Security Team will be providing security releases for unsupported minor versions 8.3.x and 8.4.x due to the issues many have encountered when updating from these versions. If you’re still on one of those versions, the update may be more straightforward if you stick with the release for that minor version.

General instructions for updating core

First, let’s cover the steps needed to update Drupal 8 core with Composer.

  1. To update the core package, run:

     composer update drupal/core --with-dependencies -v

    It’s recommended to run the update command with the --with-dependencies flag to update any of Drupal core’s dependencies as well.

  2. To capture any included database updates, run drush updb -y.
  3. To capture any included configuration changes, run drush config-export -y and commit the changes.

All three of these steps are necessary whenever the core package is updated.

Dealing with errors

Core version doesn’t update

If you run the composer update command but core isn’t updating, edit your composer.json file to include the specific version of core you want, e.g. ^8.5. Then, run the composer update command again.
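For example (the constraint here is illustrative; target the release you actually want):

# Set the new constraint in composer.json without installing, then update
composer require 'drupal/core:^8.5' --no-update
composer update drupal/core --with-dependencies -v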

Composer command outputs errors

Composer may not be able to resolve all of the dependencies of core and will output an error like this:

Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - Conclusion: don't install drupal/core 8.5.0
    - Conclusion: don't install drupal/core 8.5.0-rc1
    - Conclusion: don't install drupal/core 8.5.0-beta1
    - Conclusion: don't install drupal/core 8.5.0-alpha1
    - Conclusion: don't install drupal/core 8.6.x-dev
    - Conclusion: remove symfony/config v3.2.9
    - Installation request for drupal/core ^8.5 -> satisfiable by drupal/core[8.5.0, 8.5.0-alpha1, 8.5.0-beta1, 8.5.0-rc1, 8.5.x-dev, 8.6.x-dev].
    - Conclusion: don't install symfony/config v3.2.9
    - drupal/core 8.5.x-dev requires symfony/dependency-injection ~3.4.0 -> satisfiable by symfony/dependency-injection[3.4.x-dev, v3.4.0, v3.4.0-BETA1, v3.4.0-BETA2, v3.4.0-BETA3, v3.4.0-BETA4, v3.4.0-RC1, v3.4.0-RC2, v3.4.1, v3.4.2, v3.4.3, v3.4.4, v3.4.5, v3.4.6].
    - symfony/dependency-injection 3.4.x-dev conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA3 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA4 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-RC1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-RC2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.3 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.4 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.5 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.6 conflicts with symfony/config[v3.2.9].
    - Installation request for symfony/config (locked at v3.2.9) -> satisfiable by symfony/config[v3.2.9].

This happens when one of Drupal’s dependencies is updated and the new version requires an updated version of another package. To resolve this, include the dependency package causing the issue in the composer update command. The --with-dependencies flag will ensure that the dependency’s dependencies are also updated. To fix the error above, I ran:

composer update drupal/core symfony/config --with-dependencies -v

You’re not alone

If you continue to run into problems, the best advice I can give you is to search for the specific update you’re trying to make. Every time I’ve had an issue I’ve been able to find discussions online regarding that specific update and potential resolutions.

In fact, when I got the error above while trying to update to 8.5.0, I found this helpful article by drupal.org user eiriksm and was able to resolve the issue. Check out the article and its comments for more discussion on how to deal with Composer issues when updating Drupal 8 core.

Nov 08 2017

This is part two of a two-part series.

In part one, we discussed how Drupal 8’s adoption in its first two years was a bit lackluster compared to what many expected. Grounded in a better understanding of the shortcomings of the past two years, we’ll try to equip those of you considering Drupal 8 with the information you need to make the best decision for your organization as you continue to invest in the powerful framework of Drupal.

[Image: Drupal 8 adoption curve (credit: Angie Byron, again, in “Everything you need to know about the top 8 changes in Drupal 8” from May 2015)]

Drupal 8 adoption is certainly no longer in the “early adopter” phase, yet we still haven’t entered the “majority” phase. For most organizations not yet powered by Drupal 8, our stance is: it’s probably time to upgrade. The value you’re missing out on with the newer software is real. Perhaps less obviously, if you’re investing in your Drupal 7 site beyond passive maintenance, you may well be doubling your long-term costs by deferring and exacerbating what will need to be refactored later. For organizations who have web staff, work with an agency, or do any non-trivial customization to their Drupal website, this applies to you.

A disclaimer upfront

Drupal agencies like ours benefit from upgrades in the short term because they are usually a substantial undertaking. This fact, in part, is why over the past two years people have more often written to encourage an upgrade to Drupal 8 rather than offering a more holistic and measured perspective. A small dose of healthy skepticism typically serves site owners best. If Savas Labs is to live up to its values, we must factor in the needs of two other stakeholder groups when advising on an upgrade: our clients, and the collective Drupal community. Given the substantial effort to upgrade, if we focus solely on the short term, we do our clients a disservice. In doing that, the next time those site owners and admins have the option to select a tool to power their web systems, they may look elsewhere, remembering the pain and disappointment of their recent experience. This ripple effect has the potential to create many former Drupal users. Imbued with the open source ethos, we believe we owe it to the broader Drupal community, from which we’ve gained so much, to consult with honesty and integrity.

What you’re missing out on

As we discussed in part one, we lived through the challenges of the complete re-architecture of the Drupal application from 7 to 8.

Angie Byron, the person I apparently can’t stop referencing, said in 2013:

For people who grew up learning PHP on Drupal, and there are a lot of people for whom that’s true, I think Drupal 8 will be kind of a big adjustment for them.

Though it wasn’t easy, at Savas Labs we feel strongly that it was a wise investment that’s just beginning to pay off. At this point in its maturation, we believe now (as other Drupal leaders have felt for some time) that Drupal 8 is superior to previous versions in nearly all use cases for which organizations currently use Drupal. Some argue that Drupal 8 has become too complex and left smaller sites behind, but it’s important to consider their incentives for a well-rounded perspective. Via Acquia, Pantheon, and other hosting providers, you can serve up a Drupal 8 website within minutes, equipped with more features and a superior user experience compared to previous versions, on a free tier to boot! While catering better to those not writing code, Drupal 8 simultaneously gives the engineers who have always pushed Drupal to its limits more power to build sophisticated tools and integrations that can do more for their clients than ever before.

In exploring this deeper, let’s start with the technical, and dig into the more nuanced to answer the Drupal 8 question: “What’s in it for me?” (WIIFM?), for you.

WIIFM? Features.

It’s fairly easy to find information touting Drupal 8’s strengths around the web, and it’s pretty straightforward that software we write today (and have been writing for 4 years) is superior to software written 8.5 years ago (or 10.5 years ago with Drupal 6). Let’s look briefly at some high-impact improved features for site owners and admins.

Design/UX/Usability Improvements with Drupal 8

In developing Drupal 8, perhaps for the first time, the Drupal leadership took user experience work seriously and developed a cohesive strategy to improve UX for Drupal 8. The results paid off.

  1. Responsive out of the box: Given that Drupal 8’s release came long after responsive web design became popular enough to garner its own acronym, naturally, all themes (administrative and otherwise) were developed to be responsive. RWD has been a must for years, but it took heavy lifting to achieve in Drupal 7.
  2. Better content authoring: Drupal 8 has adopted a more WordPress-like UX for editors; for many years this had been cited as a distinction between the two, rightfully favoring WordPress. Content authoring layout improvements coupled with responsiveness have made administration from a phone a pleasant experience.
  3. Accessibility at lower cost: Accessibility efforts, though not prioritized by all, continue to gain traction as we continue to expand our ability to be inclusive. We’ve seen clients threatened with lawsuits over not adhering to accessibility standards. Whether motivated by benevolence or risk-aversion, accessibility should be on your radar, and it’s easier in Drupal 8.
  4. Multilingual in core: With a cohesive system now in core, we have been able to build a couple of multilingual sites with relative ease, not having to dedicate substantial additional time to the translation component.

Drupal 8 multilingual is a world of difference. What would take 22 or more modules in Drupal 7 you would do with 4 (and all in core). - @kristen_pol

RESTful possibilities

One of the developmental focuses of Drupal 8 that we believe has tremendous impact on how organizations can maximize the value of their content is the API-first Initiative. We will likely write an entire post about this in the future, but in short, the initiative makes Drupal 8 much better equipped to serve as a central content repository that can expose content to many types of devices in the formats they require for display. Historically, Drupal has been almost exclusively focused on producing HTML (one format) for a web browser (the device/software). Drupal 8 now treats Roku, iOS and Android apps, video game systems, and the web browser all as first-class citizens for content consumption. As the number and variety of devices that connect to the web continues to grow rapidly, Drupal 8 can serve as a powerful hub that provides relevant content and experiences to end users. You’d be remiss to snooze on this one. To get an idea of the possibilities, check out Contenta CMS, a Drupal 8 distribution built by some of the people behind the initiative.
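To make this concrete: with a module like the contributed JSON API enabled, any device can request content in a structured format, for example (domain and content type are placeholders):

# Fetch article nodes as JSON for consumption by apps, set-top boxes, kiosks, etc.
curl -H 'Accept: application/vnd.api+json' https://example.com/jsonapi/node/article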

WIIFM? Performance.

If you take performance seriously, which you should, there’s a lot to like about Drupal 8. Sticking with the theme of sophistication, Drupal 8 provides a much more granular ability to cache specific components than its predecessors. And as we know in the high-performance web world, cache is king. When Drupal 8 first came out, a leading Acquia engineer showed some mixed results on Drupal 8 performance. The heavier codebase invariably means having to swim upstream to make it outperform the lighter codebase in Drupal 7, but I’m happy to say the architects had their flippers on when working through these challenges. Take these two fundamental points:

  1. Regardless of how fast the underlying code executes, what matters to users is perceived performance, i.e. how long they have to wait to interact with the page. Perceived delay has been drastically reduced by an experimental-turned-core module (more on that later) called BigPipe. BigPipe loads components of a page in the order in which a user is expected to interact with them while delivering more expensive components as they’re available. This breaks with the tradition of all-or-nothing webpages served by Drupal that were either in the cache or not, lending itself to a Facebook-like experience, which is where BigPipe came from.

    [Animation: BigPipe progressively loading a Drupal 8 page; a slower video is also available]

  2. Modern performance tactics derive the largest gains from outside the application, leveraging services like a Content Delivery Network (CDN) and/or a web application accelerator, like Varnish, to serve up resources to anonymous traffic (users not signed in) as quickly as possible. For most sites, anonymous traffic comprises a majority of overall traffic. Traditionally, there have been limitations to improving performance for authenticated traffic, and that’s where Drupal 8 shines. With BigPipe and a more granular caching system, Drupal 8 can substantially outperform Drupal 7’s authenticated user experience, so it’s a win-win.

If you’re made of time today, check out our other articles we’ve written about performance for a deeper dive into this broad and complex topic.

WIIFM? People.

We know that behind any powerful movement are powerful people. To quote Dries, as I did in my Drupalcon talk in New Orleans:

fostering the Drupal community is actually more important than just managing the code base.

Also, the top of the Drupal.org homepage used to read:

Come for the code, stay for the community.

Without needing to resonate with all the warm and fuzzies that many within the community do, these sentiments show the richness and value of the Drupal community. And that rich community, not out of neglect, but rather necessity, has moved on from Drupal 7. Top designers, developers, and strategists are working on few Drupal 7 projects these days, and most would prefer to move on. For those who work with web designers and engineers (or used to be one, like me), you know that they often have an insatiable appetite for learning, and want to feed it with tools increasingly relevant to their growth and output. Sticking to dated software is an effective way to weed out the best and brightest.

The Improved Developer Experience (DX) of Drupal 8

Just like happy customers tend to be repeat customers, happier developers also produce returns; they’re more productive.

There are a few improvements in Drupal 8 that make life substantially better for developers. The Configuration Management Initiative (CMI) was a boon to developers who struggled with a module called “features”, which was not designed to do what most of us used it for. CMI replaces the previous workaround, rife with inconsistencies, for moving site configuration from development to staging and production environments. Although it may seem trivial, developers love this better system in Drupal 8, and it means more efficient development and therefore higher ROI.
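The day-to-day workflow is pleasantly simple: configuration is exported to files that can be committed to version control and imported elsewhere. A typical sketch with Drush:

# On the development environment: export active configuration to files
drush config-export -y

# After deploying those files: import them on staging or production
drush config-import -y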

Proudly found elsewhere / not invented here / getting off the island

A primary Drupal 8 philosophy that has largely been successful, but has yet to fully bear fruit, is the drive to drastically reduce “Drupalisms”, which had proven a challenge for newcomers who had to learn a suite of things specific to Drupal alone. The proudly-found-elsewhere paradigm seeks to mitigate this by leveraging the best of other open source tools when possible rather than reinventing the wheel. A few of the tools Drupal 8 now exploits are Symfony components, the Twig templating engine, and the Composer dependency manager. This “borrowing” has two positive consequences: 1) it reduces the workload for Drupal core contributors by utilizing what’s freely available and well vetted through other communities, and 2) it allows people familiar with those other frameworks a smoother on-ramp to productivity in Drupal. I believe the Drupal 8 project hasn’t yet seen the majority of the benefit from the many people who were already versed in Symfony and Twig before working with Drupal.

To quote Angie Byron for the thousandth time (full video here):

For people who are classically trained or have experience in other languages, Drupal 8 is going to make a lot more sense to them than Drupal 7 did. We’re just falling more in line with what the larger people are doing… within the broader PHP community.

WIIFM? Cost savings.

The active decision to upgrade or passive indecision to wait both have cost implications. Perhaps this is the most useful section for readers whose primary responsibilities aren’t technical.

Continuing to invest in Drupal 7 (or earlier) can be costly in ways that may not be abundantly apparent on the surface. For most organizations who work with an agency, custom development is where the brunt of the effort is spent, and it is therefore the primary cost driver. “Custom development” occurs when the functionality a client requests is either not freely available on the open-source market, or the agency is unaware of its existence, and a developer writes code for the specific use case to “extend” the out-of-the-box functionality. The 80-20 rule applies well to software development in Drupal: roughly 20% of the functionality a client requests accounts for 80% of the effort of a project, since that 20% must be built from scratch. When site owners request various functionality, it can be difficult for them to differentiate what constitutes custom development versus what is freely available from the contributed community. Given the high effort of customization and the related technical debt accumulated, site owners should request a high degree of transparency about what requires custom development when establishing project budgets. This way, the organization can do a cost/benefit analysis on a granular, per-feature basis. The goal for developers should always be to start by exploring what already-made wheels are out there for the turning before crafting their own. Be wary of alternative thinking. Yet, as extensible and rich as the Drupal community is, nearly all of our engagements require customization.

Easy Drupal upgrades forever

[Image: a happy Drupal sunrise (from Dries’s blog post)]

To the surprise of the community, in an abrupt departure from business as usual in early 2017, Dries committed to “easy upgrades forever”, starting with Drupal 8 of course. The short of it is that Drupal 8 to 9 upgrades should be far easier (and less expensive) than any previous major version upgrade, and so should every upgrade from here on in. That means for those not yet on Drupal 8, you have only one final difficult upgrade left in your Drupal journey until the end of time.

This is a fairly natural outcome given the possibilities afforded by a more structured, object-oriented architecture coupled with the growing desire to ease upgrade pain that has been building for some time. Although difficult technical work is needed to flesh out exactly how this will be done, the commitment from the top is worth putting stock in and the community is on the way to making this grand proclamation a reality. When upgrades are far easier, they will help rectify some of the sentiment of leaving the smaller sites behind, since major version upgrades will be a much less daunting task with Drupal 8 and beyond.

Drupal 6 or 7 custom development is especially expensive

However, perhaps the most important point is that you may be doubling your efforts for the final time if you’re doing custom development on Drupal 6 or 7, since it will invariably need to be rewritten to work on Drupal 8 at the same 80-20 rate we mentioned earlier. Given the commitment to easy upgrades and guidelines for backwards compatibility, it’s quite likely that custom code written for Drupal 8 will be highly portable to Drupal 9 and 10 and won’t require an arduous rewrite.

[Image: a fixer-upper house; not our actual house, and it’s not this bad]

I live in the equivalent of a Drupal 6 house. My partner and I keep putting off things we’d like to do now in preparation for a more substantial renovation “on the horizon.” We’re not going to get solar panels before replacing the roof, and we won’t upgrade to a high-efficiency HVAC system until we restructure some of the foundation. We’re being mindful of mitigating our overall costs, which makes sense, but this all sets up a perverse incentive to make no improvements in the immediate term. The same can be true for an older Drupal site. As she frequently reminds me, I’ll remind you: it’s probably time to take the plunge and build your Drupal 8 house.

Cost to upgrade is going down

While there remains one final decidedly not easy upgrade if you’re not yet on Drupal 8, the good news is the cost to upgrade has gone down and will continue to. As of the release of 8.4.0, migrating from Drupal 6 is nearly all the way there:

Core provides migrations for most Drupal 6 data and can be used for migrating Drupal 6 sites to Drupal 8, and the Drupal 6 to 8 migration path is nearing beta stability.

The sentiment on 7, expectedly so, is not as far along:

The Drupal 7 to Drupal 8 migration is incomplete but is suitable for developers who would like to help improve the migration and can be used to test upgrades especially for simple Drupal 7 sites. Most high-priority migrations are available.

So migration from 7 still requires some work. More on this ahead.

Early adopters paved the way

We all owe a debt of gratitude to those who were willing to take the risk of building on Drupal 8 in its earlier days. We commend both organizations and agencies who were ambitious and willing to incur some risk to help push the rest of the project forward. We’re proud to put ourselves on that list, starting 2.5 years ago, but it unsurprisingly came with challenges and lessons learned. Mistakes that come with experience are virtually entirely positive for the future since we’ve learned what to do and what to avoid. It’s time for you to benefit from the work of the early adopters.

WIIFM? The Future.

The future is uncertain; the only things guaranteed are death and taxes. Actually, even of those I’m not so sure :wink: Regardless, the futures of Drupal 6 and 7 are a known entity not likely to get much better. The upside of Drupal 8, while partially known, is still largely in the making and will keep getting better over time.

Continuous innovation with experimental modules! Who doesn’t want that?

Among a suite of other firsts, Drupal 8 has updated its minor-version approach to accommodate innovations in core, and this is another game changer. Previously, the first version of Drupal 7 (7.0) was essentially functionally the same as the latest (currently 7.56). Now new minor releases introduce experimental modules, which are driven by agreed-upon priorities and vetted over time to see whether they’ll graduate from the “experimental” label and be fully baked into core. Therefore Drupal 8 can and will adapt; Drupal 7 cannot. The transparent structure leadership has established provides a good balance of innovation and predictability, with two minor version releases a year. To track these for your own planning, you can check the development roadmap at any time.

Javascript

The Drupal community has been abuzz with “headless” or “decoupled” Drupal since the advent of Drupal 8. The basic idea is that Drupal can lean on its strength of being an excellent tool for highly structured and organized data in the backend while allowing freedom and flexibility of choice on the presentation layer (front end). Though discussed two years ago to no formal conclusion, Dries has recently cited React as the go-to presentation layer for Drupal administrative interfaces come early 2018. This is a fairly big deal, and formally moving forward to more tightly link with React has many implications we haven’t yet fully explored. As the lines between websites and web applications continue to blur, this proudly-found-elsewhere addition may prove to be a powerful one that will not be possible for Drupal 6 or 7. We see this as another wise move to be more in-sync with the rapidly growing impact of JS frameworks.

Access to complementary tools

The re-architecture toward object orientation aligned Drupal with the development practices of the modern PHP framework community (Symfony, Laravel, Cake, Phalcon, Zend, Slim, CodeIgniter, Yii, and Fuel, to name the most popular). One subtle yet substantial benefit of that move is that many tools built to support these other frameworks are now available to Drupal as well. As the toolkit for modern web development grows richer and more robust, the more Drupal can utilize, the better. A couple of examples we’ve recently used to help inform project quality and future maintenance costs are Code Climate and Scrutinizer. These tools have much less value analyzing a Drupal 6 or 7 site.

Our advice

So we’ve dumped a lot of information on you at this point, but it may still not be entirely clear what you should do with your outdated Drupal site. Ahead we provide general suggestions as well as what is pertinent to site owners for each version separately (Drupal 6 and 7).

To everyone on Drupal <8

We still have <3 for Drupal <8. Drupal 5, 6, and 7 got us to where we are. But here’s what we think you should consider about where you’re going.

  1. Plan 3 years out if possible. A stitch in time saves nine, and every minute of planning saves 10 in execution. They’re clichés, but true. Planning well requires a real dedication to strategic and investigative work; there’s no way around it. The upside is it allows you to be intentional about when to incorporate an upgrade, rather than being at the mercy of expiring security support. Organizational stakeholders are usually not compelled by upgrading for the sake of upgrading without other bells and whistles that come with it. An experienced partner can help shepherd the long-term planning process to provide guidance on efforts and things to consider. If you’re not working with an agency, do it yourself. Expect to redesign and do a software upgrade every 3-5 years and time those together if possible. Factor in upgrades to other systems that integrate with your website as well as any initiatives that may require functional improvements. Put all of these larger investments on a roadmap with a timeline and be clear about what components are dependent on or impacted by other components. With technical work, the devil is in the details, so a thorough assessment or “discovery project” is usually the best next step. Discovery work is light on upfront investment yet thorough enough to guide your organization through the many choices in your roadmap. This is really the best way to use your resources most efficiently. If your organization hasn’t historically done this, it handicaps you a bit at the moment, but if you’ll excuse one final cliché: there’s no time like the present.
  2. Be mindful of what you don’t need. We all get excited about the possibilities of new functionality. However, when things we’ve built have outlived their purpose, let them go. Given the complexity and interdependence of the tools we build, customizations take the form of mounting, insidious, and potentially crippling technical debt if left unaddressed. The cost of upgrading that technical debt is likely the major blocker for most of those who have not yet upgraded; it is certainly the case for all of our partners who haven’t. This debt can be hard to track, and it’s not something most agencies proactively share, since they have a hand in creating it and can also be shortsighted. Ask how your partner is managing your technical debt. If you don’t get good answers, keep asking. Another subset of this concept is that even if you want to maintain certain functionality, it needn’t be done with the same modules on the Drupal 8 platform. So don’t take a given module’s absence in Drupal 8 as a certainty that its functionality cannot be efficiently achieved in Drupal 8. In many cases, it can.
  3. Training will be required. If you plan to build with the same team that built your <8 site, and they have not worked on any other Drupal 8 or object-oriented PHP projects, make sure you dedicate time and budget resources for substantial training.

To those with production Drupal 6 sites.

I wrote a Drupal 6 series when Drupal 8’s release was announced (the overview, the risks, the options, Drupal 7 or 8) which is a good reference for both what was true then and what has changed since. Then, as now, I encouraged a conversation with your partner about what is right for you. What has changed is the maturity of the migration system, making it easier to port your content from 6 to 8. Based on all of the above, an upgrade to Drupal 8 by way of migration should be where you start, and the job before committing to that path is to thoroughly vet all of the requirements to migrate. Given that migrations are high-effort, you should explore alternatives with your development team. How much it’s worth investing in a migration depends on how valuable your old content is to preserve, which varies widely among organizations.

If through research you uncover that you’re still not ready for Drupal 8, you should make an action plan to follow up on the components that will allow you to upgrade to Drupal 8 and track those over time. You should look into efforts to upgrade to Drupal 7, being mindful of how you can mitigate the costs of the eventual Drupal 8 upgrade. In the immediate term, you should consider support from the MyDropWizard (MDW) team. It’s led by David Snopek, who is on the core security team and has an impressive Drupal résumé. It’s hard to assess how the support provided by MDW compares to the core security team, but it’s much better than not having any security support. I would also caution you not to use the relief of having coverage through MDW as a reason to rest on your laurels. If Drupal is still working for you, you should be thinking about how to get to Drupal 8. Additionally, since we agreed earlier that there’s no longer even certainty in death and taxes, it’s possible that things could change for MDW, and you’d be without support again.

To those with production Drupal 7 sites.

As far as building new sites on Drupal 7, I have a hard time conceptualizing for whom that is the right choice. MDW wants to keep the door open to building new in Drupal 7, but others point out the incentives at play there. Much like the hat tip to Dries for accepting criticism, I must commend MDW for accepting these comments on their blog.

If you have a high degree of customization and technical debt, keep track of the development of Drupal 9. “Upgrade now” cannot be prescriptive for the 900,000+ sites still on Drupal 7. We generally agree with Angie’s recent presentation at Acquia Engage, Drupal 9 and Backwards Compatibility: Why now is the time to upgrade to Drupal 8, for those on Drupal 7:

If it’s working for you that’s fine! (Until Drupal 9) But if D8 offers features you want, consider earlier adoption.

So, if you’ve determined that you’ll remain on Drupal 7 for some time, your development team should be aware of a couple of Drupal modules (xautoload and service_container) that make writing Drupal 8-like code possible in Drupal 7. These tools will help familiarize developers with Drupal 8 paradigms and possibly reduce substantial technical debt in the future.

To those not on Drupal, but considering it

If you’re in the market for a CMS and have ambitious web goals, you should at least check out Drupal. It has been holding fairly steady in the CMS market, and given some of the problems with versions before Drupal 8, we think this speaks very promisingly of Drupal’s future. This is not to say that it is the right fit for all websites; it is not. However, with free hosting options, an improved and simplified admin experience, and the most powerful backend of the open source CMSes, it fits a lot of needs.

We want to hear from you

We want to hear about your experience, whether or not it resonates with what we’ve presented here. Are you facing upgrade challenges that went unaddressed here? Notice anything we overlooked? Comment away, or write us privately if that’s more appropriate. I’m also often on Drupal Slack (@chrisarusso) checking in on our local #TriDUG meetup conversations.

Oct 27 2017
Oct 27

Drupal 8’s official release was nearly two years ago, and many ask: how is it doing? Has it lived up to its ambition to revolutionize Drupal websites?

In the first of a two-part series, we’ll provide our insight into the evolution of Drupal 8 over its first two years in the wild. In part two, we’ll look at important factors to consider in your Drupal investments going forward.

(Drupal) Change is hard

To launch a website (much like to rock a rhyme that’s right on time) is tricky; to operate a web system in a way that uplifts your organization is just plain hard. Although keeping up with the most current software is typically advisable, there are costs to doing so, even for free software like Drupal. Without the vendor lock-in that comes with proprietary Content Management Systems, Drupal site owners have a high degree of freedom to consider how best to invest their web resources. However, this freedom also has a price (see a pattern emerging?) in the form of time and stress incurred from the responsibility to select the best digital tools to drive your organization for years with limited information to evaluate the nearly limitless options.

Of the 1 million+ organizations whose main window to the digital world is powered by Drupal, many have priorities that compete in time and budget with web system investments. When considering these priorities, determining the right time to invest in an improved user experience, design, feature-set, or software upgrade can be difficult.

With the ever-increasing complexity and interconnectedness of the software systems we build, even the world’s most prominent organizations, often with legions of engineers, have had colossal mishaps with upgrades. By default, upgrades are not easy.

To upgrade or not? That is the question.

Like any decision, whether you’re building new or upgrading, the fundamental question is: do the benefits outweigh the costs? In the specific case of investing in Drupal, the question becomes: when do the benefits of making the leap to Drupal 8, rather than continuing to invest in Drupal 7 (or possibly 6… don’t tell me it’s 5 :wink: ), outweigh the costs of taking that leap? To answer that question properly, an organization needs to look 3-5 years out with regard to budget and organizational goals. It’s also helpful to understand how the broader community has approached this same decision over the past two years. Let’s take a look.

Taking stock of Drupal 8’s adoption

After nearly two years since its public release, how has the adoption of Drupal 8 gone?

Analysis from the top

Before DrupalCon North America in May 2016 in New Orleans, Drupal founder and current project lead Dries Buytaert blogged that Drupal 8 was doing “outstanding,” citing statistics to substantiate his optimistic view.

Based on my past experience, I am confident that Drupal 8 will be adopted at “full-force” by the end of 2016.

Many in the community contested his optimism in the article’s comments, and I commended Dries (yes that’s me and not him, and definitely not him) for facilitating an open conversation that elicited a broad range of perspectives.

About a month later, some six months after Drupal 8 was released, Savas Labs attended DrupalCon NOLA.

During the perennial “Driesnote,” Dries continued to present Drupal 8 as well on its way to match if not exceed the success of Drupal 7.

I really truly believe, Drupal 8 will take off. My guess is that by the end of this year [2016] Drupal 8 will reach escape velocity… it will become the de facto standard.

and

The new architecture, features, as well as frequent releases: all of these things make me feel really, really optimistic and bullish about Drupal 8.

Adoption by the numbers

According to the usage statistics available on the Drupal website, at the time of writing nearly 80% of the world’s Drupal websites were powered by version 7.

A graph started by Angie Byron of Acquia that I have updated to the present.

When Drupal 7 was released on January 5, 2011, there were already more Drupal 7 sites than sites powered by the major version two releases prior, Drupal 5 (A). The same feat took Drupal 8 over nine months from its release to achieve (C). Total Drupal 7 sites eclipsed those of its predecessor, Drupal 6, about 13 months after Drupal 7’s release (B). Nearly two years after its release, Drupal 8 has still not eclipsed Drupal 7 installations, and at present there are over 700,000 more Drupal 7 sites than Drupal 8 sites.

Our take on Dries’s bullish-ness

In his defense, the future is notoriously difficult to predict, and even while predicting it, Dries spoke of the significant work that lay ahead to see his vision come to fruition. He also made the referenced comments well over a year ago, and I’ll concede that speaking in hindsight is infinitely easier. Having said that, comparing the total number of Drupal 8 sites to Drupal 7 sites over the same period from release, in a community that had grown ~220% since Drupal 7’s release, while factually indisputable, was a less meaningful measure of overall trends than adoption percentages would have been.

Even the most conservative interpretations of “escape velocity” or “full-force” would have to concede that we’re at least a year behind Dries’s hopes when he was reporting from DrupalCons Barcelona and New Orleans on impending, rapid Drupal 8 adoption. But what’s a dictator worth his salt to do, benevolent or not, other than stretch the stats a bit to show what he would like to be true for his beloved community, from which he also profits?

Our assessment

After two years, the data unequivocally show, as I began discussing at DrupalCon New Orleans, that the rate of Drupal 8 adoption is objectively slower than Drupal 7’s. At this point, a majority of organizations have not yet upgraded from 7 to 8, though many have likely begun efforts. Taking a simplistic view, this means Drupal 8 has either been more costly to upgrade to, a comparatively less valuable product, or both.

Regardless, since it matters to our partners, we found it important to explore the reasons behind the slow adoption rather than to pretend it’s not happening. After architecting Drupal 8 web systems for 2.5 years, we have gained insight into the relatively slow adoption.

Drupal 8 adoption challenges

Drupalers haven’t written much retrospective analysis of the Drupal 8 adoption challenges. But without taking a real, honest look inward, we cannot improve. We must know thyself, because the examined Drupal problems are worth fixing! We highlight here the most prominent challenges that have slowed Drupal 8 adoption.

1. Complete code re-architecture

The massive shift of the underpinnings of the Drupal code is a decision that has long been debated within the community. There’s no question it has proven a challenge for proficient Drupal 7 developers to develop on Drupal 8: for most, substantial training and learning is required. Training takes time, and time can often mean money. The loss in short-term efficiency for seasoned Drupal developers made early adoption riskier, and typically added to a project’s expense. Joining with other prominent frameworks known outside of Drupal like Twig and Symfony (colloquially referred to as “getting off the island”) was a collective decision by wise Drupal leadership with the long-term value of the product in mind, but in the short-term, for the average Drupal developer, it meant more new things to learn.

2. Slow contributed module porting

Historically Drupal has derived much of its usefulness from the rich contributed module ecosystem that extends the features of Drupal core. Contributed modules, although crucial to most live Drupal websites, by definition are not directly driven by those that oversee Drupal core development. This disconnect invariably leads to some important modules not having a usable upgraded version when a new major version of Drupal core is released. This is well-known within the Drupal community, explained at great length by Angie Byron (second reference), and not unique to the Drupal 8 release. Tremendous amounts of individual and community efforts are required to upgrade modules to the latest major version. Due to #1 from above, these efforts were further exacerbated by the re-architecture. Costs to upgrade even one module (it’s common for a Drupal 7 site to use 100) are often greater than clients or agencies are willing to absorb on a given project.

3. Incomplete upgrade path

We often describe websites as comprising three main asset groups: the code (Drupal core, contributed and custom modules), files (think media assets like images), and the database, where content and site configuration live. When upgrading, you download the new Drupal code, which includes a set of instructions that must be run to apply complex updates to the database; files remain unchanged. A well-oiled upgrade process is required to update the content and configuration from the site being upgraded into a format intelligible to the new system. The approach to performing those upgrades has also changed in Drupal 8, to what is now referred to as “a migration”. As the announcement for the most recent minor release of Drupal 8, in October, states:

…Drupal 6 to 8 migration path is nearing beta stability. Some gaps remain, such as for some internationalization data. The Drupal 7 to Drupal 8 migration is incomplete but is suitable for developers who would like to help improve the migration and can be used to test upgrades especially for simple Drupal 7 sites. Most high-priority migrations are available.

“Nearing beta stability” two years out from release is not ideal, though it is reality, since perfecting these migration tasks is hard work. One can discern from the snippet that the Drupal 7 -> 8 migration is clearly further afield, and for those who need to preserve their content, perhaps a non-starter for a 7 -> 8 upgrade. The inability to efficiently update database structures adds to project expense: whatever doesn’t come over “for free” with the migration will need to be manually replicated by a human, and humans are costly, as our time is precious.
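For reference, at the time of writing the migration path can be driven from Drush with the migrate_upgrade contributed module. A minimal sketch, run from the new Drupal 8 site root; the database URL and legacy file root are placeholders, not a real configuration:

# Kick off a Drupal 6/7 -> 8 upgrade via the migrate_upgrade module.
drush migrate-upgrade \
  --legacy-db-url=mysql://user:password@localhost/legacy_db \
  --legacy-root=https://www.example.com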

4. Stance on backwards compatibility

Drupal’s approach to backwards compatibility is famously “for data, not code”. Briefly put, in their words: “While the upgrade path will reliably preserve your data, there is no backward compatibility with the previous Drupal code.” If you want to dig deeper, there’s a lot of good discussion on this topic.

WordPress’s approach, perhaps more than anything, explains its ubiquity and ability to better keep sites on the latest version. In their words:

Major releases add new user features and developer APIs. Though typically a “major” version means you can break backwards compatibility (and indeed, it normally means that you have), WordPress strives to never break backwards compatibility. It’s one of our most important philosophies, and makes updates much easier on users and developers alike.

Albeit a bit confusing, even for the non-technologist, the message gives you the sense that they’re more worried about breaking stuff and want upgrades to Just Work™. The strength of the Drupal approach is that it allows for more innovation and, in some ways, carries less baggage, since preserving backwards compatibility often means hanging on to outdated code. The trade-off is that once old code is determined to be holding innovation back, it’s cast aside, and new structures must be implemented in the updated version. Historically, this paradigm has caused many to get stuck on an outdated Drupal version for longer than they’d like because they cannot afford an upgrade.

5. Inertia, perceived value, and expense

A modern organization is focused on more than just its website, and stakeholders often overlook investments that don’t deliver direct, visible, tangible change, even when they present real value. Examples where the value is often invisible to clients include investing in an automated testing framework that ensures perpetual site integrity, or vigilantly applying security updates as they become available. In either case, the client may perceive them as optional, but foregoing them is likely to cost the organization in the long run.

Since Drupal only provides security support for two major versions at a time (presently 7 and 8), for many the prime motive to upgrade upon a new release, often framed as a mandate, is that the version two major releases prior falls out of support. When Drupal 8 came out, Drupal 6 fell out of support after a grace period of three months, generously extended past the day of release given the community’s recognition of some of the challenges we’ve documented here.

If an organization doesn’t heed the security warnings, and doesn’t find enough value in the new features, they may choose to ignore the upgrade completely. The truth is, it’s hard to estimate the future risk of using outdated software. However, that future risk is very real, and digital security compromises show no signs of slowing down. Savas Labs always advocates for timely security coverage, but upgrading from Drupal 6 to Drupal 8 upon the release of 8 has not always been a budgetary possibility for our partners.

An answer to the Drupal 6 problem

In addition to our experience, the usage data show many organizations did not plan sufficiently to upgrade from Drupal 6 to 7 or 8 upon Drupal 8’s release. Recognizing that, the Drupal agency MyDropWizard set up long-term security support for the many Drupal 6 sites that were not ready to upgrade to Drupal 8. It’s debatable whether this was a good thing for the community: people forced to change will often change sooner than they would otherwise, but they may resent you for it. Conversely, you’d be hard-pressed to find an MDW client who didn’t experience relief from anxiety when offered an inertia-compliant alternative.

Organizations that don’t perceive opportunity in the value the new software provides will look at an upgrade strictly as an expense to avoid, likely citing topics we’ve covered here.

Takeaways

Through experience and analysis, we see there are many understandable and justifiable reasons why many organizations haven’t yet upgraded to Drupal 8. Now that we’ve done the hard reflection, the good news is that the present is a much brighter place for not only Drupal 8 but all future versions of Drupal. We have made it through most of the difficult growing pains, and there’s great reason to believe that the community has invested wisely in the future. In part two, we cover the costs of investing in Drupal 7, and why it’s probably time to move to Drupal 8.

Mar 06 2017
Mar 06

Creating and publishing quality content within time constraints is a common challenge for many content authors. As web engineers, we are focused on helping our clients overcome this challenge by delivering systems that are intuitive, stable, and a pleasure to operate.

During the architectural phase, it’s critical to craft the editorial experience to the specific needs of content authors to ensure the best content editing experience possible. Drupal 8 makes it even easier than previous versions for digital agencies to empower content creators and editors with the right tools to get the job done efficiently, and more enjoyably.

Our five tips to enhance the content editing experience with Drupal 8 are:

1. Don’t make authors guess - use structured content

2. Configure the WYSIWYG editor responsibly

3. Empower your editorial team with Quick-Edit

4. Enrich content with Media Embeds

5. Simplify content linking with LinkIt

1. Don’t make authors guess - use structured content

The abundance of different devices, screen sizes and form factors warrants the use of structured content. Structured content is content separated into distinct parts, each of which has a clearly defined purpose and can be edited and presented independently from one another according to context.

“How does that relate to a content editor’s experience?” - you may ask.

In years past, it was very popular to give content editors the ability to create “pages” using one big “MS Word-like” text box for writing their articles, news releases, product descriptions, etc. This approach produced content that was not reusable and could be presented in only one rigid way. And who wants to navigate one enormous text area just to move images around?

Those days are long behind us, yet even though we all know the importance of structured content, we sometimes still fail to apply the concept correctly.

Drupal was one of the first Content Management Systems (CMS) to introduce the concept of structured content (the node system, in Drupal 3 in 2001). In fact, Drupal is no doubt the best CMS for implementing structured content, but its ability to provide a good content authoring experience long lagged behind this solid foundation.

Today, in Drupal 8, editing structured content is a joy!

With the WYSIWYG (What You See Is What You Get) editor and Quick Edit functionality in Drupal core, we can equip our content editors with a best-in-class authoring experience and workflow!

You can see the difference between unstructured and structured D8 content below. Instead of one field containing all text, images, etc., structured content stores each distinct piece of information in its own field, making content entry fast and presentation flexible!

Structured vs unstructured content

The benefits of the Drupal 8 structured content approach:

  • The author clearly understands where each piece of information should reside and does not have to factor in markup, layout, and design while editing (see tip #2). Content entry becomes remarkably efficient and allows the author to concentrate on the essence of their message instead of format.
  • The publishing platform is easier to maintain while supporting system scalability.
  • The modular nature of structured content makes migrations between CMS versions or to a completely different CMS much more streamlined. A huge plus for those long-term thinkers!
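To make this concrete, each structured field is a small piece of Drupal 8 configuration. Below is a minimal sketch of what a dedicated subtitle field on an article content type might look like when exported; the machine names (article, field_subtitle) are hypothetical, and a real export carries a few more keys:

# config/sync/field.field.node.article.field_subtitle.yml (sketch)
langcode: en
status: true
id: node.article.field_subtitle
field_name: field_subtitle
entity_type: node
bundle: article
label: Subtitle
description: 'Short plain-text subtitle displayed under the headline.'
required: false
field_type: string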

2. Configure the WYSIWYG editor responsibly

Drupal 8 ships with a WYSIWYG text editor in core. The editor even works great on mobile! In a society so dependent on mobile devices, who wouldn’t like the ability to quickly fix a missed typo right from their phone?

Drupal 8 provides superior enhancements to the UX (User Experience) for content authors and editors out of the box. However, with a little configuration, things can be further improved.

When establishing the UI (User Interface) for content authors, site builders should focus on refining the available features rather than adopting them wholesale. Customizing the WYSIWYG editor is a perfect example of a subtle improvement that can immediately make a big difference.

The WYSIWYG text editor is an effective tool for simple content entry since it does not require the end user to be aware of HTML markup or CSS styles. Many great functions like text formatting options (font family, size, color, and background color), source code viewing, and indentation are available at our fingertips, but as site builders we should think twice before adding all those options to the text editor toolbar!

With great power comes great responsibility! When you give content editors control over the final appearance of published content (e.g. text color, font family and size, image resizing, etc.), it can lead to inconsistent color schemes, skewed image ratios, and unpredictable typography choices.

How do we help our content authors avoid common design and formatting mistakes? Simple!

Use a minimalist approach when configuring the WYSIWYG text editor. Give authors access to the most essential text formatting options that they will need for the type of content they create and nothing more. If the piece of content edited should not contain images or tables - do not include those buttons in the editor. The text editor should be used only for sections of text, not for the page layout.

A properly configured CMS should not give content editors the ability to change the size of text, play with image positioning within a text section, or add H1 headers within auxiliary content.
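In Drupal 8, this pruning lives in the text format’s editor configuration. Here is a sketch of a deliberately minimal toolbar, assuming core CKEditor and a hypothetical basic_html format; the group name and button list are illustrative:

# config/sync/editor.editor.basic_html.yml (sketch, trimmed)
format: basic_html
editor: ckeditor
settings:
  toolbar:
    rows:
      -
        -
          name: Formatting
          items:
            - Bold
            - Italic
            - BulletedList
            - DrupalLink

Pairing this with an equally strict filter format ensures that markup the toolbar can’t produce is also stripped on save.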

Below is an example of a bad vs. good WYSIWYG configuration.

WYSIWYG editor configuration compared

Benefits of the minimal (thoughtful) WYSIWYG configuration:

  • Easy to use
  • Less confusion (though there are edge cases, most editors don’t use all the buttons)
  • Better usability on mobile devices
  • Less risk of breaking established website design

Let’s keep our content editors happy and not overcrowd their interfaces when it’s not absolutely necessary. It is our duty as software engineers to deliver systems that are easy to use, intuitive, and scalable, and that uphold design consistency.

3. Empower your editorial team with Quick-Edit

The Quick Edit module is one of the most exciting new features that is included in Drupal 8 core. It allows content authors and editors to make updates to their content without ever leaving the page.

The days of clicking “Edit” and waiting for a separate page to load just to fix a tiny typo are gone! The Quick Edit module eliminates that extra step and allows content editors to save a great deal of time on updating content. As an added bonus - content editors can instantly see how updated content will look within the page flow.

Here’s the Quick Edit functionality in action.

Quick Edit module demo

Quick Edit configuration tip for back-end and front-end developers

To make use of the Quick Edit functionality within website pages, entities have to be rendered on the page via view modes and not as separate fields.

This restriction presents a challenge when there’s a need to provide Quick Edit functionality for a page constructed by the Views module. More often than not, Views are used to single out and output individual fields from entities. The most used Views formats, “Table” and “Grid”, currently do not support Quick Edit functionality for usability reasons.

A workaround for this issue is to use custom view modes for entities and create a custom Twig template for each view mode that Views should output, in order to accommodate custom layout options.
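As a sketch of that workaround, a template for a hypothetical “card” view mode on article nodes might look like the following. It renders whole fields from the content render array, which keeps Quick Edit’s metadata intact; all names here are illustrative:

{# themes/custom/mytheme/templates/node--article--card.html.twig #}
<article{{ attributes }}>
  {{ title_prefix }}
  <h2{{ title_attributes }}>{{ label }}</h2>
  {{ title_suffix }}
  {# Print full fields rather than isolated values so Quick Edit still works. #}
  {{ content.field_image }}
  {{ content.body }}
</article>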

4. Enrich content with Media Embeds

In the era of social media, content editors can’t imagine their daily routine without being able to embed their Tweets or videos into the stories they publish on their sites. In Drupal 6 and the early days of Drupal 7, it was pretty challenging to provide this functionality within the WYSIWYG editor. Developers had to configure many different plugins and modules and ask them politely to cooperate.

The Drupal 8 Media initiative has placed the content author’s experience and needs at the forefront of community efforts. As a result, we have access to a great solution for handling external media - CKEditor Media Embed Module. It allows content editors to embed external resources such as videos, images, tweets, etc. via WYSIWYG editor. Here’s an example of the Tweet embed – the end result looks beautiful and requires minimal effort.

"If you're going to build a new site, build it in D8." - someone who knows what they're talking about quotes @jrbeaton @TriDUG pic.twitter.com/8w9GAuuARu

— Savas Labs (@Savas_Labs) January 27, 2017

With all this media goodness available to us, there is no reason why we shouldn’t go the extra mile and configure the CKEditor Media Embed module for our content authors!

5. Simplify content linking with LinkIt

Linking to content has always been a clumsy experience for content editors, especially when linking internally within the same site.

There was always the risk of accidentally navigating away from the page that you were actively editing (and losing any unsaved information) while searching for the page to link to. Also, the default CKEditor link button allowed editors to insert a link, assign it a target value, title, maybe an anchor name, but that was about it. If the link to the internal content changed, there was no way for the page to update and links throughout the website would end up broken.

Let’s not put our content editors through that horrible experience again. LinkIt module for Drupal 8 to the rescue!

With the LinkIt module, the user does not have to copy/paste the URL or remember it. LinkIt provides a search for internal content with an autocomplete field. Users can link not only to pages, but also to files stored within the Drupal CMS.

The new and improved linking method is much more sustainable, as it recognizes when the URL of the linked content changes, and automatically produces the correct link within the page without the need to update that content manually.

LinkIt File link demo

Linking to files with LinkIt

My personal favorite feature of the LinkIt module is its flexible configuration options. LinkIt makes it possible to limit the types of entities (pages, posts, files) that are searchable via the link field. You can also create a custom configuration of the LinkIt autocomplete dialog for each WYSIWYG editor profile configured on your site. Plus, it is fully integrated with Drupal 8 configuration synchronization.

Final Thoughts

As site builders, there are many improvements that we can make in order to streamline the process of content authoring.

With the right mix of forethought and understanding, Drupal 8 allows web engineers to deliver content publishing platforms that are unique to the client’s specific needs, while making web authoring a productive and satisfying experience.

Feb 20 2017
Feb 20

Overview

Savas Labs has been using Docker for our local development and CI environments for some time to streamline our systems. On a recent project, we chose to integrate Phase 2’s Pattern Lab Starter theme to incorporate more front-end components into our standard build. This required building a new Docker image for running applications that the theme depends on. In this post, I’ll share:

  • A Dockerfile used to build an image with Node, npm, PHP, and Composer installed
  • A docker-compose.yml configuration and Docker commands for running theme commands such as npm start from within the container

Along the way, I’ll also provide:

  • A quick overview of why we use Docker for local development
    • This is part of a Docker series we’re publishing, so be on the lookout for more!
  • Tips for building custom images and running common front-end applications inside containers.

Background

We switched to using Docker for local development last year and we love it - so much so that we even proposed a DrupalCon session on our approach and experience that we hope to deliver. Using Docker makes it easy for developers to quickly spin up consistent local development environments that match production. In the past we used Vagrant and virtual machines, even a Drupal-specific flavor, DrupalVM, for these purposes, but we’ve found Docker to be faster when switching between multiple projects, which we often do on any given workday.

Usually we build our Docker images from scratch to closely match production environments. However, for agile development and rapid prototyping, we often make use of public Docker images. In these cases we’ve relied on Wodby’s Docker4Drupal project, which is “a set of docker containers optimized for Drupal.”

We’re also fans of the atomic design methodology and present our clients with interactive style guides early to facilitate better collaboration throughout a project. Real interaction with the design is necessary from the get-go; gone are the days of the static Photoshop file at the outset that “magically” translates to a living design at the end. So when we heard of the Pattern Lab Starter Drupal theme, which leverages Pattern Lab (a tool for building pattern-driven user interfaces using atomic design), we were excited to bake the front-end components into our Docker world. Oh, the beauty of open source!

Building the Docker image

To experiment with the Pattern Lab Starter theme we began with a vanilla Drupal 8 installation, and then quickly spun up our local Docker development environment using Docker4Drupal. We then copied the Pattern Lab Starter code to a new themes/custom/pattern_lab_starter directory in our Drupal project.

Running the Phase 2 Pattern Lab Starter theme requires Node.js, the node package manager npm, PHP, and the PHP dependency manager Composer. Node and npm are required for managing the theme’s node dependencies (such as Gulp, Bower, etc.), while PHP and Composer are required by the theme to run and serve Pattern Lab.

While we could install these applications on the host machine, outside of the Docker image, that defeats the purpose of using Docker. One of the great advantages of virtualization, be it Docker or a full VM, is that you don’t have to rely on installing global dependencies on your local machine. One of the many benefits of this is that it ensures each team member is developing in the same environment.

Unfortunately, while Docker4Drupal provides public images for many applications (such as Nginx, PHP, MariaDB, Mailhog, Redis, Apache Solr, and Varnish), it does not provide images for running the applications required by the Pattern Lab Starter theme.

One of the nice features of Docker though is that it is relatively easy to create a new image that builds upon other images. This is done via a Dockerfile which specifies the commands for creating the image.

To build an image with the applications required by our theme we created a Dockerfile with the following contents:

FROM node:7.1
MAINTAINER Dan Murphy <[email protected]>

RUN apt-get update && \
    apt-get install -y php5-dev  && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \

    # Directory required by Yeoman to run.
    mkdir -p /root/.config/configstore && \

    # Clean up.
    apt-get clean && \
    rm -rf \
      /root/.composer \
      /tmp/* \
      /usr/include/php \
      /usr/lib/php5/build \
      /var/lib/apt/lists/*

# Permissions required by Yeoman to run: https://github.com/keystonejs/keystone/issues/1566#issuecomment-217736880
RUN chmod g+rwx /root /root/.config /root/.config/configstore

EXPOSE 3001 3050

The commands in this Dockerfile:

  • Set the official Node 7 image as the base image. This base image includes Node and npm.
  • Install PHP 5 and Composer.
  • Make configuration changes necessary for running Yeoman, a popular Node scaffolding system used to create new component folders in Pattern Lab.
  • Expose ports 3001 and 3050 which are necessary for serving the Pattern Lab style guide.

From this Dockerfile we built the image savaslabs/node-php-composer and made it publicly available on DockerHub. Please check it out and use it to your delight!

One piece of advice for building local development images: while Alpine Linux based images may be much smaller, their bare-bones nature and lack of common packages bring trade-offs that make them more difficult to build upon. For that reason, we based our image on the standard Debian Jessie Node image rather than the Alpine variant.

This is also why we didn’t simply start from the wodby/drupal-php:7.0 image and install Node and npm on it. Unfortunately, the wodby/drupal-php image is built from alpine:edge, which lacks many of the dependencies required to install Node and npm.

Now a Docker purist might critique this image and recommend only “one process per container”. This is a drawback of this approach, especially since Wodby already provides a PHP image with Composer installed. Ideally, we’d use that in conjunction with separate images that run Node and npm.

However, the theme’s setup makes that difficult. Essentially, PHP scripts and Composer commands are baked into the theme’s npm scripts and Gulp tasks, making them difficult to untangle. For example, the npm start command runs Gulp tasks that depend on PHP to generate and serve the Pattern Lab style guide.

Due to these constraints, and since this image is for local development, isn’t being used to deploy a production app, and encapsulates all of the applications required by the Pattern Lab Starter theme, we felt comfortable with this approach.

Using the image

To use this image, we specified it in our project’s docker-compose.yml file (see full file here) by adding the following lines to the services section:

node-php-composer:
 image: savaslabs/node-php-composer:1.2
 ports:
   - "3050:3050"
   - "3001:3001"
 volumes_from:
   - php

This defines the configuration that is applied to a node-php-composer container when spun up. This configuration:

  • Specifies that the container should be created from the savaslabs/node-php-composer image that we built and referenced previously
  • Maps the container ports to our host ports so that we can access the Pattern Labs style guide locally
  • Mounts the project files (that are mounted to the php container) so that they are accessible to the container.

With this service defined in the docker-compose.yml we can start using the theme!

First we spin up the Docker containers by running docker-compose up -d.

Once the containers are running, we can open a Bash shell in the theme directory of the node-php-composer container by running the command:

docker-compose run --rm --service-ports -w /var/www/html/web/themes/custom/pattern_lab_starter node-php-composer /bin/bash

We use the --service-ports option to ensure the ports used for serving the style guide are mapped to the host.

Once inside the container in the theme directory, we install the theme’s dependencies and serve the style guide by running the following commands:

npm install --unsafe-perm
npm start

Voila! Once npm start is running, we can access the Pattern Lab style guide at the URLs that are output, for example http://localhost:3050/pattern-lab/public/.

Note: Docker runs containers as root, so we use the --unsafe-perm flag to run npm install with root privileges. This is okay for local development, but would be a security risk if deploying the container to production. For information on running the container as an unprivileged user, see this documentation.

Gulp and Bower are installed as theme dependencies during npm install, therefore we don’t need either installed globally in the container. However, to run these commands we must shell into the theme directory in the container (just as we did before), and then run Gulp and Bower commands as follows:

  • To install Bower libraries run $(npm bin)/bower install --allow-root {project-name} --save
  • To run arbitrary Gulp commands run $(npm bin)/gulp {command}

Other commands listed in the Pattern Lab Starter theme README can be run in similar ways from within the node-php-composer container.

Conclusion

Using Docker for local development has many benefits, one of which is that developers can run applications required by their project inside containers rather than having to install them globally on their local machines. While we typically think of this in terms of the web stack, it also extends to running applications required for front-end development. The Docker image described in this post allows several commonly used front-end applications to run within a container like the rest of the web stack.

While this blog post demonstrates how to build and use a Docker image specifically for use with the Pattern Lab Starter theme, the methodology can be adapted for other uses. A similar approach could be used with Zivtech’s Bear Skin theme, which is another Pattern Lab based theme, or with other contributed or custom themes that rely on npm, Gulp, Bower, or Composer.

If you have any questions or comments, please post them below!

Feb 15 2017
Feb 15

We use Docker for our development environments because it helps us adhere to our commitment to excellence. It ensures an identical development platform across the team while also achieving parity with the production environment. These efficiency gains (among others we’ll share in an ongoing Docker series) over traditional development methods enable us to spend less time on setup and more time building amazing things.

Part of our workflow includes a mechanism to establish and update the seed database, which we use to load near-real-time production content into our development environments as well as our automated testing infrastructure. We’ve found it’s best to have real data throughout the development process; using stale or dummy data runs the risk of unexpected issues surfacing toward the end of a project. One efficiency boon we’ve recently implemented and are excited to share is a technique that dramatically speeds up database imports, especially large ones. This is a big win for us, since we often import large databases multiple times a day on a project. In this post we’ll look at:

  • how much faster data volume imports are compared to traditional database dumps piped to mysql
  • how to set up a data volume import with your Drupal Docker stack
  • how to tie in this process with your local and continuous integration environments

The old way

The way we historically imported a database was to pipe a SQL database dump file into the MySQL command-line client:

mysql -u{some_user} -p{some_pass} {database_name} < /path/to/database.sql

An improvement upon the default method above which we’ve been using for some time allows us to monitor import progress utilizing the pv command. Large imports can take many minutes, so having insight into how much time remains is helpful to our workflow:

pv /path/to/database.sql | mysql -u{some_user} -p{some_pass} {database_name}

On large databases, though, MySQL imports can be slow. If we look at a database dump SQL file, we can see why. For example, a 19 MB database dump file we use in one of our test cases later in this post contains these instructions:

--
-- Table structure for table `block_content`
--

DROP TABLE IF EXISTS `block_content`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `block_content` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `revision_id` int(10) unsigned DEFAULT NULL,
  `type` varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  `uuid` varchar(128) CHARACTER SET ascii NOT NULL,
  `langcode` varchar(12) CHARACTER SET ascii NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `block_content_field__uuid__value` (`uuid`),
  UNIQUE KEY `block_content__revision_id` (`revision_id`),
  KEY `block_content_field__type__target_id` (`type`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COMMENT='The base table for block_content entities.';
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `block_content`
--

LOCK TABLES `block_content` WRITE;
/*!40000 ALTER TABLE `block_content` DISABLE KEYS */;
set autocommit=0;
INSERT INTO `block_content` VALUES (1,1,'basic','a9167ea6-c6b7-48a1-ac06-6d04a67a5d54','en'),(2,2,'basic','2114eee9-1674-4873-8800-aaf06aaf9773','en'),(3,3,'basic','855c13ba-689e-40fd-9b00-d7e3dd7998ae','en'),(4,4,'basic','8c68671b-715e-457d-a497-2d38c1562f67','en'),(5,5,'basic','bc7701dd-b31c-45a6-9f96-48b0b91c7fa2','en'),(6,6,'basic','d8e23385-5bda-41da-8e1f-ba60fc25c1dc','en'),(7,7,'basic','ea6a93eb-b0c3-4d1c-8690-c16b3c52b3f1','en'),(8,8,'basic','3d314051-567f-4e74-aae4-a8b076603e44','en'),(9,9,'basic','2ef5ae05-6819-4571-8872-4d994ae793ef','en'),(10,10,'basic','3deaa1a9-4144-43cc-9a3d-aeb635dfc2ca','en'),(11,11,'basic','d57e81e8-c613-45be-b1d5-5844ba15413c','en');
/*!40000 ALTER TABLE `block_content` ENABLE KEYS */;
UNLOCK TABLES;
commit;

When we pipe the contents of the MySQL database dump to the mysql command, the client processes each of these instructions sequentially in order to (1) create the structure for each table defined in the file, (2) populate the database with data from the SQL dump and (3) do post-processing work like create indices to ensure the database performs well. The example here processes pretty quickly, but if your site has a lot of historic content, as many of our clients do, then the import process can take enough time that it throws a wrench in our rapid workflow!

What happens when mysql finishes importing the SQL dump file? The database contents (often) live in /var/lib/mysql/{database}, so for example for the block_content table mentioned above, assuming you’re using the typically preferred InnoDB storage engine, there are two files called block_content.frm and block_content.ibd in /var/lib/mysql/{database}/. The /var/lib/mysql directory will also contain a number of other directories and files related to the configuration of the MySQL server.
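For example, peeking inside a running database container shows those per-table files; the container and database names here are illustrative (a named container is introduced further below):

$ docker exec drupal_database ls /var/lib/mysql/drupal
block_content.frm
block_content.ibd
...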

Now, suppose that instead of sequentially processing the SQL instructions in a database dump file, we could provide developers with a snapshot of the /var/lib/mysql directory for a given Drupal site. Could restoring that snapshot be faster than the traditional import methods? Let’s look at two test cases to find out!

MySQL import test cases

The table below shows the results of two test cases, one using a 19 MB database and the other using a 4.7 GB database.

Method                     | Database size | Time to drop tables and restore (seconds)
Traditional mysql          | 19 MB         | 128
Docker data volume restore | 19 MB         | 11
Traditional mysql          | 4.7 GB        | 606
Docker data volume restore | 4.7 GB        | 85

In other words, the MySQL data volume restore completes, on average, in about 11% of the time a traditional MySQL dump import would take, or roughly 9 times faster!

Since a GIF is worth a thousand words, compare these two processes side-by-side (both are using the same 19 MB source database; the first is using a data volume restore process while the second is using the traditional MySQL import process). You can see that the second process takes considerably longer!

Docker data volume restore

Traditional MySQL database dump import

Use MySQL volume for database imports with Docker

Here’s how the process works. Suppose you have a Docker stack with a web container and a database container, and that the database container has data in it already (your site is up and running locally). Assuming a database container name of drupal_database, to generate a volume for the MySQL /var/lib/mysql contents of the database container, you’d run these commands:

# Stop the database container to prevent read/writes to it during the database
# export process.
docker stop drupal_database
# Now use the carinamarina/backup image with the `backup` command to generate a
# tar.gz file based on the `/var/lib/mysql` directory in the `drupal_database`
# container.
docker run --rm --volumes-from drupal_database carinamarina/backup backup \
--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz

With the 4.7 GB sample database above, this process takes 239 seconds and results in a 702 MB compressed file.

We’re making use of the carinamarina/backup image produced by Rackspace to create an archive of the database files.

You can then distribute this file to your colleagues (at Savas Labs, we use Amazon S3), or make use of it in continuous integration builds (more on that below), using these commands:

# Copy the data volume tar.gz file from your team's AWS S3 bucket.
if [ ! -f db-data-volume.tar.gz ]; then aws s3 cp \
s3://{your-bucket}/mysql-data-volume/db-data-volume.tar.gz db-data-volume.tar.gz; fi
# Stop the database container to prevent read/writes during the database
# restore process.
docker stop drupal_database
# Remove the /var/lib/mysql contents from the database container.
docker run --rm --volumes-from drupal_database alpine:3.3 rm -rf /var/lib/mysql/*
# Use the carinamarina/backup image with the `restore` command to extract
# the tar.gz file contents into /var/lib/mysql in the database container.
docker run --rm --interactive --volumes-from drupal_database \
carinamarina/backup restore --destination /var/lib/mysql/ --stdin \
--zip < db-data-volume.tar.gz
# Start the database container again.
docker start drupal_database

So, not too complicated, but it will require a change in your processes for generating seed databases to distribute to your team for local development, or for CI builds. Instead of using mysqldump to create the seed database file, you’ll need to use the carinamarina/backup image to create the .tar.gz file for distribution. And instead of mysql {database} < database.sql you’ll use carinamarina/backup to restore the data volume.

In our team’s view this is a small cost for the enormous gains in database import time, which in turn boosts productivity to the tune of faster CI builds and refreshes of local development environments.

Further efficiency gains: integrate this process with your continuous integration workflow

The above steps can be manually performed by a technical lead responsible for generating and distributing the MySQL data volume to team members and your testing infrastructure. But we can get further productivity gains by automating this process completely with Travis CI and GitHub hooks. In outline, here’s what this process looks like:

1. Generate a new seed database SQL dump after production deployments

At Savas Labs, we use Fabric to automate our deployment process. When we deploy to production (not on a Docker stack), our post-deployment tasks generate a traditional MySQL database dump and copy it to Amazon S3:

def update_seed_db():
    run('drush -r %s/www/web sql-dump \
    --result-file=/tmp/$(date +%%Y-%%m-%%d)--post-deployment.sql --gzip \
    --structure-tables-list=cache,cache_*,history,search_*,sessions,watchdog' \
    % env.code_dir)
    run('/usr/local/bin/aws s3 cp /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz \
    s3://{bucket-name}/seed-database/database.sql.gz --sse')
    run('rm /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz')

2. When work is merged into develop, create a new MySQL data volume archive

We use git flow as our collaboration and documentation standard for source code management on our Drupal projects. Whenever a developer merges a feature branch into develop, we update the MySQL data volume archive dump for use in Travis CI tasks and local development. First, there is a specification in our .travis.yml file that calls a deployment script:

deploy:
  provider: script
  script:
    - resources/scripts/travis-deploy.sh
  skip_cleanup: true
  on:
    branch: develop

And the travis-deploy.sh script:

#!/usr/bin/env bash

set -e

make import-seed-db
make export-mysql-data
aws s3 cp db-data-volume.tar.gz \
s3://{bucket-name}/mysql-data-volume/db-data-volume.tar.gz --sse

This script (1) imports the traditional MySQL seed database file from production, and then (2) creates a MySQL data volume archive. We use a Makefile to standardize common site provisioning tasks for developers and our CI systems.
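The two make targets referenced in the script aren’t shown in this post, but a minimal sketch might look like the following; the target bodies, container name, and credentials are assumptions pieced together from the commands above, not our exact implementation (recipe lines must be indented with tabs):

# Makefile (illustrative sketch)
import-seed-db:
	aws s3 cp s3://{bucket-name}/seed-database/database.sql.gz db/database.sql.gz
	gunzip -c db/database.sql.gz | \
	docker exec -i drupal_database mysql -u{some_user} -p{some_pass} {database_name}

export-mysql-data:
	docker stop drupal_database
	docker run --rm --volumes-from drupal_database carinamarina/backup backup \
	--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz
	docker start drupal_database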

3. Pull requests and local development make use of the MySQL data volume archive

Now, whenever developers want to refresh their local environment by wiping the existing database and re-importing the seed database, or, when a Travis CI build is triggered by a GitHub pull request, these processes can make use of an up-to-date MySQL data volume archive file which is super fast to restore! This way, we ensure we’re always testing against the latest content and configuration, and avoid running into costly issues having to troubleshoot inconsistencies with production.

Conclusion

We’ve invested heavily in Docker for our development stack, and this workflow update is a compelling addition to that toolkit since it has substantially sped up MySQL imports and boosted productivity. Try it out in your Docker workflow and we invite comments to field any questions and hear about your successes. Stay tuned for further Docker updates!

Nov 15 2016
Nov 15

Our clients are often looking to reach their audiences via email campaigns, and MailChimp is one of the solutions we frequently recommend for this. MailChimp makes it easy to create and manage email campaigns while also providing beneficial analytics on user behavior.

Earlier this year I wrote a blog post showing how to use Composer Manager along with the Mailchimp API v2.0 PHP package to subscribe users to mailing lists in a Drupal 6 or 7 custom module without the need for the Mailchimp contributed module.

However, since then, MailChimp API v3.0 was released and Mailchimp announced that v2.0 (and all prior versions) will no longer be supported after 2016.

So in this blog post, I’ll demonstrate how to accomplish the same objective using the new MailChimp API v3.0, and I’ll expand the tutorial to also include some Drupal 8 specifics.

Background

To quickly summarize the key takeaways from my previous blog posts on Composer Manager and subscribing users to MailChimp lists using the old API:

  • Composer is a tool for managing PHP libraries that your project depends on.
  • Challenges arise managing project-wide dependencies when custom and contributed modules specify their own unique dependencies.
  • Composer Manager is a contributed module for Drupal 7 (and formerly Drupal 6) that addresses these challenges and allows contributed and custom modules to depend on PHP libraries managed via Composer.
  • Using a Composer managed PHP package for the MailChimp API, we can easily subscribe users to MailChimp lists in a Drupal custom module without relying on the Mailchimp module.
  • While the Mailchimp contributed module is great, sometimes all you need is a simple, lightweight method for subscribing users to mailing lists.

One important development since my previous posts is that Composer Manager has been deprecated for Drupal 8. Improvements introduced in Drupal 8.1.0 allow modules to rely on Composer managed dependencies without the need for the Composer Manager module.

Implementation

There are a few steps we must take so that we can subscribe users to mailing lists in our custom module. We’ll review each of these steps in detail:

  • Add the MailChimp API v3.0 PHP library as a dependency of our custom module.
  • Ensure that the library is installed for our project.
  • Properly use the library in our custom module to subscribe users to mailing lists.

Specify the dependency

ThinkShout maintains the Mailchimp contributed module and we were very excited to see that as part of the effort to “get Drupal off the island” they also released a PHP library for MailChimp API v3.0.

To use this new library, we must specify it as a dependency of our custom module. We do that in a composer.json file that sits in our custom module’s root directory and requires that library via the following code:

{
  "require": {
    "thinkshout/mailchimp-api-php": ">=1.0.3"
  }
}

Install the library

Composer is intended to be used at the project level and therefore expects a Drupal site to have a single composer.json, so things get complicated when individual modules specify their own dependencies.

For Drupal 7 sites (or still active Drupal 6 sites), the Composer Manager contributed module handles this by merging the requirements specified by each custom and contributed module’s composer.json files into a single, consolidated, site-wide composer.json file.

So for Drupal 6/7 projects we’ll need Composer Manager installed and enabled.

Once enabled, we can generate the consolidated composer.json and then install all of the site’s dependencies that file specifies (including the MailChimp API v3.0 PHP library specified by our custom module) in one of two ways:

From the command line, we can run the following drush commands:

$ drush composer-json-rebuild
$ drush composer-manager install

Alternatively, we could wrap the same steps in an update hook (note that these Drush-provided functions are only available when the update runs via Drush):

// Re-build the consolidated composer.json and run composer update.
// The update hook name below is illustrative; use your module's own.
function mymodule_update_7100() {
  drush_composer_manager_composer_json_rebuild();
  drush_composer_manager('update');
}

For Drupal 8 sites, the process is slightly different. As mentioned previously, as of release 8.1.0 Drupal core directly uses Composer to manage dependencies, and the Composer Manager module is no longer necessary. For Drupal 8 sites, we should follow the Drupal.org instructions for managing dependencies for a custom project. Following those instructions ensures that all of the site’s dependencies, including the MailChimp library specified by our custom module, are installed.
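In practice, for a Composer-managed Drupal 8 project this typically amounts to a single command run from the project root; the version constraint here is an example:

composer require thinkshout/mailchimp-api-php:^1.0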

Use the library

Once we have the MailChimp API v3.0 PHP library installed, we can use it in our custom module to subscribe users to mailing lists.

We suggest creating a dedicated function for subscribing users to email lists, which can then be called throughout the custom module. For our purposes, we modeled that function on the Mailchimp module’s (version 7.x-4.6) mailchimp_subscribe_process() function.

We implemented the following function, which can be reviewed and modified for your specific purposes:

<?php
/**
 * Add an email to a MailChimp list.
 *
 * This code is based on the 7.x-4.6 version of the Mailchimp module,
 * specifically the mailchimp_subscribe_process() function. That version of
 * the Mailchimp contrib module makes use of the ThinkShout PHP library for
 * version 3.0 of the MailChimp API. See the following for more detail:
 * https://www.drupal.org/project/mailchimp
 * https://github.com/thinkshout/mailchimp-api-php.
 *
 * @see Mailchimp_Lists::subscribe()
 *
 * @param string $api_key
 *   The MailChimp API key.
 * @param string $list_id
 *   The MailChimp list id that the user should be subscribed to.
 * @param string $email
 *   The email address for the user being subscribed to the mailing list.
 */
function mymodule_subscribe_user($api_key, $list_id, $email) {

  try {
    // Set the timeout to something that won't take down the Drupal site:
    $timeout = 60;
    // Get an instance of the MailchimpLists class.
    $mailchimp = new \Mailchimp\MailchimpLists($api_key, 'apikey', $timeout);

    // Use MEMBER_STATUS_PENDING to require double opt-in for the subscriber. Otherwise, use MEMBER_STATUS_SUBSCRIBED.
    $parameters = array(
      'status' => \Mailchimp\MailchimpLists::MEMBER_STATUS_PENDING,
      'email_type' => 'html',
    );

    // Subscribe user to the list.
    $result = $mailchimp->addOrUpdateMember($list_id, $email, $parameters);

    if (isset($result->id)) {
      watchdog('mymodule', '@email was subscribed to list @list.',
        array('@email' => $email, '@list' => $list_id), WATCHDOG_NOTICE
      );
    }
    else {
      watchdog('mymodule', 'A problem occurred subscribing @email to list @list.', array(
        '@email' => $email,
        '@list' => $list_id,
      ), WATCHDOG_WARNING);
    }
  }
  catch (Exception $e) {
    // The user was not subscribed so log to watchdog.
    watchdog('mymodule', 'An error occurred subscribing @email to list @list. Status code @code. "%message"', array(
      '@email' => $email,
      '@list' => $list_id,
      '%message' => $e->getMessage(),
      '@code' => $e->getCode(),
    ), WATCHDOG_ERROR);
  }
}

With that function defined, we can then subscribe an email address to a specific Mailchimp mailing list through the following function call in our custom module:

mymodule_subscribe_user($api_key, $list_id, $email);
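For instance, a hypothetical Drupal 7 form submit handler might wire it up like this; the variable names and form structure are illustrative, not part of the library:

/**
 * Submit handler for a hypothetical newsletter signup form.
 */
function mymodule_signup_form_submit($form, &$form_state) {
  // API key and list ID stored via variable_set() elsewhere in the module.
  $api_key = variable_get('mymodule_mailchimp_api_key', '');
  $list_id = variable_get('mymodule_mailchimp_list_id', '');
  mymodule_subscribe_user($api_key, $list_id, $form_state['values']['email']);
}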

Conclusion

By taking advantage of the modern PHP ecosystem built on reusable Composer managed packages, we can easily build or adapt a custom module to subscribe users to mailing lists without the MailChimp contributed module.

Lastly, a special thanks to ThinkShout for their hard work maintaining the MailChimp module and creating the library, on which this approach depends!

Sep 23 2016
Sep 23

Here at Savas Labs, we listen to our clients’ needs, and what many of our clients need is to reach their target audiences effectively. Let’s be honest here - a perfectly coded, pretty-looking website will be of little use if it doesn’t produce leads, increase brand awareness, facilitate conversions, or generate revenue! So how do we help our clients achieve their goals? We get there by balancing website objectives while giving priority to lead generation via SEO. The more quality traffic that comes to the website, the more conversions we can achieve. It’s that simple!

There are multiple ways of generating traffic to a website. The most popular methods are SEO (Search Engine Optimization) and PPC (Pay Per Click). Both bring traffic through search engines. The difference between the two is that SEO brings long-term results boosting organic traffic while PPC helps marketers achieve short-term goals by gaining instant exposure throughout the duration of an ad campaign.

In this post I’ll share some insight about current SEO trends. I’ll also describe new features of Drupal 8 that make it the most SEO-friendly content management framework available today.

Let’s start by taking a look at what exactly Search Engine Optimization is.

What is SEO?

Search Engine Optimization (SEO) is a marketing discipline focused on optimizing a website’s architecture and content so that it performs well (read “ranks high”) in organic search engine results.

Search engines (Google, Bing, etc.) change their search algorithms many times throughout the year. There are over 200 ranking factors that become updated with every algorithmic change. It is worth noting that Google, one of the leading search engines, is steadily growing its market share in the world and the United States for both desktop and mobile search (see chart below).

Search Engine Market Share 2016

Google’s dominance in web search makes it clear that in 2017 marketers should pay close attention to Google’s algorithmic updates in order to stay ahead of the curve. Standards introduced to SEO by Google are likely to satisfy all other search engines. Given this reality, there are many techniques that website owners can use to optimize their digital property for search engine consumption. So where do we start? How do we know what efforts will bring us the best Return on Investment (ROI) and let our marketers do their job effectively in the long run?

SEO Outlook 2017

The Savas Labs team stays dialed in on the current trends of Search Engines Optimization. By leveraging aggregated research data and first-hand experience, we’ve developed a solid, yet constantly evolving, foundation of currently effective marketing methods.

Here are four ranking factors that we’ve identified as being most important as of Q4 2016. Our forecast is that these four factors will likely remain at the top of the list throughout 2017.

1. Content

Content is still king! Yes, that’s right. It is and it always will be! You’ve got to be relevant in order to even appear in search. And nothing will make you more relevant than carefully crafted, practical, awesome, juicy, shareable, actionable (you name it) CONTENT! It is important to note that marketers should stop thinking of content as purely text and focus their efforts on providing visual content that supports storytelling, is engaging and matches user intent.

2. Backlinks

Not just any good ol' link to your website will do. Good backlinks come from high-authority domains in the same niche as your website. Strong backlinks bring quality traffic and are therefore considered highly desirable to your SEO cause.

3. Responsive Design

With more people using their handheld devices to browse the internet, it has become increasingly important to make a website look good across multiple platforms (smartphone, tablet, etc.). It is not an option in 2017 - it is a necessity! While we won’t get into the notoriously labeled Google algorithm update “Mobilegeddon” that happened in April 2015, we will provide some interesting statistics to back up the importance of responsive design.

There are more mobile internet users than desktop internet users; 52.7% of global internet users access the internet via mobile, and 75.1% of U.S. internet users access the internet via mobile.

4 out of 5 consumers use a smartphone to shop.

4. Page Speed

In response to the substantial increase in mobile traffic growth, search engines have acknowledged the importance of page speed and the effect it has on user experience (UX) and now give more weight to fast-loading websites.

40% of people abandon a website that takes more than 3 seconds to load.

A 2-second delay in load time during a transaction resulted in abandonment rates of up to 87%. This is significantly higher than the baseline abandonment rate of 67%.

Can your business handle the loss in revenue that may occur from slow page load speed?

Drupal 8 - Built with SEO in Mind

The base of all our SEO efforts lies within the website's architecture. There are many website engines and CMSs to choose from, and most of them will claim to be SEO optimized. Don't be fooled! No CMS comes search engine optimized out of the box. It may have some features which, if configured correctly, may bring you some SEO benefits. SEO is not only about code, though it does start there. SEO is also about the continuous efforts of your marketing team. We all know that time = money. The more efficient your marketing team is in performing tasks within your CMS, the more ROI you get!

A good CMS must provide the means for your marketing team to work independently from your development team. Drupal 8 does just that! It provides a solid framework that can be tuned into a powerful marketing machine.

Let’s take a look at some of the new features in core that make search engines love Drupal 8.

Drupal 8 is Responsive out of the Box

Drupal 8 comes with responsive themes in core. Now both public-facing and admin-facing themes are responsive, making the user experience great on any device.

Drupal 8 Page Load is Fast

There has been a lot of debate about Drupal 8 vs. Drupal 7 performance and page load since Drupal 8's release. It is a fact that vanilla Drupal 8 runs much more code than vanilla Drupal 7. It runs vendor code like Symfony, which adds some overhead. However, Drupal 8 has a significant number of performance improvements that make up for that overhead:

  • JavaScript files now load in the footer. Thanks to this change, pages render faster and users can see and use them sooner.

  • Pluggable CSS/JS aggregation and minification, opening the door to better optimization algorithms.

  • Greatly improved caching. Drupal 8 uses "cache tags" that make caching more efficient, and includes the Cache Context API, which provides context-based caching (see the sketch after this list). This means pages load faster while ensuring that visitors always see the latest version of your site.

  • BigPipe render pipeline. Sends pages in a way that allows browsers to render them much faster: first the cacheable parts of the page, then the dynamic/uncacheable parts. Uses the BigPipe technique pioneered at Facebook.
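
As a quick illustration of that cacheability metadata, here is a minimal sketch of a Drupal 8 render array; the markup, tag, and context chosen here are illustrative, not prescriptive:

<?php
$build = array(
  '#markup' => t('Latest blog posts'),
  '#cache' => array(
    // Invalidated automatically whenever any node is added/changed/deleted.
    'tags' => array('node_list'),
    // Vary the cached copy per combination of user roles.
    'contexts' => array('user.roles'),
    // And expire it after an hour regardless.
    'max-age' => 3600,
  ),
);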

These improvements have the potential to make your Drupal 8 website fly! And if, after all that, it is not "flying," then you need someone to review the code that powers your website's features. Contact us.

Semantic Markup

Search engines appreciate clean markup that explicitly describes the purpose of on-page elements. Thanks to the HTML5 Initiative for Drupal 8 development, we now have a number of great markup improvements right in Drupal core:

  • HTML5 themes with new semantic elements in core templates

  • Support for the new form elements to Drupal’s Form API

  • Rich media handling with <video> and <audio> elements

  • ARIA roles in markup to improve accessibility

  • Resource Description Framework (RDF) support that provides a standardized model for data interchange and facilitates Schema.org mappings

  • Twig theming engine - makes it harder for developers to create messy, non-semantic code

Content-as-a-Service

Another exciting new feature of Drupal 8 is flexible content delivery.

Today, content owners want to get their content onto as many platforms and channels as possible: web, mobile, social networks, smart devices, etc. It is expensive to have a separate solution for every channel. It is much more efficient to have a single editorial team and a single software platform that allows for well-organized content management. Drupal 8 and its content-as-a-service capability provide a one-stop solution where content is created and managed via a unified web interface and then consumed by other channels with minimal effort.

Drupal 8 Multilingual Capabilities

To reach audiences around the world, companies need to speak to users in their native languages. In 2017, producing content in English alone is not enough, even if English is considered an internationally accepted language. The United States is now the world's second-largest Spanish-speaking country after Mexico, which amplifies the necessity of serving multilingual content to a U.S.-based audience. To help put things in perspective, we checked recent statistics.

English is the #1 language used on the Web, but it only amounts to 26.3% of the online market.

There are 41 million native Spanish speakers in the U.S. Around 79% of them use search engines on a daily basis to gather information about a future purchase.

Reaching a global audience with Drupal has never been this easy! Previous versions of Drupal had only partial support for multilingual websites. Luckily, Drupal 8's multilingual system received a fundamental overhaul. Every single component is translatable out of the box in Drupal core without any additional modules. Drupal core natively supports 94 languages, and the administration interface is now entirely translatable. Media assets (files or images) can now be assigned to a language or shared between languages. This gives a huge advantage to businesses that aim to reach a global audience.

SEO for Drupal 8 is off to a good start with just the core features! Drupal 8 also has a growing number of contributed modules that can amplify your SEO efforts. Just to name a few: Metatag, Google Analytics, Pathauto, Redirect, and more.

Drupal 8 satisfies current SEO trends, enabling marketers to do their jobs effectively and efficiently! Even with minimal configuration, Drupal 8 lays a solid base for future marketing performance.

Jun 16 2016
Jun 16

This is part 2 of a series on using XHProf for profiling Drupal modules.

After you’ve installed XHProf, what’s next? How can you make use of its recommendations to tune a module or a website? Unfortunately there are no hard-and-fast rules for optimizing code to run more efficiently. What I can offer here is my own experience trying to optimize a D8 module using XHProf.

Understanding an XHProf run report

The XHProf GUI displays the result of a given profiler run, or a group of runs. It can even compare 2 runs, but we’ll get to that in a minute. If you followed my previous post, you should have the xhprof_html directory symlinked into the root web directory for your Drupal site; so visiting <my-local-site>/xhprof/ should give you a list of all available stored run IDs, and you can click through one of those to view a specific run report.

You can also go directly to a specific run report via the URL <my-local-site>/xhprof/index.php?run=<run-id>&source=<source-id> (which you should have been logging already via an echo statement or dblog if you followed the last post).

Header of an XHProf run report

The core of the run report is a table of each function or method which your code called while it was being profiled, along with a set of statistics about that function. This allows you to understand which parts of your code are most resource-intensive, and which are being called frequently in the use case that you’re profiling. Clicking on any one of the column headers will sort the list by that metric. To understand this report, it’s helpful to have some terminology:

  • Incl. Wall Time - The total clock time elapsed between when a function call started and when the function exited. Note that this number is not a great basis for comparisons, since it can include other processes which were taking up CPU time on your machine while the PHP code was running, from streaming music in the background to PHPStorm’s code indexing, to random web browsing.
  • Incl. CPU Time - In contrast to wall time, CPU time tracks only the time which the CPU actually spent executing your code (or related system calls). This is a more reliable metric for comparing different runs.
  • Excl. Wall/CPU Time - Exclusive time measurements only count time actually spent within the given method itself. They exclude the time spent in any method/function called from the given function (since that time will be tracked separately).

In general, the inclusive metrics (for CPU time and memory usage) will give you a sense of what your expensive methods/functions are – these are the methods or functions that you should avoid calling if possible. In contrast, the exclusive metrics will tell you where you can potentially improve the way a given method/function is implemented. For methods which belong to Drupal Core or other contrib modules, inclusive and exclusive metrics are basically equivalent, since you don’t usually have the option of impacting the implementation details of a function unless you’re working on its code directly. Note also that because your overall parent method and any other high-level methods in your code will always show up at the top of the inclusive time chart, you may have better luck really understanding where your performance hits come from by sorting by exclusive CPU time.

Take a step back and plan your test scenarios

Before digging in to optimizing your module code, you need to take a step back and think about the big picture. First, what are you optimizing for? Many optimizations involve a tradeoff between time and memory usage. Are you trying to reduce overall run time at the expense of more memory? Is keeping the memory footprint of a given function down more important? To answer these questions you need to think about the overall context in which your code is running. In my case, I was optimizing a background import module run via cron, so the top priority was that the memory usage and number of database operations were low enough not to impact user-facing site performance.

Second, what use case for your code are you profiling? If this is a single method call, what arguments will be passed? If you’re profiling page loads on a website, which pages are you profiling? In order to successfully track whether the changes you’re making are having an impact on the metrics you’re concerned about, you need to be able to narrow down the possible use cases for your code into a handful of most-likely real world scenarios which you’ll actually choose to track via the profiler.

Keep things organized

Now it’s time to get organized. Write a simple test script so that you can easily run through all your use cases in an automated way – this is not strictly necessary, but it will save you a lot of work and potential error as you move through the process. In my case, I was testing a drush command hook, so I just wrote a bash shell script which executed the command three times in each of two different ways. For profiling page loads, I would recommend using Apache JMeter - and you’ll need to consider whether you want to force an uncached page load by passing a random dummy query parameter. Ideally, you should be running each scenario a few times so that you can then average the results to account for any small variations in run-time.
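
A minimal sketch of such a harness, assuming a hypothetical Drush command and profile flag (substitute your own scenarios):

#!/bin/bash
# Run each profiling scenario three times so the results can be averaged.
for i in 1 2 3; do
  drush mymodule-import --profile          # use case 1: default import
done
for i in 1 2 3; do
  drush mymodule-import --profile --update # use case 2: hypothetical variant
done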

Keeping your different runs organized is probably the most important part of successfully profiling module code using XHProf! Each run has a unique run ID, but you are solely responsible for knowing which use case scenario and which version of the codebase that run ID corresponds to. I set up a basic spreadsheet in OpenOffice where I could paste in run numbers and basic run stats to compare (but there’s almost certainly a nicer automated way to do this than what I used).

Screenshot of an OpenOffice spreadsheet summarizing XHProf results for various runs

Once you have a set of run IDs for a given use case + codebase version, you can generate a composite XHProf report using the query string syntax http://<your-local-site>/xhprof/index.php?run=<first-run-id>,<second-run-id>,<third-run-id>&source=<source-string>. Averaging across a few runs should give you more precise measurements for CPU time and memory usage, but beware: if parts of your code involve caching, you may want to either throw out the first run's results for each version of the code base (since that's where the cache will be generated) or clear the cache between runs.

Go ahead and test your run scripts to make sure that you can get a consistent baseline result at this point – if you’re seeing large differences in average total CPU times or average memory usage across different runs of the same codebase, you likely won’t be able to compare run times across different versions of the code.

Actually getting to work!

After all this set-up, you should be ready to experiment and see what impact changes in your code base have on the metrics that you want to shift. In my case, the code I was working on used a streaming JSON parser class, and I noticed that one of the top function calls in the initial profiler report was the consumeChar method of the parser.

Image of XHProf profiler report with the method consumeChar highlighted in yellow

It turns out that the JSON files I was importing were pretty-printed, and thus contained more whitespace than they needed. Since the consumeChar method gets called on each incoming character of the input stream, that added up to a lot of unnecessary method calls in the original code. By tweaking the JSON file export code to remove the pretty-print flag, I cut down the number of times this method was called from 729,099 to 499,809, saving .2 seconds of run time right off the bat.

That was the major place where the XHProf profiler report gave me insights I would not have had otherwise. The rest of my optimizing experience was mostly testing out some of the common-sense optimizations I had already thought of while looking at the code – caching a table of known Entity IDs rather than querying the DB to check if an entity existed each time, using an associative array and isset() to replace in_array() calls (sketched below), and cutting down on unnecessary $entity->save() operations where possible.
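
Here is a quick sketch of that array-lookup swap (the variable names are illustrative): in_array() scans the array linearly, while flipping the IDs into keys lets isset() do a constant-time hash lookup.

<?php
// Before: O(n) scan on every membership check.
if (in_array($entity_id, $known_ids)) {
  // Entity already exists; skip it.
}

// After: flip IDs into keys once, then check with O(1) isset().
$known_ids = array_flip($known_ids);
if (isset($known_ids[$entity_id])) {
  // Entity already exists; skip it.
}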

It's worth mentioning that, across the board, the biggest performance hit in your Drupal code will probably be database calls, so cutting down on those wherever possible will save run time (sometimes at the expense of memory, if you're caching large amounts of data). Remember, also, that if the DB log is enabled each logging call is a separate database operation, so use the log sparingly – or just log to syslog and use a service like Papertrail or Loggly on production sites.

The final results

As the results below show, using XHProf and some thoughtful optimizations I was able to cut total run time significantly in one use case (Case 2) and slightly in another use case (Case 1). Case 1 was already running in a reasonable amount of time, so here I was mostly interested in the Case 2 numbers (assuming I didn’t make anything too much worse).

Bar chart comparing the run time of various runs

Think of the big picture, part 2

Remember that controlled experimental metrics are just a means to understanding and improving real-world performance (which you can also measure directly using tools like Blackfire, but that's another blog post). In my case, at the end of the day we decided that the most important thing was to ensure that there wasn't a performance impact on the production site while this background import code was running; so one of the optimizations we actually ended up implementing was to force this code to run slower, throttling $entity->save() operations to at most one every half-second or so, as a way to minimize the number of requests MySQL had to respond to from the importer. XHProf is a powerful tool, but don't lose the forest for the trees when it comes to optimization.

May 26 2016
May 26

XHProf is a profiling tool for PHP code – it tracks the amount of time and memory your code spends in each function call, allowing you to spot bottlenecks and identify where it's worth spending resources on optimization. There have been a number of PHP profilers over the years, and XDebug includes a profiler as well, but XHProf is the first one I've successfully managed to configure correctly and interpret the output of.

I had run across a number of blog posts about using XHProf + Drupal, but never actually got it to work successfully for a project. Because so much of the documentation online is incomplete or out-of-date, I thought it would be useful to document my process of using XHProf to profile a Drupal 8 custom module here. YMMV, but please post your thoughts/experiences in the comments!

How to find documentation

I find the php.net XHProf manual entry super-confusing and circular. Part of the problem is that Facebook’s original documentation for the library has since been removed from the internet and is only accessible via the WayBack Machine.

If there’s only one thing you take away from this blog post, let it be: read and bookmark the WayBack machine view of the original XHProf documentation, which is at http://web.archive.org/web/20110514095512/http://mirror.facebook.net/facebook/xhprof/doc.html.

Install XHProf in a VM

If you’re not running DrupalVM, you’ll need to install XHProf manually via PECL. On DrupalVM, XHProf is already installed and you can skip to the next step.

sudo pecl install xhprof-beta

Note that all these commands are for Ubuntu flavors of Linux. If you're on Red Hat / CentOS you'll want to use the yum equivalents. I had to first install the php5-dev package to get PECL working properly:

sudo apt-get update
sudo apt-get install php5-dev

And, if you want to view nice callgraph trees like the one below, you'll need to install the graphviz package:

sudo apt-get install graphviz

Image of a sample XHProf callgraph

Configure PHP to run XHProf

You need to tell PHP to enable the xhprof extension via your php.ini files. Usually these are in /etc/php5/apache2/php.ini and /etc/php5/cli/php.ini. Add the following lines to the bottom of each file if they’re not there already. You will also need to create the /var/tmp/xhprof directory if it doesn’t already exist.

[xhprof]
extension=xhprof.so
;
; directory used by default implementation of the iXHProfRuns
; interface (namely, the XHProfRuns_Default class) for storing
; XHProf runs.
;
xhprof.output_dir="/var/tmp/xhprof"

Lastly, restart Apache so that the PHP config changes take effect.
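
On Ubuntu, for example:

sudo service apache2 restart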

Set up a path to view the XHProf GUI

The XHProf GUI runs off a set of HTML files in the xhprof_html directory. If you’ve been following the install steps above, you should be able to find that directory at /usr/share/php/xhprof_html. Now you need to set up your virtual host configuration to serve the files in the xhprof_html directory.

I find the easiest way to do this is just to symlink the xhprof_html directory into the existing webroot of whatever site you’re working on locally, for example:

ln -s /usr/share/php/xhprof_html /var/www/my-website-dir/xhprof

If you’re using DrupalVM, a separate vhost configuration will already be set up for XHProf, and the default URL is http://xhprof.drupalvm.dev/ although it can be changed in your config.yml file.

Hooking XHProf into your module code

Generally, the process of profiling a chunk of code using XHProf goes as follows:

  1. Call xhprof_enable()
  2. Run the code you want profiled
  3. Once the code has finished running, call xhprof_disable(). That function will return the profiler data, which you can either display to the screen (not recommended), or…
  4. Store the profiler data to a file by creating a new XHProfRuns_Default(); object and calling its save_run method.

In the case below, I’m profiling a module that implements a few Drush commands from the command line which I’d like to optimize. So I created _modulename_xhprof_enable() and _modulename_xhprof_disable() functions – the names don’t matter here – and then added a --profile flag to my Drush command options which, when it is set to true, calls my custom enable/disable functions before and after the Drush command runs.

Here’s what those look like in full:

<?php
/**
 * Helper function to enable xhprof.
 */
function _mymodule_enable_xhprof() {
  if (function_exists('xhprof_enable')) {
    // Tell XHProf to track both CPU time and memory usage
    xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY,
      array(
        // Don't treat these functions as separate function calls
        // in the results.
        'ignored_functions' => array('call_user_func',
          'call_user_func_array',
        ),
      ));
  }
}

/**
 * Helper function to disable xhprof and save logs.
 */
function _mymodule_disable_xhprof() {
  if (function_exists('xhprof_enable')) {
    $xhprof_data = xhprof_disable();

    //
    // Saving the XHProf run
    // using the default implementation of iXHProfRuns.
    //
    include_once "/usr/share/php/xhprof_lib/utils/xhprof_lib.php";
    include_once "/usr/share/php/xhprof_lib/utils/xhprof_runs.php";

    $xhprof_runs = new XHProfRuns_Default();

    // Save the run under a namespace "xhprof_foo".
    //
    // **NOTE**:
    // By default save_run() will automatically generate a unique
    // run id for you. [You can override that behavior by passing
    // a run id (optional arg) to the save_run() method instead.]
    // .
    $run_id = $xhprof_runs->save_run($xhprof_data, 'xhprof_mymodule');

    echo "---------------\nAssuming you have set up the http based UI for \nXHProf at some address, you can view run at \nhttp://mywebsiteurl.dev/xhprof/index.php?run=$run_id&source=xhprof_mymodule\n---------------\n";
  }
}

The echo command here works fine for a Drush command, but for other tasks you could log the run URL using watchdog.
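
In a Drupal 8 module, for instance, a hypothetical equivalent using the logger service (which backs dblog/watchdog) might look like:

<?php
// Log the run URL so it can be retrieved from the site's log later.
\Drupal::logger('mymodule')->info(
  'XHProf run saved: /xhprof/index.php?run=@run&source=xhprof_mymodule',
  array('@run' => $run_id)
);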

Note: Another way to run XHProf on a Drupal site is using the XHProf module, but I haven’t had great luck with that.

Viewing profiler results

If everything is configured correctly, when you run your module you should get a run ID output either to the screen (via echo, as above, or however you’ve configured this logging). Visit the URL you configured above for xhprof, and you should see a list of all the stored runs. Clicking on a run will bring up the full profiler report.

Sample screenshot of an XHProf profiler report

Now what?

Now you’ve got all this data – how to make sense of it? What to do with it? Stay tuned for more discussion of how to interpret XHProf results and a real-world example of profiling a D8 module, next week!

Feb 24 2016
Feb 24

This is part 4 of a series investigating what to do with your Drupal 6 site as EOL has now come.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or Drupal 8?

Should I upgrade my Drupal 6 site to 7 or 8?

Today Drupal 6 reaches retirement, or end of life (EOL), which means the Drupal community and security team will no longer officially support active development, security fixes, and bug fixes for the platform. The community must instead focus its resources on maintaining Drupal 7 and 8.

Of course, since Drupal is open source software, even though EOL for Drupal 6 is today, some organizations will continue to operate their live websites on Drupal 6 for some time. Upgrading to a current, supported version of Drupal is recommended, but whether Drupal 7 or Drupal 8 is the best fit for your needs is conditional. Drupal 8 provides enhanced multilingual capabilities, responsiveness out of the box, improved UX with WYSIWYG out of the box and in-place content editing, and improved UX and DX (developer experience) with staging and deployment improvements, to name just a few advantages over Drupal 7. However, there are cases in which Drupal 8 may not be able to satisfy your organization's needs cost-effectively in the short term, before the community has matured around it. You may liken this comparison to deliberating on your next car purchase: a Tesla vs. the Honda Civic you're accustomed to. The Tesla promises new and shiny features, but you may have to adjust to things like refueling with electricity, not gas, and automatic steering. With the Civic, you know you're getting a reliable, solid machine that will get the job done, though it may not turn heads. Drupal 7 is the Civic; Drupal 8, the Tesla.

The "if it ain't broke, don't fix it" perspective is valid, as the enterprise-level websites that put Drupal on the map, such as Whitehouse.gov and Weather.com, are Drupal 7 sites.

Having said that, we’ll share the main factors to weigh in deciding which platform makes the most sense for your organization.

Top considerations in selecting Drupal 7 or 8

How familiar with OO PHP is the development team?

The mantra of Drupal 8 is "getting off the island," which means leveraging well-vetted systems and processes that already exist in the broader PHP and web development community. This is in contrast to the preceding Drupal approach, which relied on custom, esoteric design decisions familiar only to the Drupal community, contributing to the steep learning curve for Drupal development. To that end, Drupal 8 was rewritten in object-oriented PHP, a major architectural change. Although this point may be a bit Greek to non-technical site owners, it is important to know how versed the team working on the upgrade is in object-oriented programming, and it is worth asking about when vetting prospective development teams.

How complex is the site?

One of Drupal's greatest strengths is its extensibility, with a robust design that facilitates writing and maintaining custom code for specific client needs. Relatedly, the very large contributed module repository (33,289 modules as I write this) gives developers a wide array of "plug-and-play" functionality for free, which enables fairly complex sites at a relatively low cost.

If your site has fairly complex functionality, it is likely that the development team(s) who have worked on it have installed many contributed modules, and/or written custom modules themselves.

A quick peek into your Drupal website files should shed some light on this:

 /sites/all/modules

Ideally, previous developers would have had the foresight to distinguish between community-contributed modules and custom modules written by them by implementing this simple directory structure:

 /sites/all/modules/contrib/
 /sites/all/modules/custom/

E.g. Tilthy Rich Compost

The fewer modules in those directories, the more you should lean toward Drupal 8 as the stronger candidate. If there are many, which there often are, the decision gets more complicated. You'll likely need an estimate from a development team on the custom module functionality, and you'll want to consider whether the customization you previously requested is still relevant. In general, upgrading is a good time to trim the fat and focus on true organizational needs to simplify the process and limit future costs. A quick module count, sketched below, can ground that conversation.
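
One rough way to get that count, assuming the contrib/custom split above (each Drupal module ships a .info file, so this over-counts packages that bundle submodules):

find sites/all/modules/contrib -name '*.info' | wc -l
find sites/all/modules/custom -name '*.info' | wc -l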

As far as contributed modules go, the reality is that in the early months of Drupal 8 the quality and coverage of contributed modules will lag behind Drupal 7. The good news is there are public resources that can help you understand the status and availability of each module you require, which will help you determine Drupal 8's readiness for your needs.


In some cases a mission-critical contributed module will not have an official Drupal 8 release, which may force the decision toward Drupal 7 for budget reasons. We recently bid on a project for which we recommended Drupal 7, since the project relied heavily on the Salesforce suite, which Kosta has played a role in developing.

The Shibboleth authentication module, which some universities rely on, is another example; it has no official Drupal 8 path at the moment, partly due to the module's complexity.

A final important point is that some of the most popular contributed Drupal 7 modules have been folded into Drupal 8 core. Therefore, a much higher threshold of sophistication can be achieved out of the box with Drupal 8 than was previously possible with Drupal 7.

What is my timeline to upgrade?

If you take the security concerns of unsupported software seriously, you'll want to upgrade as soon as possible. However, sometimes website upgrades are more beholden to internal budgeting than to security threats. A general rule of thumb: the more time that goes by from today, the more you should lean toward Drupal 8. The contributed module community will continue to improve in the weeks and months to come, and you'll want to take advantage of the more mature and modern platform that is Drupal 8. Don't forget, Drupal 7 was released in early 2011, so its foundation is over 5 years old as I write this.

What are my future website goals and budget?

The less your organization will budget for future site improvements, the stronger the pull toward Drupal 8, to ensure you're maximizing the time before you have to read an article like this again. The truth is, by the time we're honestly talking about Drupal 10 (which would phase out Drupal 8 if things continue as they have thus far), the upgrade path for a Drupal site, and the web in general, will look very different. Predicting the web 5-7 years ahead of time is a tall task. Having said that, the total cost of ownership, and specifically the upgrade burden, of Drupal projects is something the community is looking at closely. A Drupal 6 to Drupal 8 upgrade path does exist, and a lot of thought has been given to simplifying the release cycle, which should ease the upgrade process as well. If your typical redesign cycle is within 2 to 4 years, you're likely safe with a Drupal 7 site for at least that long.

Should I even be using Drupal at all?

A worthy partner should be knowledgeable enough to know when Drupal is not the right fit, and honest enough to share that information with you. We pride ourselves on being one of those. As broadly applicable as the Drupal platform is for modern web projects, it isn't always the right choice. It's worth asking prospective teams whether you should even be on Drupal.

Where does that leave me?

Ultimately, site owners should clearly communicate their needs and desires for the new and improved vision of their website, and lean on their web partner to guide them in the right direction through open and honest conversations. On new site builds leading up to and after the official release of Drupal 8.0.0, we have deferred the decision to the tail end of our discovery phase. We have found that making the most informed decision requires thorough analysis, which takes time. I encourage you to take some time with your partners weighing the pros and cons.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or Drupal 8?

Jan 25 2016
Jan 25

This is part 3 of a series investigating what to do with your Drupal 6 site as EOL approaches.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

The options of operating Drupal 6 after EOL

Though some advisers (solicited or not) might tell you otherwise, you always have options as to what to do with your website if you're running Drupal. One of them is, of course, to do nothing; the others require more thought.

What is the right thing to do can be a hot-button topic. Some technologists are purists and consider it heresy to even contemplate running software beyond EOL. However, we appreciate that although ideally site owners would always have been prepared and budgeted for software upgrades, sometimes life happens, and other business priorities and budget availability don't perfectly align with the timing of a software upgrade.

Ben Affleck in the Hollywood movie Armageddon

One end of the spectrum is well captured by the following quotation from the Drupal security team, which some (certainly not us :wink:) considered an alarmist reaction to one of the most prolific Drupal security vulnerabilities to date, colloquially referred to as "Drupalgeddon":

You should proceed under the assumption that every Drupal 7 website was compromised unless updated or patched before Oct 15th, 11pm UTC, that is 7 hours after the announcement.

This is of course meant to err on the side of caution, but it can also be very anxiety-provoking. Similarly, upon hearing of the 3-month window for Drupal 6 support after Drupal 8.0.0's release, "Le dendrite" is not wrong to say:

someone please settle my stomach

I was shocked to read this. it might seriously wreck me, i really hope i’m misunderstanding this

will my sites just die?

That entire thread is a pretty good exposé into the various perspectives and preparedness on upgrading Drupal 6 sites. If you’ve got a spare half-day, give it a once-over.

On the other hand, we have had at least two clients in 2015 who continued to run Drupal 5 (not a typo) websites in production successfully.

Neither end is entirely wrong; they just represent differences of opinion and available options.

The primary options for Drupal 6 site owners are:

  1. Do not upgrade your website to a newer Drupal version
  2. Do upgrade your website to a newer Drupal version
  3. Use the time to seriously re-envision your website altogether

But how do you decide which option is best for you, the site owner?

That decision mostly rests on your organization's:

  • risk tolerance
  • likelihood to be targeted by hackers
  • budget to improve your website

Considerations to not upgrade

Choosing not to upgrade your website is typically dictated by your organization having neither the internal technical expertise nor sufficient funds to hire a partner. When this is the case, if a small budget can be afforded, a site audit is usually the next best option: it offers some peace of mind that you're taking what measures you can to mitigate the risk of running outdated, unsupported software. A qualified partner can perform a thorough risk assessment and make budget-conscious recommendations on how best to harden the site against outside threats, short of the more costly full-site upgrade.

Primary attack vectors we focus on lie in custom code and under-supported contributed modules, since core code, having been well vetted by years in the wild, represents much lower risk.

wolves in the wild

One other consideration, independent of your specific website configuration, is your server autonomy as dictated by your hosting provider. Running old software is usually tolerated in a shared hosting environment for only so long before upgrades are enforced. Warning emails are often (though not always) sent out in advance of server-level upgrades; however, it's easy to miss those emails, and it hurts to be caught by surprise when some PHP version upgrade renders your site completely useless. Understanding the level of control you, as the site owner, have over your server is crucial.

Considerations to upgrade

Going through a typical Drupal upgrade (the update.php method) is usually the right option for you if you

  • Want to mostly preserve the site to be upgraded as-is.
  • Have a fairly straightforward site using few, and mainly popular, contributed modules.
  • Decide you ought to be using supported software because you
    • have sensitive information within your site that would be disastrous if compromised.
    • have financial transactions or information housed within your site.
  • Have sufficient budget to do the work.

Upgrading a Drupal website is a non-trivial investment. We’ve definitely had conversations with clients to the tune of

“oh, upgrading is not a click of a button? Phooey!”

which is very understandable. The Drupal community's stance of not retaining backwards compatibility for the sake of keeping pace with modern web development is solid and quite defensible. Acquia does a good job of addressing that in the "Support" section (unfortunately not directly linkable) of this informative article about the total cost of ownership of a Drupal site, which is in itself a topic worth a lot of consideration and analysis.

When prospective clients are committed to the time-line and investment of an upgrade, we always advise them to seriously consider a comprehensive rebuild/redesign as it can often present the path of most value. It is uncommon that a client desires to invest significantly to create an exact replica of the site they built 3, 4, or 5 years ago.

In many cases, legacy content, functionality, and style have outlived their relevance after 3 years, becoming more burdensome than valuable, in the same way a poorly maintained house on a nice lot may be worth less than the empty lot.

Another consideration is whether or not Drupal is the right platform for your organization's needs. The answer is usually yes, especially once there is organizational investment and familiarity. But for less complicated sites that are open to re-envisioning, there are other options that, for example, are free to host and have no upgrade requirements, like the static site generator Jekyll, with which our own website is built. An honest consultant will tell you when Drupal is a good fit and when it's not.

But alas, if you’re determined to stick with Drupal and you’d like to upgrade, the final consideration is what platform to upgrade to, Drupal 7 or 8?

Ah-ah-ah…

Dennis Nedry from Jurassic Park saying ah ah ah

that’s for the next episode!

Part 4 and beyond

Next, we’ll discuss Drupal 7 vs. Drupal 8 decision-making (or not Drupal at all!).

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

Jan 22 2016
Jan 22

Overview

In my last blog post, I wrote about the virtues of Composer Manager and how it allows modules to depend on PHP libraries managed via Composer. Basically, Composer Manager allows us to easily use PHP libraries that exist outside of the Drupal ecosystem within our own projects.

In this post, I'll show you how we used Composer Manager to add the MailChimp PHP library as a dependency of a custom module and then subscribe users to a mailing list with a single API call.

Custom vs. Contrib?

But first, why didn’t we just use the MailChimp contributed module? Contributed modules are often a great option and offer many benefits, such as security, maintenance, and flexibility.

But there is a cost to installing all those contributed modules. As The Definitive Guide to Drupal 7 explains, "The more modules you install, the worse your web site will perform."

With each installed module comes more code to load and execute, and more memory consumption. And in some cases, contributed modules add complexity and features that just aren’t necessary for the required task.

In our case, the decision to go with a custom solution was easy:

  • The MailChimp contributed module had many features we didn’t need.
  • We were already using Composer Manager on the project to manage other module dependencies.
  • The custom module we were building already included logic to determine when to subscribe users to mailing lists (don't worry, we made sure they opted in!).

All we needed was a simple, lightweight method for subscribing a given user to a specific MailChimp mailing list.

Implementation

We were able to achieve this by adding the MailChimp PHP library as a dependency of our custom module. We were then able to make a simple call using the API to subscribe a user to the mailing list. We implemented this via the following code.

First, in our module’s root directory we created a composer.json file that specified the MailChimp PHP library as a dependency:

{
  "require": {
    "mailchimp/mailchimp": "*"
  }
}
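
As an aside, the "*" constraint accepts any version of the library; pinning a version range is generally safer for reproducible builds. A hypothetical, more conservative constraint might look like:

{
  "require": {
    "mailchimp/mailchimp": "^2.0"
  }
}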

We then installed the MailChimp library using the Composer Manager Drush commands:

$ drush composer-json-rebuild
$ drush composer-manager install

As explained in my last post, the first command builds (or rebuilds) the consolidated project wide composer.json file and the second command installs the dependencies.

Next, we created a function in our custom module to subscribe a user to a MailChimp mailing list.

<?php
/**
 * Add an email to a MailChimp list.
 *
 * @param string $api_key
 *   The MailChimp API key.
 * @param string $list_id
 *   The MailChimp list id that the user should be subscribed to.
 * @param string $email
 *   The email address for the user being subscribed to the mailing list.
 */
function my_module_subscribe_user($api_key, $list_id, $email) {

  $mailchimp = new Mailchimp($api_key);

  try {
    $result = $mailchimp->lists->subscribe($list_id, array('email' => $email));
  }
  catch (Exception $e) {
    watchdog('my_module', 'User with email %email not subscribed to list %list_id', array(
      '%email' => $email,
      '%list_id' => $list_id,
    ), WATCHDOG_WARNING);
  }
}

With that function defined, we could then subscribe any user to any mailing list by simply calling

<?php
my_module_subscribe_user($api_key, $list_id, $email);

Conclusion

That’s it! A nice, simple, and clean approach to subscribing users to a MailChimp mailing list that doesn’t require installation of the MailChimp contributed module.

We hope you’re as excited as we are at the opportunities Composer and Composer Manager afford us to take advantage of PHP libraries and projects that exist outside of the Drupal ecosystem.

Dec 10 2015
Dec 10

This is part 2 of a series investigating what to do with your Drupal 6 site as EOL approaches.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

The risks of operating Drupal 6 after EOL

risk

It is helpful to think of the risks of running publicly-accessible software, like a Drupal website, after end-of-life (EOL) from two distinct perspectives.

  1. Bad things that can happen to your organization due to known security vulnerabilities that may be exploited by the public.
    • These are the kinds of risks we typically think of when running outdated software. Hackers of the worst kind… the ones that are after you!
  2. Good things that are less likely to happen (or even be possible) to your site when you run old software.
    • Though easy to overlook, it's important not to underestimate this opportunity cost; it is more subtle, often less pressing, and harder to quantify, but ultimately often more costly than the former.

Bad things that can happen

hacker… Scary, huh? …

In the worst of scenarios, it is possible to experience

  • Complete site compromise and escalated server access due to a vulnerability exposed by website code.

Yes, it is possible that with outdated and insecure code, especially when the exploit is publicly disseminated, your site is exposed to complete control by a nefarious (or simply bored) hacker. The best and decreasingly recent example in the Drupal world is a security vulnerability (for sites running Drupal 7.31 and earlier and Drupal 8.0.0-beta2 and earlier) affectionately referred to as "Drupalgeddon" (yes, the world is coming to an end if your website is compromised). You can read until you're blue in the face about this significant exploit, but suffice it to summarize that a visitor without an account on your site could completely control it with a sophisticated exploit of this vulnerability. Given that the hacker will have access to nearly everything on the server, he may choose his means to best exploit your site. He may:

  • Add spam marketing links throughout the site.
  • Defame the site if you’re a high-trafficked, reputable organization.
  • Use your server to send spam email.
  • Use any sensitive data he's able to access, especially financial data, to his own ends. It's been done.

Unfortunately, we've had a client return to us to rehab a site that was compromised through this vulnerability, suffering at least two of these unpleasantries.

Having said that, given that Drupal 6 was released in February 2008, it is less likely (though not impossible) that such a large vulnerability still exists in the core code of the project. Higher-risk vectors are contributed modules (i.e. plugins/extensions), especially those that haven't been maintained, and certainly custom modules, which we've seen at Savas Labs vary widely in adherence to security best practices depending on the agency and/or developer who wrote them.

Therefore assessing risk factors for a Drupal 6 site as a whole is nuanced, and requires a comprehensive site audit.

More subtle risks

Outdated software tends to present unpredictable obstacles, given that most of the world is no longer using it. While your Drupal 6 site may be fairly static from a code standpoint, the world around it continues to adapt, update, and improve. This is the world of PHP, MySQL (the database software used by most Drupal sites; others use PostgreSQL), and Apache (the web server used for most Drupal sites).

Hosting providers have an interest in keeping their servers and systems up to date for the same reasons we’re discussing here: security, performance, and functionality.

After we had finished a project with a former client, with no maintenance contract in place, they authorized routine server maintenance and promptly rendered their site the fabled White Screen of Death, due to fatal errors caused by a PHP version update incompatible with their code. Though we were happy to step in during the emergency, we appreciated how difficult it was for the client to see it coming.

Much akin to the long-awaited Drupal 8 release, PHP 7 just came out and, like Drupal, boasts functionality and performance improvements. According to a d.o post, Drupal 8 is 100% PHP 7 compatible.

Drupal 6 recommends a decidedly small range of PHP versions – between 5.2.5 and 5.3. This is potentially problematic for Drupal 6 sites, since PHP 5.3 is itself over 6.5 years old. Hosting providers don't like that, and as time goes on there will be upward pressure on the version numbers providers are willing to support.

Good things that can’t (are unlikely to) happen?

Given that exploitative risks vary by installation and implementation of your Drupal 6 site, let's look more definitively at what you won't have access to with outdated software.

Your organization cannot reap the benefits of the modern web.

One example here is the concept of responsive web design. Pulling off a responsive design for Drupal 6 is very difficult; the concept was quite fresh when Drupal 7 was officially released (2011). Even though most Drupal 7 development took place before RWD, the community was able to evolve with it and provide solid offerings for responsive themes and frameworks in the Drupal 7 repository. Drupal 8 was built mobile-first and is therefore responsive out of the box. Another example is the much-improved, native content editing experience in Drupal 8. Content editing improved from Drupal 6 to 7 and made a leap from 7 to 8, with inline editing and WYSIWYG in core.

You expose yourself to a shrinking market of developers that are able to serve you.

small pool

Each of the last two major releases of Drupal entails significant architectural shifts from the former. Requisite skills for Drupal 6, for example, look fairly different from those for Drupal 7. Even greater disparities exist when comparing the object-oriented Drupal 8 to the mostly procedural Drupal 6. We as developers like to build with what's new, and most of the market does as well, so the longer you hold out past EOL, the smaller the pool of accessible talent, which could also drive up contracting costs due to scarcity. Given the limited pool, you may have to settle for an inexperienced developer, which can result in poor code quality and the associated costs that come with it, not least of which is the risk of increased technical debt that you may have to deal with down the line, estimated to cost $3-$5 per line of code written.

We have a client whose undocumented and non-standard code written by several former developers still provides the occasional production surprise and we’ve been working with it for 1.5 years. This has caused our client hundreds of hours of lost productivity spent debugging, commenting and improving legacy code rather than creating new functionality and features for marketing or other business needs.

The opportunity costs of using old software mean a more limited feature set and likely poorer performance.

A diminishing feature set might mean you cannot easily or affordably access a hero-image homepage carousel (all the rage these days), for example. Or it might mean you're unable to provide an editorial workflow for content publication for different roles in your organization. Whatever the need may be, using EOL software ultimately means you will only have access to what was popular while that software was in active development, which is likely many years prior. Upgrading to current and actively developed software means access to what the web currently demands.

And what about site speed; how important is that? According to many different sources, it’s VERY important.

Suffice it (once again) to say that site load speed is critical and says something about your organization's credibility. One example of an advanced web development feature available in Drupal 8 is the impressive BigPipe project, which leverages a performance technique designed at Facebook for the betterment of the Drupal community! It's a game-changer for caching and page responsiveness.

Your site looks/feels outdated.

Let's face(lift) it: we can usually tell when a website is … aging. It's somewhat unavoidable with a Drupal 6 site, as it was likely designed many years ago using practices that were the norm then. Your website is your first and most important means of conveying trust and credibility. Much of the research on users' assessment of credibility from your website references the (ironically poorly designed) work of BJ Fogg of Stanford. Not surprisingly, web design tops the list as the most important credibility marker, and your Drupal 6 design and feature set is likely not going to cut it any longer.

Part 3 and beyond

Now that we have described the risks of operating a Drupal 6 site after EOL, we'll explore your options as dictated by your tolerance for risk, your self-assessed attractiveness as a target (are you a large retailer?), your budget to upgrade, your timeline to upgrade, and other competing business priorities. In a follow-up to that, we'll discuss Drupal 7 vs. Drupal 8 decision-making (or not Drupal at all!). If we make it to part 5, we'll wrap it up with a bang!

Stay tuned

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

Nov 24 2015
Nov 24

This is part 1 of a series investigating what to do with your Drupal 6 site as EOL approaches.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

The issue at hand

As most Drupal 6 site owners are aware, after a prolonged development period, Drupal 8 was officially released (8.0.0) last week on November 19th, 2015 (Dries's birthday), with many a lively party to celebrate (#celebr8d8), like this fancy one in downtown Durham atop the rooftop bar of The Durham Hotel.

Drupal 8.0.0 is a BIG DEAL and generally speaking is great for the community of Drupal site owners and site builders.

However (there’s always a but), with the official release of Drupal 8, support for Drupal 6 will end on February 24th, 2016. Given the U.S. holiday season has begun, there is little productive time remaining to undertake a site upgrade before Drupal 6 End Of Life (EOL). If you are a site owner fortunate enough to have survived a Drupal site upgrade in the past, you are well aware that the upgrade process can be time-intensive for complex sites. It is never as easy as the click of a button. For most Drupal 6 site owners, it is the fact that their sites are so complex that they have avoided going through the upgrade process for as long as possible.

This presents responsible yet practical site owners who don’t have unlimited budgets with difficult decisions, each with associated pros and cons to weigh. In part 1 of this series, we’ll help walk you through the following topics at a high-level, with a follow-up post examining each topic in finer detail.

  • What does Drupal 6 EOL mean for me?
  • What are the risks to not upgrading?
  • What are my options?
  • Should we upgrade to Drupal 7 or Drupal 8?
  • How do we decide what to do?

N.B.: if you don’t fit into the aforementioned category of having budgetary constraints, please contact us immediately. ;)

What does Drupal 6 EOL mean for me?

As any good (and probably the bad, too) Drupal advisor will tell you: it all depends. Helpful, right? But truly, it's necessary to understand the organization and its technical requirements very well to assess the risk of operating a Drupal 6 site after EOL. As an agency that leverages open source technology to build modern web applications, Savas Labs relies daily on the Newtonian

…shoulders of giants…

to perform sophisticated tasks with the click of a button (or more likely a command in the terminal). The Drupal community that we access and contribute to for features is the same one that provides security maintenance. After EOL for Drupal 6, that click-of-a-button access goes away, both for features and, more importantly, for security fixes. In other words, it means (almost) no one is watching Drupal 6 after February 24th, 2016. For sustainability purposes, that huge community (~100,000 active contributors) must use its time and energy to support the newer platforms.

State of Drupal 6 sites in production

Drupal 6 has been around for a long time. As of mid-November 2015 there are at least ~125,000 reported instances (likely an undercount) of Drupal 6 sites in the wild. So you are not alone (…I am here with you…) and there is some comfort in that. If you’re reading this and have not yet begun your upgrade process, it is very likely you will spend at least some time outside of support for your Drupal 6 site. We dive into this at a deeper level in part 2, but some factors worth taking into consideration as you strategize the upgrade are:

Considerations to assess risk
  • How well-known is your organization?
    • Larger organizations with high public profiles are systematically targeted more frequently than smaller, lesser-known organizations.
  • How many contributed modules does your site utilize and how well supported are those modules?
    • Attack vectors that remain for Drupal 6 are likely to be modules that have not received a lot of historical support, but are in some way identifiable to the public when they are in use on a site.
  • How much does your site rely on custom code?
    • Custom code has the advantage of not being publicly known, but the large disadvantage of only being vetted by one site.

What are the risks?

High-level risks, more closely examined in part 2, are as follows, from most severe to least.

  • Complete site compromise and control with consequences dictated by the whim of a hacker.
  • Site incompatibility with mandatory server security upgrades that fall out of sync with Drupal 6 (PHP 7 comes out in late 2015).
  • You do not keep up with modern web development practices. After all, Drupal 6 came out in February 2008 (I just checked), and given that makes it far older than 6 months, in web years it’s ancient.
  • You expose yourself to a shrinking market of developers able to serve you. With each major release, and especially two in a row (7 and 8) with significant architectural changes from their predecessors, skills honed on the current version of the software provide diminishing returns the further back in versions you go.

What are my options?

In considering a Drupal 6 upgrade you have a few simple options.

  • Do nothing, and keep your fingers crossed.
  • Upgrade Drupal 6 core to a supported version (probably Drupal 7) and match existing functionality.
  • Engage a robust redesign/rebuild (Drupal 7 or 8).
    • Simultaneously harden the site to best mitigate attack vulnerabilities as the rebuild may take 6-18 months to complete.
  • Select a different solution than Drupal, and migrate to that.

Drupal 7 or Drupal 8… heck, what about Drupal 9?

This is another one that, I know…shocker, depends. The factors that affect this choice, which we discuss more in part 2, are:

  • Organizational tolerance for risk: Drupal 8 is less tested, and is inherently riskier earlier on in the life cycle of your site.
  • Willingness to support community: In some cases Drupal 8 contributed modules will need extra polish to be up to production snuff.
  • Complexity of site: Drupal 8 core has many more bells and whistles, but the contributed module landscape has a long way to go to catch up to Drupal 7.
  • The future/life of the site: Drupal 8 is much more forward-thinking in its approach, whereas Drupal 7, though well vetted, is over 4 years old.
  • Existing developers’ skill set: Drupal 8’s architecture, object-oriented coding style, and PFE (proudly found elsewhere) approach, which leverages strengths from the rest of the PHP community, all mark substantial shifts from Drupal 7. The skills required to succeed in these two realms therefore differ.
  • Get out of here with that Drupal 9 talk! It’s neither prime nor even!

What is our recommendation?

If you feel lost in these concepts or with answering some of these questions on your own, it’s best that you speak with professionals who have years of experience maintaining and upgrading Drupal sites. The upgrade process is a highly variable one, and is not especially easy to estimate as it is much more nuanced than typical feature development.

When your existing Drupal site reaches EOL, we encourage site owners to look at the process like moving into a new and better home. It’s best to take the time to envision and create what you want in the new space, rather than thoughtlessly replicate what you had in the old. Why make a carbon copy when you had good reasons to make the move after all (even if you were technologically strong-armed by volunteers)? It’s very common to have features and custom development that have outlived their usefulness to your organization’s mission, so it’s a good time to purge. Out with the old, in with the new!

Having said that, the desire to preserve content from the existing site is very common and often necessary. There are advanced migration techniques available from Drupal 6 to Drupal 7 or Drupal 8 that may be entirely separated from the rest of the rebuild, so porting content and matching site functionality can be completely decoupled.

We love talking through this process with site owners. We analyze what makes the most sense for your organization while addressing priorities for both short- and long-term goals. We have been building sites in Drupal 8 since May 2015 and in Drupal 7 since 2010, so we are well versed in the pros and cons of each. Reach out to further discuss, continue on to part 2, and stay tuned for part 3.

Part 1 - overview | Part 2 - the risks | Part 3 - the options | Part 4 - Drupal 7 or 8?

Nov 05 2015
Nov 05

Two weeks ago, I gave a presentation on Drupalized Web Mapping at the North American Cartographic Information Society 2015 conference in Minneapolis, MN. The full set of slides for the talk is online via GitHub Pages, but here I’m going to skip the “what is Drupal” side of the talk and focus on giving more detail on three different recommended workflows for building web maps in Drupal.

Note: At the time of writing, few of the modules mentioned had stable D8 ports. The shift to D8 is likely to change the landscape of mapping modules significantly, so the following advice is really focused on D7 sites (although we’re using the third approach on a D8 site currently).

The use case - why Drupal?

ArcGIS.com, Google My Maps, CartoDB, Mapbox - the list of GUI-based web-mapping tools keeps getting longer. These tools are great for a lot of purposes. But there are a few things which are hard to do easily using most conventional web-based mapping/GIS tools that are easy to do in Drupal:

  • Annotating features with rich text and longer text descriptions, and having that text be easily editable in a WYSIWYG format.
  • Attaching media (pictures, audio and video) to features without having to use a separate external service for storing media. ArcGIS.com and CartoDB both require you to upload photos elsewhere and then copy-and-paste a URL from the photo location, not the simplest workflow.
  • Incorporating search and filtering capabilities.
  • Allowing multiple users to edit and comment on feature attributes, using versioning and permission controls.

This blog post covers how to add a styled web map of node locations to a Drupal website. There are also ways to add more powerful mapping and GIS capabilities to Drupal (for example via Cartaro), but I’m not going to talk about those here. I also assume you’re already familiar with basic Drupal concepts.

Getting and storing geo-data

Geofield is the best module for adding location information to existing content types in a Drupal website. For most basic purposes, the default storage settings will work just fine, although for more complicated geo-data you can also enable PostGIS integration for better performance on the back-end. Geofield depends on the GeoPHP API module, a wrapper for the geoPHP library, which can also integrate with the GEOS PHP extension, if it’s installed on your server, for some moderate performance gains.
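For a sense of what geoPHP is doing under the hood, here’s a minimal sketch of using the library directly (the WKT string is illustrative; this assumes the library has been loaded, e.g. by the GeoPHP API module):

<?php
// Parse a WKT point and re-emit it as GeoJSON.
$point = geoPHP::load('POINT(-78.9005 35.9908)', 'wkt');
print $point->out('json');
// {"type":"Point","coordinates":[-78.9005,35.9908]}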

Once you’ve added a Geofield to your content type to store location info, you’ll need to populate the field. The Geocoder module allows you to auto-populate a geofield by geocoding address information in another text field. This can be a full address or just a city and state name. For production sites, you’ll need to make sure that your site abides by the terms of use of your chosen geocoder (for example, sites using the Google geocoder must also display their data on a Google Map). The MapQuest Open Geocoding service uses exclusively OpenStreetMap data, and as such has a much less restrictive use policy. I haven’t played around with using Mapzen’s open-source Pelias geocoder with Drupal yet, but the Geocoder module has a really simple plugin system that allows you to define your own geocoding endpoint.

For sites where ease-of-use on the data-entry side is a lower priority, you can also just allow direct input of latitude and longitude coordinates via the node edit screen (or pasting in GeoJSON for more complicated geometries). Lat/long coordinates are easy to find via Google Maps, or even just a Google search.

Leaflet module

One of the simplest ways to add a web map to your site is using the Leaflet module, which is a wrapper around the Leaflet.js web mapping framework. With Leaflet installed, you can set the display mode for your geofield to “Leaflet Map”, which will add a locator map to each node page. With Leaflet Views also installed, you can set the display mode for a view to “Leaflet” to output a view as a map - you’ll need to include a geofield as one of the fields in the view before this will work, of course. With this set-up, to add search or filters to the map all you have to do is add them as exposed filters to the view!

Leaflet offers some basic customization options – you can set popup title and content, choose a different point icon image, and choose from a couple of different basemaps. The Leaflet More Maps module also offers some additional basemaps. But there are a lot of limitations on customizing maps made using the Leaflet module. If you want to add multiple layers, use tokens from a field in the view to set the icon image, or add custom behavior overlays to the map, you’ll need to either use OpenLayers or write your own custom code.

OpenLayers module

OpenLayers is a wrapper for the OpenLayers web mapping framework. OpenLayers is a highly object-oriented mapping framework in which all components of the map (markers, popups, map interaction behaviors, data layers, etc.) are modelled using objects and inheritable classes. For example, to design your own popup you theoretically just have to extend the OpenLayers.Popup.FramedCloud class and override some of its attributes or behaviors. In practice, this is more complicated than it sounds, especially because not all the class intricacies are well-documented.

OpenLayers offers much more customization via the various settings GUIs than Leaflet does. I’m not going to get into all the details here, as it’s pretty self-explanatory (there’s a decent slideshare from DrupalCamp Spain as well). One thing you do need to know when using OpenLayers is that you’re going to need to create (at least) two views for each map. Each data layer will have its own view, with format OpenLayers Data Overlay. Then you’ll configure a map object within OpenLayers, but in order to actually display the map you need to create a view with format OpenLayers Map. If you’re using contextual filters, those filters need to be applied (with identical settings) on both views, for the map and the relevant data layer.

OpenLayers does carry more computational and memory overhead than Leaflet. On a recent project, I found that on a production server with the GEOS PHP extension installed, OpenLayers maps would fail to render on the server side once the mapped view reached about 500-750 points. If you’re just displaying individual points, you can get around this performance limitation with the GeoCluster module for D7, which implements server-side clustering, but that module clusters all features at once and does not support, for example, clustering multiple feature layers separately. It also needs some D8 port-related love.

Screenshot of Durham Civil and Human Rights map, showing a detail of Pauli Murray's childhood home.

Views GeoJSON + Custom leaflet.js code

This is a sort of “headless Drupal” approach, and it’s the one we’re using on Savas Labs’ first D8 site, the Durham Civil Rights map. By using the Views GeoJSON module, you can render the output of any view containing location information as a GeoJSON feed (potentially even including exposed filters via the query path). Then, just like any GeoJSON feed, that data can be loaded via AJAX into a Leaflet (or OpenLayers, or other framework) map. Savas Labs’ Anne Tomasevich has a great write-up on just how to do that in Drupal 8 specifically, and I wrote a post a while ago about how to map GeoJSON data in Leaflet more generally. This approach is still something we’re experimenting with, so we’d love to hear your thoughts. One note: with this approach, any user with permission to view the GeoJSON feed can also very easily download the full dataset.

Aug 28 2015
Aug 28

In my previous post I outlined how to build a Sass directory within a custom Drupal theme including Bourbon and Neat.

At this point, we’re ready to write some SCSS within the base, components, and layouts directories. In this post I’ll demonstrate how Savas Labs applies SMACSS principles to organize our custom SCSS. As a reminder, I’ll be linking to our Drupal 8 mapping site as an example throughout, but none of this is Drupal-8-specific.

Drupal-flavored SMACSS

When we left off, our sass directory looked like this:

# Inside the custom theme directory
sass
├── _init.scss
├── base
├── components
├── layouts
├── lib
│   ├── bourbon
│   ├── font-awesome
│   └── neat
└── styles.scss

At this point we’re ready to start styling. Let’s take a look at the three folders that will hold our custom .scss files, which are loosely based on SMACSS. Acquia has a nice writeup of how SMACSS principles can be applied to Drupal, but I like to simplify it even further.

Base

I personally only have three files in the sass/base directory. Don’t forget that we already imported these three partials in styles.scss.

For full examples of each of these files, check out our base directory.

_normalize.scss

This is simply normalize.css renamed as _normalize.scss - remember that CSS is valid SCSS. Thoughtbot recommends using normalize.css as your CSS reset along with Neat. Regardless of which reset you use, include it in base.

_base.scss

This is for HTML element styles only. No layout, no classes, nothing else. In here I’ll apply font styles to the body and headings, link styles to the anchor element, and possibly a few other site-wide styles.
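As a rough sketch of what that looks like in practice (the variable names are my own, of the kind defined in _variables.scss below):

// sass/base/_base.scss: HTML element styles only.
body {
  color: $copy-color;
  font-family: $copy-font;
}

h1,
h2,
h3 {
  font-family: $heading-font;
}

a {
  color: $link-color;
  text-decoration: none;
}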

_variables.scss

This is where I house all of my Sass variables and custom mixins. I typically have sections for colors, fonts, other useful stuff like a standard border radius or spacing between elements, variables for any Refills I’m using, and custom mixins.
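A minimal sketch of what that might contain — names and values here are illustrative, including the $navigation-height and $general-spacing variables used in the layout example further down:

// sass/base/_variables.scss: variables and custom mixins.

// Colors.
$brand-blue: #2a6496;
$copy-color: #333;
$link-color: $brand-blue;

// Fonts.
$heading-font: 'Noto Serif', serif;
$copy-font:    'Open Sans', sans-serif;

// Other useful stuff.
$general-spacing:    1em;
$navigation-height:  60px;
$base-border-radius: 3px;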

I’d definitely recommend including _normalize.scss in base if you’re using Neat, but other than that do what works for you! If you’re following my method, your sass folder should be looking like this:

# Inside the custom theme directory
sass
├── _init.scss
├── base
│   ├── _base.scss
│   ├── _normalize.scss
│   └── _variables.scss
├── components
├── layouts
├── lib
│   ├── bourbon
│   ├── font-awesome
│   └── neat
└── styles.scss

Layouts

This directory holds page-wide layout styles, which means we’ll be making heavy use of the Neat grid here. This is flexible, but I recommend a single .scss partial for each unique template file that represents an entire page. Think about what works best for your site. For the sake of our example, let’s say we’re creating _front-page.scss and _node-page.scss. I like to also create _base.scss for layout styles that apply to all pages.

Remember that these styles only apply to the page’s layout! I occasionally find myself moving styles from the layouts directory to base or components when I realize they don’t only define layout. In these partials, you should be doing a lot of grid work and spacing. This is the entirety of my sass/layouts/_base.scss file on our mapping site:

/**
 * @file
 *
 * Site-wide layout styles.
 */

body {
  @include outer-container();

  main {
    @include span-columns(10);
    @include shift(1);
    @include clearfix;

    h1 {
      margin-top: em($navigation-height) + $general-spacing;
    }
  }
}

We’re almost there:

# Inside the custom theme directory
sass
├── _init.scss
├── base
│   ├── _base.scss
│   ├── _normalize.scss
│   └── _variables.scss
├── components
├── layouts
│   ├── _base.scss
│   ├── _front-page.scss
│   └── _node-page.scss
├── lib
│   ├── bourbon
│   ├── font-awesome
│   └── neat
└── styles.scss

Components

In SMACSS this is called “modules,” but that gets a little confusing in Drupal Land. This is for applying layout and theme styles to smaller chunks of your site, which in Drupal typically means regions. Create a separate partial for each region, or if you have several distinct components within a region, consider a separate partial for each of them.
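To make that concrete, here’s a tiny sketch of what a region partial might hold (the class names are illustrative, and the variables come from _variables.scss):

// components/regions/_header.scss: header region styles.
.region-header {
  background: $brand-blue;
  padding: $general-spacing;

  .site-name {
    color: #fff;
    font-family: $heading-font;
  }
}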

Let’s say we created partials for the header, footer, and sidebar regions, plus one for non-layout node page styles. At this point, our sass directory is looking like this:

# Inside the custom theme directory
sass
├── _init.scss
├── base
│   ├── _base.scss
│   ├── _normalize.scss
│   └── _variables.scss
├── components
│   ├── _node-page.scss
│   └── regions
│       ├── _footer.scss
│       ├── _header.scss
│       └── _sidebar.scss
├── layouts
│   ├── _base.scss
│   ├── _front-page.scss
│   └── _node-page.scss
├── lib
│   ├── bourbon
│   ├── font-awesome
│   └── neat
└── styles.scss

Now we’ve got a nicely organized, easy to navigate Sass directory, ready to hold your styles and compile them into one beautiful CSS file!

But how do we ensure that our one CSS file really is beautiful? Check out my final post about best practices for writing Sass that you can easily maintain or pass off to another developer.


Aug 21 2015
Aug 21

Recently Savas Labs built a custom Drupal 8 theme using Bourbon for mixins and Neat as our grid framework, applying our favorite parts of SMACSS principles and focusing on creating organized, maintainable code. The result? Fast, easy coding and a relatively lightweight theme.

In this three-part series I’ll detail:

  • Setting up Bourbon and Neat within a custom Drupal theme and compiling it all with Compass (this post)
  • Organizing custom SCSS using SMACSS principles
  • Best practices for writing maintainable Sass

I’ll be pulling some examples from our Drupal 8 theme, but none of this is Drupal-8-specific and really, it’s not entirely Drupal-specific. These principles can be applied to any site. Much of the material in these posts is also largely a matter of opinion, so if you disagree or if something else works better for you, sound off about it in the comments!

Definitions

Let’s talk vocab.

Sass

A preprocessor for CSS. Sass offers functionalities not yet available in CSS like variables, rule nesting, and much more.

SCSS

“Sassy CSS.” Early Sass, with the file extension .sass, used a syntax quite different from the CSS syntax we’re already familiar with. Version 3 of Sass brought SCSS, returning to the same syntax as CSS and proving easier to use for most developers. Importantly, this means that valid CSS is also valid SCSS. Files ending with the .scss extension are written in SCSS. I still call them “Sass files”; please don’t be mad at me.

If you found that last paragraph terribly interesting, you should read this.

Partial

An SCSS file that is not directly compiled into a CSS file, but is instead imported into another SCSS file. A partial is denoted by the underscore that begins its filename (e.g. _base.scss).

Bourbon

A Sass mixin library by thoughtbot, inc. Bourbon makes Sass easier and more powerful by providing extremely useful mixins, meaning you don’t have to write them yourself. I particularly enjoy using Bourbon for CSS3 mixins, which allow me to use modern CSS3 modules without having to worry about vendor prefixes.

Neat

A lightweight grid framework written for Sass, also by thoughtbot. The best part of Neat, in my opinion, is the separation of content from layout that comes from defining layout in Sass files rather than template files. This makes your grid system easier to define, update and maintain and keeps your template files cleaner.

Now that we’re all dying to use these awesome things with Drupal, let’s set it up!

Create a Gemfile

We need to install Bourbon and Neat, which are both Ruby gems. We could do a quick gem install bourbon then bourbon install to create a folder of the entire Bourbon library, but this isn’t ideal if we’re ever going to be sharing this code since each developer (and deployment environment) will need to have these gems installed on their machine. Enter Bundler, a package manager for Ruby gems. Per the documentation, we only need to do a few things:

1. Install Bundler, which is a Ruby gem itself

$ gem install bundler

2. Create a Gemfile in our theme directory

$ cd my-custom-theme
$ touch Gemfile

…and list out all the gems our theme requires.

# Gemfile
source "https://rubygems.org"

gem 'compass'
gem 'sass'
gem 'bourbon'        # Bourbon mixin library.
gem 'neat'           # Bourbon Neat grid framework.

See Bundler’s documentation to read about specifying versions within your Gemfile.
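For example, a Gemfile with pinned versions might look like the following (the version numbers are illustrative, not recommendations):

# Gemfile
source "https://rubygems.org"

gem 'compass', '~> 1.0'
gem 'sass',    '~> 3.4'
gem 'bourbon', '~> 4.2'
gem 'neat',    '~> 1.7'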

3. Install all your dependencies.

$ bundle install

4. Commit the Gemfile and Gemfile.lock to ensure that everyone is using the same libraries.

$ git add .
$ git commit -m "Add Gemfile and Gemfile.lock"

Create a Sass directory

Within the theme directory, create a directory called sass. In here, create the following directories:

# Inside the custom theme directory
sass
├── base
├── components
├── layouts
└── lib

Install libraries

Now we can actually install the Bourbon and Neat libraries within our project.

$ cd sass/lib
$ bourbon install
$ neat install
$ git add .
$ git commit -m "Add Bourbon and Neat libraries"

Here’s what we’ve got now:

# Inside the custom theme directory
sass
├── base
├── components
├── layouts
└── lib
    ├── bourbon
    └── neat

Now that we’ve got our libraries set up, we need to actually import them so that they’re compiled into CSS.

Set up styles.scss

Create styles.scss in the sass directory. Inside styles.scss, we’ll import all our SCSS partials. View a working example of this here. I generally organize the imports in this manner:

/**
 * @file
 * Styles are organized using the SMACSS technique. @see http://smacss.com/book/
 */

/* Import Sass mixins, variables, Compass modules, Bourbon and Neat, etc. */
@import "init";
@import "base/variables";

/* HTML element (SMACSS base) rules */
@import "base/normalize";
@import "base/base";

/* Layout rules */
// Import all layout files.

/* Component rules (called 'modules' in SMACSS) */
// Import all component files.

We haven’t created any of these partials yet, but we will.

Some people may want to use Sass globbing here for brevity’s sake. I prefer not to as I find the file to be more readable without it.

Set up _init.scss

Within the sass directory, create a file called _init.scss.

In _init.scss we will (in this order):

  1. Import Bourbon (the mixin library) and Neat Helpers (Neat’s settings and functions)
  2. Set some overrides
  3. Import Neat itself
  4. Import fonts

You can view an example of a full _init.scss file here, but I’ll go through some of the highlights.

1. Import bourbon.scss and neat-helpers.scss

First we import all of Bourbon and Neat’s settings and functions, which are included in neat-helpers.scss.

// Import variables and mixins to be used (Bourbon).
@import "lib/bourbon/bourbon";
@import "lib/neat/neat-helpers";

2. Create overrides

Now we’ll override some of Neat’s settings and create our breakpoints.

// Turn on Neat's visual grid for development.
$visual-grid:       true;
$visual-grid-color: #EEEEEE;

// Set to false if you'd like to remove the responsiveness.
$responsive:    true;

// Set total number of columns in the grid.
$grid-columns:  12;

// Set the max width of the page using the px to em function in Bourbon.
// The first value is the pixel value of the width and the second is the base font size of your theme.
$font-size:     16px;
$max-width-px:  2000px;
$max-width:     em($max-width-px, $font-size);

// Define breakpoints.
// The last argument is the number of columns the grid will have for that screen size.
// We've kept them all equal here.
$mobile:   new-breakpoint(min-width em(320px, $font-size) $grid-columns);
$narrow:   new-breakpoint(min-width em(560px, $font-size) $grid-columns);
$wide:     new-breakpoint(min-width em(851px, $font-size) $grid-columns);
$horizontal-bar-mode: new-breakpoint(min-width em(950px, $font-size) $grid-columns);

3. Import Neat

Now that we’ve completed our settings, we’ll import the entire Neat library and our overrides will apply and cause the grid to function the way we want it to.

// Import grid to be used (Bourbon Neat) now that we've set our overrides.
@import "lib/neat/neat";

4. Import fonts

I like to include my fonts in this file to be consistent about how I’m importing my libraries (e.g. the Font Awesome library, which I’ve included in sass/lib). Some people might move this into a _typography.scss file or something similar, perhaps residing in the base directory. Do what feels right!

// Fonts -----------------------------------------------------------------------

// Noto Serif (headings)
@import url(http://fonts.googleapis.com/css?family=Noto+Serif:400,700);

// Open Sans (copytext)
@import url(http://fonts.googleapis.com/css?family=Open+Sans:400italic,700italic,700,400);

// Font Awesome (icons)
@import 'lib/font-awesome/font-awesome';
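As an aside before we compile: here’s a minimal sketch of how the breakpoints from step 2 get used once Neat is imported. The .sidebar selector and column counts are illustrative; Neat’s media mixin takes one of the new-breakpoint variables we defined.

// Stack the sidebar full-width on small screens, then shrink it
// to 4 of 12 columns once the $narrow breakpoint is reached.
.sidebar {
  @include span-columns(12);

  @include media($narrow) {
    @include span-columns(4);
  }
}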

Compile!

We haven’t written any custom styles yet, but at this point we can compile our SCSS into CSS using Compass (which we included in the Gemfile earlier).

First we need to generate a Compass configuration file using compass config.

$ cd my-custom-theme
$ bundle exec compass config

Why did we use bundle exec? Running a Compass command as a Bundler executable runs the command using the Compass gem defined in our Gemfile. This way, if we decided to define a specific version of Compass within the Gemfile that potentially differs from another version installed on our local machines, we ensure we’re using that specific version every time we run a Compass command.

Now we can compile our Sass. Run this command from the root of your theme directory:

$ bundle exec compass compile

Running this command for the first time does two things:

  1. Creates a stylesheets directory containing styles.css, the compiled version of styles.scss.
  2. Creates a .sass-cache directory

Once we start writing our own Sass, we can have Compass watch for changes and regenerate styles.css as we code:

$ bundle exec compass watch

I usually open a new tab in my terminal and leave this command running as I’m styling.

At this point our Sass directory is looking like this:

# Inside the custom theme directory
sass
├── _init.scss
├── base
├── components
├── layouts
├── lib
│   ├── bourbon
│   ├── font-awesome
│   └── neat
└── styles.scss

Where do we go from here? In my next post, I tackle the base, components, and layouts SMACSS-based directories and the custom .scss files they will hold. In a future post I’ll go through some of Savas Labs’ best practices for writing Sass that can be easily shared amongst team members and maintained in the long run.

Aug 04 2015
Aug 04

The general problem

One of the most embarrassing and potentially costly things we can do as developers is to send emails out to real people unintentionally from a development environment. It happens, and often times we aren’t even aware of it until the damage is done and a background process sends out, say, 11,000 automated emails to existing customers (actually happened to a former employer recently). In the Drupal world, there are myriad ways to attempt to address this problem.

General solutions to the general problem

  • maillog - A Drupal module that logs mails to the database and optionally allows you to “not send” them
  • reroute email - A Drupal module that intercepts email and routes it to a configurable address
  • devel mail - An option of the beloved devel module which writes emails to local files instead of sending
  • mailcatcher (not Drupal-specific) - Configure your local mail server to not send mail through PHP

The ultimate solution to the problem?

Never store real email addresses in your development environment. In the Drupal world, we do that by using the drush sql-sanitize command. With no arguments (how I typically execute it), the command will set all users’ email addresses to a phony address that looks like this: [email protected]. This is a good thing. Then, even in cases in which you do accidentally send out emails in an automated way, often from cron, sending to phony addresses is a livable mistake; no end user receives an email that confuses her or makes her lose confidence in your organization.
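If you want control over the substitute values, drush’s sanitize options let you set the patterns yourself; a quick sketch (the replacement patterns here are illustrative):

# Replace every email with a unique phony address and reset all passwords.
$ drush sql-sanitize --sanitize-email="user+%uid@localhost" --sanitize-password="sanitizedpassword"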

So, in most cases, drush sqlsan (its alias) is enough, and the mail redirection options linked above are additional safety measures. Of course, I’m not writing about most scenarios now, am I? Sadly, I’m not yet aware of a comprehensive solution that ensures no email will be sent from a development environment. Please comment if you know of one!

The specific problem with user_revision module

One pernicious case, in which drush sqlsan is insufficient in sanitizing your database, is when the user_revision module is enabled on a Drupal 7 site, at least without my patch applied. The user_revision module extends the UserController class, which overwrites fields from the “base” table users (in this case) to the “revision” table, user_revision, due to the way that entity_load() works. Therefore, when a user entity is loaded, it receives the mail field from the user_revision table. Without the above patch applied, this table is not affected by drush sqlsan.

How did I discover this?

I discovered this when adding new cron-driven notification functionality to the Tilthy Rich Compost website, which we maintain. We began using the user_revision module in 2013 because canceling users caused us to lose information we still needed. After sending emails to subscribers from my development environment for the 10th time in 2 years, even after sanitizing, I was determined to figure out once and for all what was going on. So, like any deep dive, I fired up the trusty ol’ debugger and discovered the aforementioned culprit.

The solution to the specific problem

After consulting Kosta, we agreed that the solution would be to write a drush hook for the user_revision module. This code would need to sanitize the mail field in the user_revision table and would be invoked when the drush sqlsan command is executed in the presence of the user_revision module. However, to write this code efficiently and effectively, I would need to debug drush commands during execution, which I had never done.
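For reference, the eventual fix takes roughly this shape — a sketch of the approach, not the patch verbatim, and the exact replacement pattern is illustrative:

<?php

/**
 * Implements hook_drush_sql_sync_sanitize().
 *
 * Registers an extra query so `drush sqlsan` also scrubs the mail
 * column in the user_revision table, mirroring what drush already
 * does for the users table.
 */
function user_revision_drush_sql_sync_sanitize($site) {
  $query = "UPDATE user_revision SET mail = CONCAT('user+', uid, '@localhost') WHERE uid > 0;";
  drush_sql_register_post_sync_op('user_revision_mail',
    dt('Sanitize email addresses in the user_revision table.'),
    $query);
}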

How to debug drush (or other CLI scripts) with PHPStorm

Set up xdebug (Mac only)

I first installed xdebug with homebrew via this method. NB: Changing the port to 10000 was necessary for me.

Upgrade to latest drush

I upgraded to the most recent version of drush so that the code I wrote would apply against current drush development.

Getting breakpoints in PHPStorm to listen to drush

Several have blogged about this before, so I’ll just point to theirs. Generally, I followed these instructions, but I trust that my mentor and friend Randy Fay’s article is excellent as well. They all seemed to use xdebug and PHPStorm, and though I use PHPStorm, I had been using ZendDebugger for years with reasonable success. But I had been dissatisfied of late, and the rest of the team uses xdebug anyway, so I figured it would be a safe switch, which proved true. With xdebug properly installed, you can add a line to your .bashrc file to always make PHPStorm ready to listen for drush commands.
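The line in question is the standard xdebug environment variable, something like:

# In ~/.bashrc: tell xdebug to use PHPStorm's IDE key for CLI scripts like drush.
export XDEBUG_CONFIG="idekey=PHPSTORM"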

The solution in action

So now when running drush sqlsan, we can truly feel safe that we won’t send emails to anyone we didn’t mean to. You’re welcome, community :wink:

Will user_revision exist in D8?

It’s not clear, though some think so. Perhaps mature D8 entities and revisioning on all entities will render a contrib module unnecessary. Time will tell.

Jul 06 2015
Jul 06

Adding a map to a Drupal 7 site is made easy by a variety of location storage and map rendering modules. However, at the time of this post, most of these modules don’t have an 8.x branch ready and therefore aren’t usable in Drupal 8. Update: As of October 2016, the Leaflet module has a beta release for Drupal 8. Since Savas Labs has recently taken on a Drupal 8 mapping project, we decided to use the Leaflet library within a custom theme to render our map and the Views GeoJSON module to output our data.

Before we jump in, I have to give major kudos to my coworker Tim Stallmann. This tutorial is based on his excellent post about mapping with Leaflet and GeoJSON, so check that out for a great primer if you’re new to mapping.

Setup

Before we can get into mapping, we’ll need a working Drupal 8 site. Savas Labs has previously gone over setting up a D8 site using Docker and creating a custom theme. That said, you don’t need to use Docker or a custom theme based on Classy to create your map - any Drupal 8 instance with a custom theme will do. In this tutorial, I’ll be referencing our custom theme called Mappy that we created for the Durham Civil Rights Mapping project.

Install contributed modules

First you’ll need to install several contributed modules in your site’s modules/contrib directory:

  • Geofield, which creates a new field type called geofield that we’ll use within a view
  • Views GeoJSON, a style plugin for Views that outputs data in GeoJSON, which can be used by Leaflet to create map points. Update: We’re using the 8.x-1.x-dev version of Views GeoJSON. You can follow the status of the module’s port to Drupal 8 here.

Update: We previously recommended installing the GeoPHP module, which was a dependency of the Geofield module. This dependency has been removed and the GeoPHP library must be managed with Composer.

# Install geophp library
# Run this in the Drupal root
composer require "phayes/geophp"

After running this command you should see the phayes/geophp directory in vendor. You can read more about installing dependencies via Composer on drupal.org.

You’ll also need a few core modules: rest and serialization are dependencies of Views GeoJSON, so they will be enabled when Views GeoJSON is installed.

Add the Leaflet library

Screenshot of the Mappy theme showing Leaflet file locations

Head over to Leaflet’s website and download the latest stable release of the Leaflet library.

Move the Leaflet files into your custom theme directory (see mine for reference). You can put your files wherever you want; you’ll just need to customize the file paths in the libraries file in the next step.

You should also create a custom JavaScript file to hold your map code - ours is called map.js.

Next you’ll need to add the Leaflet library to your theme’s libraries file. In mappy.libraries.yml, shown below, I’ve defined a new library called leaflet and stated the paths to leaflet.css, leaflet.js and my custom JS file map.js.

Note that I’ve listed jQuery as a dependency - in Drupal 8 jQuery is no longer loaded on every page, so it needs to be explicitly included here.

# From mappy.libraries.yml
leaflet:
  css:
    theme:
      css/leaflet.css: {}
  js:
    js/leaflet.js: {}
    js/map.js: {}
  dependencies:
    - core/jquery

Once the library is defined, you need to include it on your page. This can be done globally by including the following in your [theme].info.yml file:

# In mappy.info.yml
libraries:
 - mappy/leaflet

You could also attach the library in a Twig template:


{# In some .html.twig file #}
{{ attach_library('mappy/leaflet') }}


For more methods of attaching assets to pages and elements, check out Drupal.org’s writeup on the matter.

Define a new content type

Now we need a content type that includes a location field.

  1. Navigate to admin/structure/types/add
  2. Give your new content type a name (we called ours “Place”), then click “Save and manage fields.”
  3. Add a new field with the field type Geofield. If Geofield isn’t an option, you should double-check that the Geofield module is installed. Add a label (we used “Location Coordinates”), then click “Save and continue.” Screenshot of geofield creation
  4. On the next page, leave the number of maximum values at 1 and click “Save field settings.”

That’s it! Obviously you can add more fields to your content type if you’d like, but all we need to generate a map marker is the geofield that we created.

Add some content

Next, add a few points by navigating to “node/add/place” (or node/add/whatever your content type is called) and create a few nodes representing different locations. A quick Google search can provide you with the latitude and longitude of any location you’d like to include.

Add a new view

Next we’ll add a view that will output a list of our “place” nodes as GeoJSON thanks to the Views GeoJSON plugin.

  1. Navigate to admin/structure/views/add
  2. Give your new view a name - ours is called “Points.”
  3. Under View Settings, show content of type “Place” (or whatever you named your new content type).
  4. Check the “Provide a REST export” box. Note that this box will only be available if the rest module is installed. Enter a path for your data to be output - we chose “/points”. Click “Save and edit.”
  5. Add the Location Coordinates field and choose GeoJSON as the output format. You can add other fields if you want to, but we’ll definitely need this one.
  6. Under “Format,” click on “Serializer.” Change the style to GeoJSON. When the GeoJSON settings pop up, add the following settings: Screenshot of the rest export settings for the Places view
  7. Under “Pager,” change the number of items to display to 0 (which means unlimited in this case).

For reference, here are the settings for my Places view: Screenshot of settings for the Places view

We’ve just set up a view that outputs GeoJSON data at [site-url]/points. Take a minute to go to that URL and check out your data. In the next step, we’ll use this page to populate our map with points.
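What you see at /points should look roughly like this — trimmed to a single feature, and the property names will match the fields and settings in your view:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {
        "type": "Point",
        "coordinates": [-78.9005222, 35.9908385]
      },
      "properties": {
        "name": "Example Place"
      }
    }
  ]
}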

Create the map div and add a base map

The first thing we need to do is create a div with the id “map” in our template file. Our map is on the front page so I’ve inserted the following into page--front.html.twig. Place this code in the template your map will reside in. Feel free to customize the class, but the ID should remain “map.”

<div id="map" class="map--front"></div>

We also need to define a height and width of the map div. Ours is going to span the entire page background so I’ve included the following in my Sass file:

.map--front {
  // Set these to whatever you want.
  height: 100vh;
  width: 100%;
}

Previously we created a custom JavaScript file to hold our map code. Ours is called map.js and is located in our custom theme’s js directory. In the code below, we’ve added the map itself and set a center point and a zoom level. We’ve centered over our hometown of Durham, NC and selected a zoom level of 12 since all of our map markers are viewable within this region. Check out this explanation of zoom levels, or go for a little trial and error to get the right one for your map.

(function ($) {
  // Create map and set center and zoom.
  var map = L.map('map', { // The `L` stands for the Leaflet library.
    scrollWheelZoom: false,
    center: [35.9908385, -78.9005222],
    zoom: 12
  });
})(jQuery);

Now we need to add a base map. We’re using Positron by CartoDB. We’ll import the tiles and attribution, then add them as a layer to our map.

(function ($) {
  // Add basemap tiles and attribution.
  var baseLayer = L.tileLayer('http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png', {
    attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors, &copy; <a href="http://cartodb.com/attributions">CartoDB</a>'
  });

  // Create map and set center and zoom.
  var map = L.map('map', {
    scrollWheelZoom: false,
    center: [35.9908385, -78.9005222],
    zoom: 12
  });

  // Add basemap to map.
  map.addLayer(baseLayer);
})(jQuery);

Go to your Drupal site, rebuild the cache, and you should see your base map!
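If you have Drush handy, rebuilding the cache in Drupal 8 is a single command:

$ drush cache-rebuild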

Screenshot of the base map

Add our points

Next, we’re going to access the GeoJSON we’re outputting via our view to add points to our map. First, let’s add the path to our marker image.

L.Icon.Default.imagePath = '/themes/custom/mappy/images/leaflet/';

Now we’ll use .getJSON to retrieve our data from the URL “/points”, then trigger the addDataToMap function to create a new layer containing our points via Leaflet’s geoJson function.

// Add points.
  function addDataToMap(data, map) {
    var dataLayer = L.geoJson(data);
    dataLayer.addTo(map);
  }

  $.getJSON('/points', function(data) {
    addDataToMap(data, map);
  });

Refresh - we’ve got points!

Screenshot of the map with markers

The last thing to do is add popups to each point when they’re clicked. We’ll insert this code in the addDataToMap function. If you actually navigate to [site-url]/points, you can inspect your GeoJSON and see which array keys have been assigned to the fields in your content type.

Screenshot of GeoJSON

I want to display the node title in the popup, which I can see is at feature.properties.name.

function addDataToMap(data, map) {
  var dataLayer = L.geoJson(data, {
    onEachFeature: function(feature, layer) {
      var popupText = feature.properties.name;
      layer.bindPopup(popupText);
    }
  });
  dataLayer.addTo(map);
}

Now when I click on a point I get a nice little popup with the node title.

Screenshot of the map with a popup open

Check out the entire map.js file and be sure to visit Savas Labs’ GitHub repository for the Durham Civil Rights Mapping project and the completed site to see a Drupal 8 site in action!

Jun 23 2015
Jun 23

No one likes long forms. They’re overwhelming to look at and it’s easy to lose your place. Multi-step forms are a way to simplify data collection and make your users’ lives easier.

Drupal’s Form API combined with the CTools module provides a solid platform for building a multi-step form. There are many wonderful guides on how to build a CTools multi-step form in Drupal. But as far as I can tell, all of the guides assume that the form will live on a page — which makes sense, as that’s the most common use case.

On a recent project, though, a client asked us to create a multi-step form using only a block, so it could be placed on a page using Panels. It turns out this is pretty straightforward, but as it’s not well documented elsewhere, here’s a quick guide to what you need to do. (Note: I created an example module on GitHub, so please reference that as needed. This post just covers the highlights.)

The main CTools form definition looks something like this:

<?php
$form_info = array(
  'id' => 'quote-form',
  'ajax' => TRUE,
  'path' => 'example-form/%step',
  'show trail' => TRUE,
  'show back' => TRUE,
  'show return' => FALSE,
);

We need to change path to use query parameters for advancing the form. So let’s change path to 'path' => 'example-form?step=%step'. The path example-form could be generated by a View, a node page, a Panel page, etc.
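After that change, the form definition looks like this:

<?php
$form_info = array(
  'id' => 'quote-form',
  'ajax' => TRUE,
  'path' => 'example-form?step=%step',
  'show trail' => TRUE,
  'show back' => TRUE,
  'show return' => FALSE,
);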

Define the block

Next, we need an implementation of hook_block_info() and hook_block_view(). hook_block_info() is pretty unremarkable so I’m not including it here, other than to say you should consider setting cache to DRUPAL_NO_CACHE when declaring your block. Now, on to hook_block_view():

<?php
/**
 * Implements hook_block_view().
 */
function example_block_view($delta = '') {
  $block = array();
  switch ($delta) {
    case 'example_form':
      $block['subject'] = t('Our example form');
      $parameters = drupal_get_query_parameters();
      $next_step = empty($parameters['step']) ? 'step-one' : $parameters['step'];
      $block['content'] = example_ctools_wizard($next_step);
      break;
  }
  return $block;
}

Let’s take a closer look at this. drupal_get_query_parameters() is checking to see if there’s a query parameter for step in the current URL (e.g. http://localhost?step=step-two). If so, we set the $next_step variable to that value; if not, we default to step-one as the starting point for the form. We then pass in the $next_step variable to our example_ctools_wizard() function, which generates the multi-step form.

Send the user on their way

So far, so good. But there’s one problem at this stage. If we try to use the form now, we’ll get errors: clicking “Continue” on the form will take you to a 404 page of http://localhost/example-form%3Fstep%3Dstep-two instead of http://localhost?step=step-two. That’s because CTools runs the path we declared in 'path' => 'example-form?step=%step' through an encoding function.

The workaround is to use our subtask_next callback to redirect our user where we want them to go using drupal_goto():

<?php
/**
 * Callback executed when the 'next' button is clicked.
 */
function example_subtask_next(&$form_state) {
  $values = (array) example_get_page_cache('quote');
  example_set_page_cache('quote', array_merge($values, $form_state['values']));
  // Because we are using query parameters to advance/rewind the form, and
  // Ctools doesn't like query parameters (URL encoding fails), we'll use
  // drupal_goto() to take the user where they need to go.
  $destination = substr($form_state['redirect'][0], strlen('example-form?step='));
  drupal_goto('example-form', array('query' => array('step' => $destination)));
}

The first two lines are caching form values so that as the user goes back and forth between steps on the form, their data is cached. Moving on: remember how CTools is sending us to a 404 page with the encoded value of the path we want to go to? It turns out the un-encoded value is in $form_state['redirect'][0], in the form of example-form?step=step-two.

Since we know the base path, we can use substr() and strlen() to extract the value of step= and then pass that along to drupal_goto(). drupal_goto() bypasses CTools’ own redirection, and thus we are able to avoid the unwanted encoding of the path, and can send our users happily along their way.

Jun 10 2015
Jun 10

Theming in Drupal 8 means a lot of changes for current Drupalers and a lot of awesome stuff for everyone. In this post I’ll cover:

  • What’s changing
  • The positives and negatives of these changes
  • How to create a custom theme in D8
  • Twig basics
  • Twig debugging

What’s new?

Too much to list here, but here are some highlights:

  • Twig, a template engine by SensioLabs, is used inside template files in lieu of PHP
  • Responsive design elements are included by default
  • Breakpoints can be set and used across modules and themes
  • Support for IE8 and below is dropped, meaning jQuery 2.0, HTML5, and CSS3 (including pseudo selectors) can now be used
  • Classy, a new base theme, is introduced
  • CSS: far fewer IDs are used, default classes are no longer in core but are moved to Classy, CSS file structure now uses SMACSS and class names follow the BEM format
  • CSS and JS files are attached to pages differently
  • template.php becomes the slightly better-named [theme-name].theme. Maybe we’ll finally get theme.php in Drupal 9?

Check out Drupal’s change log for a comprehensive list of changes.

Why all the changes?

Though the theming layer in Drupal 8 is quite different from Drupal 7 and will require some relearning, these changes come with great improvements, including:

  • Fewer Drupal-specific conventions and more popular, well-documented frameworks (such as Twig), meaning non-Drupalers can jump in much more quickly. Let’s face it - Drupal 7 theming has a major learning curve, which can keep developers and designers from using Drupal at all.
  • Template files are more secure since they no longer contain PHP code (thanks to Twig). In his D8 theming guide, Sander Tirez offers this nice/scary example of PHP code that could be executed in a Drupal 7 template file:
<?php
// This really shouldn’t be allowed to work, and it won’t in D8.
db_query('DROP TABLE {users}');
?>

  • Even more security: text is automatically escaped in Twig, meaning a lower chance of XSS attacks (see the short Twig sketch after this list).
  • D8 themers don’t need to know PHP to whip up a theme.
  • Separation of logic from appearance, leading to more modular (reusable) code.
  • Speaking of more modular code, Twig introduces template inheritance
  • Lack of browser support for IE8 and below means we get to use HTML5, CSS3, and modern jQuery libraries
  • More semantic CSS class names means leaner CSS and a more readable DOM
  • A general trend towards more extendable, modular, well-organized, better-performing code
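On the escaping point, a tiny illustrative Twig sketch — the variable names here are made up: output is escaped by default, and printing markup requires an explicit opt-out.

{# Printed values are escaped automatically, so markup in `title` is neutralized. #}
<h2>{{ title }}</h2>

{# Printing trusted markup requires an explicit opt-out via the raw filter. #}
<div>{{ body|raw }}</div>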

Okay, are there any disadvantages?

At the time of writing this post, the toughest things about theming in Drupal 8 for me were:

  • Contributed themes and modules not having their 8.x branches ready. So far I haven’t seen any contributed themes that are truly usable with Drupal 8. This will surely change soon, and it’s good motivation to submit patches in the meantime.
  • Lack of documentation online. When building my first D8 site, documentation often didn’t exist for the problem I was having, it existed but was marked as out of date, or it was out of date but NOT marked as such. This was definitely a challenge! I’d recommend taking anything you read with a grain of salt (including this).

Fortunately both of these problems will resolve as Drupal 8 gets closer to release.

Creating a custom theme in Drupal 8

So, now that we’ve covered some reasons that D8 theming will be awesome and we’re feeling motivated to submit some patches and write some documentation, let’s create a custom theme using Classy as a base.

The first thing to note is the different file structure. The core folder now holds all the modules and themes that ship with Drupal, and contributed and custom modules and themes are now found respectively in the modules and themes folders in the Drupal document root (mine is called docroot).

Let’s create a folder for our new theme. Savas Labs is working on a Drupal 8 mapping project, so I’ll use that as an example. Our theme is called “Mappy,” so we’ve created a folder for our theme within themes/custom.

Screenshot of D8 file structure.

The first file we’ll want to create is [theme-name].info.yml, which replaces D7’s [theme-name].info file. I’ve created mappy.info.yml, shown below. If you’re new to YAML, Symfony has a nice writeup on syntax. Pay close attention to the whitespace - for example, a space is required after the colon in key-value pairs.

# mappy.info.yml
name: Mappy
type: theme
description: 'D8 Theme for a basic leaflet site.'
core: 8.x
base theme: classy
libraries:
 - mappy/global-styling
 - mappy/leaflet
regions:
  navbar: 'Top Navigation Bar'
  content: Content
  sidebar: 'Sidebar'
  footer: 'Footer'

Let’s knock out the easy ones:

name: Mappy
type: theme
description: 'D8 Theme for a basic leaflet site.'
core: 8.x

This information tells Drupal that we’re dealing with a Drupal 8 theme and gives Drupal a name and description to display in the admin UI. Note that all of these items are required for your theme to be installable.

regions:
  navbar: 'Top Navigation Bar'
  content: Content # required!
  sidebar: 'Sidebar'
  footer: 'Footer'

This hasn’t changed much from Drupal 7. Don’t forget that the Content region is required. You can also forgo declaring regions entirely if you want to use Drupal’s default regions.

Classy, the new base theme

base theme: classy

Classy is a brand new base theme that ships with Drupal core. All CSS classes were moved out of core template files and into Classy’s as a way to a) contain, minimize, and organize default classes, and b) give developers the option of not using Drupal’s default classes without having to undo core’s markup: one can simply choose not to use Classy as a base theme.

Additionally, Classy’s classes follow the BEM convention, making them less generic and more meaningful. Check out this article for a great introduction to BEM.
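
As a quick illustration of the pattern (the class names below are invented for this example, not taken from Classy): BEM stands for Block, Element, Modifier, and each piece is named explicitly in the class itself:

<!-- block: menu; element: menu__item; modifier: menu__item--active -->
<ul class="menu">
  <li class="menu__item">Home</li>
  <li class="menu__item menu__item--active">About</li>
</ul>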

Libraries

libraries:
  - mappy/global-styling
  - mappy/leaflet

In Drupal 8, assets can be added to pages in a few different ways: globally, per-template, and per-page. We’ve chosen to add our CSS and JS globally since this is a small site and the same relatively lightweight assets are used on almost every page.

In the mappy.info.yml file, I’ve listed two libraries. These correspond to items in my mappy.libraries.yml file, which lives in the root of my theme directory. No matter how you’re including CSS or JS files on a page, you’ll need to define them in your [theme-name].libraries.yml file.

# mappy.libraries.yml
global-styling:
  css:
    theme:
      css/styles.css: {}

leaflet:
  css:
    theme:
      css/leaflet.css: {}
  js:
    js/leaflet.js: {}
    js/map.js: {}
  dependencies:
    - core/jquery

As you may have guessed, global-styling is a library that applies site-wide styles. leaflet is the leaflet library, which consists of leaflet.css and leaflet.js, plus our custom file map.js. jQuery is no longer loaded on every page in Drupal 8, so we have to explicitly include it when it’s required.

By listing these two libraries in mappy.info.yml we ensure that these assets will be included on every page of our site. However, this is typically not the best practice for larger sites since these files can seriously affect performance. This page on Drupal.org details how to attach assets to pages via hooks so that CSS and JS files are only loaded where they’re needed.
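
As an alternative to hooks, a library can also be attached from an individual Twig template with the attach_library() function. A minimal sketch, assuming a hypothetical page--map.html.twig template for the one page that actually needs Leaflet:

{# In page--map.html.twig: load the Leaflet assets for this template only. #}
{{ attach_library('mappy/leaflet') }}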

Breakpoints

Another new YAML file, [theme-name].breakpoints.yml, allows developers to create standard breakpoints to be used by modules and themes across the site. You can set custom breakpoints by defining them in this file. Below is our breakpoints file, which also resides in the root of our theme. Note that we simply adapted the breakpoints file from the Bartik theme.

# mappy.breakpoints.yml
mappy.mobile:
  label: mobile
  mediaQuery: '(min-width: 0px)'
  weight: 2
  multipliers:
    - 1x
mappy.narrow:
  label: narrow
  mediaQuery: 'all and (min-width: 560px) and (max-width: 850px)'
  weight: 1
  multipliers:
    - 1x
mappy.wide:
  label: wide
  mediaQuery: 'all and (min-width: 851px)'
  weight: 0
  multipliers:
    - 1x

Important tip: Once you add a breakpoints file, you’ll need to uninstall and reinstall your theme to expose these breakpoints in the admin UI.

With these files set up, you now have a working custom theme!

Creating template files with Twig

In our custom theme’s current state, we’re using Classy’s template files as-is. If we want to customize any of these templates, we need to override them with Twig files located in our custom theme’s templates directory.

Twig is a template engine with syntax similar to Django, Jinja, and Liquid. It simplifies template creation with clean syntax and useful built-in filters, functions, and tags. In a Drupal template file (now with the extension .html.twig), anything between {{ ... }} or {% ... %} or {# ... #} is Twig.

  • {{ These }} are for printing content, either explicitly or via functions
  • {% These %} are for executing statements
  • {# These #} are for comments

Printing variables and regions

In Drupal 7 we render content like so:

<?php print render($page['sidebar']); ?>

Printing variables using Twig in D8 is as easy as including them in the double curly brace delimiter.

// In page--front.html.twig
// Print the sidebar region.

{{ page.sidebar }}


…unless there are special characters in the variable name. If that’s the case and you see an error when using the syntax above, you can use Twig’s subscript syntax, which should look pretty familiar to Drupalers:

// In page--front.html.twig
// Print the page type.

{{ page['#type'] }}


The subscript syntax will be especially useful when debugging. The Drupal core base themes include lists of available variables and regions in the DocBlock of their template files, or you can print variables to the page via Twig’s debug mode (more on that below) to see what’s available to you.

Filters and functions

Twig comes with many built-in filters that variables are passed to via the pipe character. These filters handle many of the tasks that PHP functions handled in previous Drupal versions. One example is the date filter:

// Format the post date.

{{ post.published|date("Y-m-d") }}


There are also Drupal-specific Twig filters, such as t which runs the string through the t() function.

// Run an ARIA label through t()

<nav class="tabs" role="navigation" aria-label="{{ 'Tabs'|t }}">


By the way, ARIA labels are new in Drupal 8 too!

In addition to filters, Twig provides a range of functions that are also used within the double curly brace delimiters.
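
Two Drupal-specific examples are path() and url(), which generate URLs from route names (the node variable below is an assumption; it is available in node templates):

{# Generate an absolute URL from a route name. #}
<a href="{{ url('<front>') }}">Home</a>

{# path() returns a relative path; here, a link to a node's canonical page. #}
<a href="{{ path('entity.node.canonical', {'node': node.id}) }}">Read more</a>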

Tags

Control flow and other tags are also supported in Twig. One of my favorite things about templating languages is how easy it is to execute if statements and for loops. Savas Labs uses Jekyll for our company website, and the Liquid templating language makes it easy to loop through a list of data points, blog posts, projects, etc. and print them to a page rather than writing out all of the HTML. In Drupal, we’ll use the if statement quite often.

// From Bartik's page.html.twig
// If there are tabs, output them.

{% if tabs %}
  <nav class="tabs" role="navigation" aria-label="{{ 'Tabs'|t }}">
    {{ tabs }}
  </nav>
{% endif %}


Another useful tag is set, which allows you to set and use variables throughout the template. In the following example, the variable heading_id is set and then used as the aria-labelledby attribute. Note that the Twig concatenation character ~ is used, and the string ‘-menu’ is passed through the clean_id filter.

// From Classy's block--system-menu-block.html.twig

{% set heading_id = attributes.id ~ '-menu'|clean_id %}
<nav{{ attributes.addClass(classes) }} role="navigation" aria-labelledby="{{ heading_id }}">


Coding standards

Since this is new to some Drupalers, take a moment to check out the coding standards for Twig.

Debugging with Twig

Twig comes with a highly useful debug feature that outputs helpful HTML comments and allows you to code without having to clear the cache constantly, but it doesn’t work out of the box. We’re going to turn on that feature and disable the several layers of caching that otherwise require developers to clear the cache every time they make a change in a template file.

To enable debug mode and turn off caching, we need to do 3 things:

  1. Turn on Twig’s debug mode
  2. Turn on Twig auto reload, meaning that Twig templates are automatically recompiled when the source code is changed
  3. Disable Drupal’s render cache

Note that one thing we do NOT need to do, surprisingly, is turn off Twig caching - turning on auto reload is enough.

If you open default.services.yml, located in sites/default, you’ll see some twig.config options where you can enable Twig debugging and auto reload. I’m going to use this syntax, but in a different file, because I’m using a local settings file.

I created settings.local.php by making a copy of example.settings.local.php in sites, moving it to sites/default, and renaming it. I then opened settings.local.php and customized the $databases['default']['default'] array.
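
For reference, the customization looks roughly like this; every value below is a placeholder for your own local MySQL credentials, not a value required by this setup:

<?php
// In settings.local.php: point Drupal at your local database.
// All credentials here are placeholders.
$databases['default']['default'] = array(
  'database' => 'drupal8',
  'username' => 'root',
  'password' => 'root',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
);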

To get Drupal to recognize my local settings file, I opened settings.php and uncommented the last 3 lines:

<?php
if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}

In settings.local.php we’ll see:

<?php
/**
 * Enable local development services.
 */
$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';

This means we need to head over to sites and edit development.services.yml to change our local development services. I added these lines to the file to enable debug mode and auto reload:

parameters:
  twig.config:
    debug: true
    auto-reload: true

Great, we’ve completed steps 1 and 2. Fun fact: step 3 is already complete too! In settings.local.php:

<?php
/**
 * Disable the render cache (this includes the page cache).
 *
 * This setting disables the render cache by using the Null cache back-end
 * defined by the development.services.yml file above.
 *
 * Do not use this setting until after the site is installed.
 */
$settings['cache']['bins']['render'] = 'cache.backend.null';

So by using the local settings file we’ve already disabled the render cache.

Now, reload your site and you should see HTML comments in your browser’s code inspector with lots of helpful info: which theme hook is being implemented, theme hook suggestions (i.e. how to override the current theme hook), and which template file is being output. You can also make changes to your source code and simply refresh the page to see your changes rather than constantly clearing the cache.
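
The debug comments look roughly like this (abridged; the exact hook and suggestions depend on what is being rendered):

<!-- THEME DEBUG -->
<!-- THEME HOOK: 'page' -->
<!-- FILE NAME SUGGESTIONS:
   * page--front.html.twig
   x page.html.twig
-->
<!-- BEGIN OUTPUT from 'core/themes/classy/templates/layout/page.html.twig' -->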

Where my variables at?

One useful function that comes with Twig is dump(). This function works once you’ve enabled Twig’s debug mode and can be entered into any template file.

// Print out all variables on the page.

{{ dump() }}


// Print the page's base path.

{{ dump(base_path) }}


dump() is great, but it outputs a rather unwieldy array.

Screenshot of dump function output.

Enter the beloved Devel module and the new Devel Kint module. Kint is to Drupal 8 what krumo was to Drupal 7. Once Devel and Devel Kint are installed, you can use kint() in place of dump() for a nice expandable array.

// In page--front.html.twig
// Print out all variables on the page.

{{ kint() }}


Screenshot of kint function output.

Ahh, much better!

Further reading:

  • Start with Drupal.org’s theming guide
  • Drupalize Me’s post about debugging Twig has some detailed information about dump(), devel and kint. Be aware that some of the information in that post on configuring Twig is out of date.
Apr 23 2015
Apr 23

In preparation for a code sprint our team is organizing to port Views GeoJSON to Drupal 8, we decided it would be a great opportunity to standardize our Drupal 8 local development environments. To ease this process, we ended up using Docker, Docker Compose, and Bowline.

In this post I’ll give a brief overview of these awesome tools, and explain how we set them up on OS X to create identical Drupal 8 development environments on each team member’s computer.

Background

There are a lot of different ways to setup a local development environment, and there are usually many challenges along the way. One problem that often arises is that a developer’s local environment differs from their co-workers and/or their staging or production environments. For example, maybe you’re running PHP 5.6, your colleague is running PHP 5.5, and production is running PHP 5.4. This can cause issues when you share or deploy code that works in one environment but not in another. Using Docker, Docker Compose, and Bowline we can remove this pain point by ensuring that all of the environments are the same.

First off what are Docker, Docker Compose, and Bowline?

  • Docker is a platform for running applications inside lightweight, portable containers, so an application and its dependencies behave the same on any host.
  • Docker Compose (the successor to Fig) defines and runs multi-container Docker applications from a single YAML file; for a Drupal site, that typically means a web container and a database container.
  • Bowline is a set of scripts and configuration that ties Docker and Docker Compose together for Drupal projects, building the containers and wiring up tools like Drush.

By using these tools, you can ensure that each member of your team has the same local setup. That way, if code works in one environment, then it works in all of them.

Now I’ll go through the steps I followed to set everything up on my Mac. For this tutorial I’ll be showing you how to set up a fresh Drupal 8 install, however you can also use these tools on new or existing Drupal 6 and 7 projects.

Initial Setup

Install Docker on your machine

Unfortunately, you can’t run Docker natively in OS X, as explained in the Docker documentation:

Instead, you must install the Boot2Docker application. The application includes a VirtualBox Virtual Machine (VM), Docker itself, and the Boot2Docker management tool. The Boot2Docker management tool is a lightweight Linux virtual machine made specifically to run the Docker daemon on Mac OS X.

To install Boot2Docker, I followed the Docker Mac installation instructions. First, I installed Boot2Docker from boot2docker/osx-installer as explained in that tutorial. The docker and boot2docker binaries are installed in /usr/local/bin, which you can access from your terminal. I then followed the instructions for starting Boot2Docker from the command line, running the following commands:

Set up Boot2Docker; this only needs to be run once during initial setup:

$ boot2docker init

Start the boot2docker application:

$ boot2docker start

Set the environment variables for the Docker client so that it can access Docker running on the Boot2Docker virtual machine. Note that this needs to be run for each terminal window or tab you open:

$ eval "$(boot2docker shellinit)"

Verify that boot2docker is running:

$ boot2docker status

Verify that the Docker client environment is initialized:

$ docker version

Install Docker Compose

To install Docker Compose I followed the Docker Compose installation instructions, running the following commands:

$ curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

Verify the installation of Docker Compose:

$ docker-compose --version

Optionally, you can install command completion for Docker Compose for the bash shell by following these steps; note that on OS X you will probably have to modify the bash_completion.d path as follows:

$ curl -L https://raw.githubusercontent.com/docker/compose/1.2.0/contrib/completion/bash/docker-compose > /usr/local/etc/bash_completion.d/docker-compose

Install Bowline and set up the Drupal 8 Project

The Bowline project readme provides installation and setup instructions, but I needed to make some modifications; for example, Fig has been deprecated in favor of Docker Compose. Specifically, I did the following:

Pull the Docker images:

$ docker pull davenuman/bowline-web-php
$ docker pull mysql:5.5

Navigate to your sites directory and create a new project for your Drupal 8 site (for me, that directory is /Users/dan/Sites):

$ mkdir drupal8
$ cd drupal8

Setup bowline:

$ git init
$ git remote add bowline git@github.com:davenuman/bowline.git
$ git remote update
$ git checkout bowline/master .
$ git add . && git status
$ git rm --cached readme.md
$ rm readme.md
$ git commit -m 'Starting with bowline code'

Activate bowline and build the containers:

$ . bin/activate
$ build

Check that the containers are running:
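
$ bowline

(The bowline command becomes available once you activate the environment; as we’ll see below, it reports on your containers, including the web container’s IP address, so it’s a quick way to confirm everything is up.)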

Download Drupal 8 to your project folder and rename the folder as docroot. You can manually download it here, or you can use wget as follows:

$ wget http://ftp.drupal.org/files/projects/drupal-8.0.0-beta9.tar.gz
$ tar -xzvf drupal-8.0.0-beta9.tar.gz
$ mv drupal-8.0.0-beta9 docroot
$ rm drupal-8.0.0-beta9.tar.gz

Use Composer (not to be confused with Docker Compose) to install Drush 7, since Drupal 8 does not work with older versions of Drush:

$ composer require drush/drush:dev-master

Use Bowline to initialize the site settings, then use Drush to install Drupal:

$ settings_init
$ drush si --sites-subdir=default

Your Drupal 8 site is now installed and running on the web and MySQL Docker containers you created with Bowline. However, a few extra steps are required so that you can access your site on the Apache server that is running within the web container.

Run the bowline command to get the IP address of your web container; it should be something like http://172.17.0.2/

Run the boot2docker ip command to get the IP address of the boot2docker virtual machine; it should be something like 192.168.59.103

We now manually add a subnet route so that traffic to the Docker containers running inside the boot2docker virtual machine is routed correctly:

$ sudo route -n add 172.0.0.0/8 192.168.59.103

You should now be able to access your Drupal 8 site at the web IP address reported by the bowline command - for example, http://172.17.0.2/ for my setup.

One of the great things about Bowline is that it sets up Drush to work with your containers. You can use Drush to get a one-time login link for the admin user by simply running:

$ drush uli

You can also debug the site using XDebug. I will explain how to set this up in a future blog post.

When you’re done

Bowline, Boot2Docker, all Docker containers, and the subnet route will all stop when you restart your machine, but you can also shut them down manually as follows:

Deactivate bowline:

$ deactivate

Stop all docker containers:

$ docker stop $(docker ps -a -q)

Stop the boot2docker virtual machine:

$ boot2docker stop

Remove the static route:

$ sudo route -n delete 172.0.0.0/8 192.168.59.103

Running in the future

Next time you start your machine and want to fire up the development environment, you’ll need to run the following commands from the project root.

$ boot2docker start
$ eval "$(boot2docker shellinit)"
$ . bin/activate
$ build
$ sudo route -n add 172.0.0.0/8 192.168.59.103
$ bowline

Troubleshooting

For each new terminal window or tab you open, you have to set the Boot2Docker environment variables. If you see an error message like “Couldn’t connect to Docker daemon - you might need to run `boot2docker up`.”, you may need to run:

$ eval "$(boot2docker shellinit)"

If you move to a different project within the same terminal window or tab, make sure you deactivate bowline:

$ deactivate

If drush status is working, but the site is not loading, double check that you are rerouting to the Boot2Docker IP address:

$ sudo route -n add 172.0.0.0/8 192.168.59.103

Versions used in this tutorial

  • Docker 1.6.0
  • Boot2Docker 1.6.0
  • Docker Compose 1.2.0
  • Drupal 8.0.0-beta9
  • Drush 7.0-dev
Apr 07 2015
Apr 07

There are lots of free and open source software (FOSS) projects out in the world. Why do some thrive while others founder? I would argue that a strong community is needed to push FOSS projects forward. And to help that happen, organization is key.

Drupal is a thriving FOSS project, and one of the main organizations moving it forward is the Drupal Association. Today, Savas Labs is proud to have become a supporting member of the Drupal Association. In their words:

The Drupal Association is dedicated to helping the open-source Drupal CMS project flourish. We help the Drupal community with funding, infrastructure, education, promotion, distribution and online collaboration at Drupal.org.

By supporting the Drupal Association, we’re excited to help sustain the growing network of Drupal developers around the world. But we also recognize that making a yearly payment to an organization is just the starting point.

Keep an eye on our blog as we’ve got some exciting plans to strengthen the Drupal community here in Durham, North Carolina, with trainings, community code sprints, and mentoring opportunities.
