Feb 11 2021

Last week one of our clients asked me how they should think about the myriad options for website hosting, and it inspired me to share a few thoughts.

The different kinds of hosting

I think about hosting for WordPress and Drupal websites as falling into one of three groups. We’re going to compare the options using an example of a fairly common size of website — one with traffic (as reported by Google Analytics) in the range of 50,000–100,000 visitors per month. Adjust accordingly for your situation. 

  • “Low cost/low frills” hosting — Inexpensive website hosting would cost in the range of $50–$1,000/yr for a site with our example amount of traffic. Examples of lower cost hosts include GoDaddy, Bluehost, etc. Though inexpensive, these kinds of hosts have none of the infrastructure that’s needed to do ongoing web development in a safe/controlled way, such as the ability to spin up a copy of the website at the click of a button, make a change, get approval from stakeholders, then deploy to the live site. Also, if you get a traffic spike, you will likely see much slower page loads.
  • “Unmanaged”, “Bare metal”, or “DIY” hosting — Our example website will likely cost in the range of $500–$2,500/yr. Examples of this type of hosting include AWS, Rackspace, Linode, etc., or just a computer in your closet. Here you get a server, but that’s it. You have to set up all the software, put security measures in place, and set up the workflow so that you can get stuff done. Then it’s your responsibility to keep all of that maintained year over year, perhaps even to install and maintain firewalls for security purposes.
  • “Serverless” hosting¹ — It’s not that there aren’t servers; they’re just transparent to you. Our example website would likely cost in the range of $2,500–$5,000/yr. Examples of this kind of hosting: Pantheon, WP Engine, Acquia, Platform.sh. These hosts are very specialized for WordPress and/or Drupal websites. You just plug in your code and database, and you’re off. Because they’re highly specialized, they have all the security/performance/workflow/operations infrastructure in place that 90% of Drupal/WordPress websites need.

How to decide?

I recommend two guiding principles when it comes to these kinds of decisions:

  1. The cost of services (like hosting) is much cheaper than the cost of people, maybe even 10x cheaper. Whether that’s the time your staff spends maintaining a server or, if you’re working with an agency like Four Kitchens, your monthly subscription with us, the same principle applies. So saving $1,000/yr on hosting is only worth it if it costs less than a handful of hours of someone’s time per year.
  2. Prioritize putting as much of your budget towards advancing your organization’s mission as possible. If two options have a similar cost, we should go with the option that will burn fewer brain cells doing “maintenance” and other manual tasks, and instead choose the option where we can spend more of our time thinking strategically and advancing the mission.

This means that you should probably disregard the “unmanaged/bare metal/DIY” group. Whoever manages the site will spend too much time running security updates and doing other maintenance and monitoring tasks.

We also encourage you to disregard the “low cost” group. Your team will waste too much time tripping over the limitations, and cleaning up mistakes that could be prevented on a more robust platform.

So that leaves the “serverless” group. With these, you’ll get the tools that will help streamline every change made to your website. Many of the rote tasks are also taken care of as part of the package. 

Doing vs. Thinking

It’s easy to get caught up in doing stuff. And it’s easy to make little decisions over time that mean you spend all your days just trying to keep up with the doing. The decision you make about hosting is one way that you can get things back on track to be more focused on the strategy of how to make your website better.

¹ The more technical members of the audience will know that “serverless” is technically a bit different.  You’d instead call this “platform-as-a-service” or “infrastructure-as-a-service”. But we said we’d avoid buzzwords.


Jan 13 2021

namespace Drupal\mymodule;

use Drupal\Core\Config\ConfigFactoryInterface;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\PathProcessor\InboundPathProcessorInterface;
use Drupal\Core\PathProcessor\OutboundPathProcessorInterface;
use Drupal\Core\Render\BubbleableMetadata;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Get the active subsite from the request based on URL.
 */
class SubsiteManager implements EventSubscriberInterface, OutboundPathProcessorInterface, InboundPathProcessorInterface {

  const SUBSITE_ENTITY_TYPE = 'node';

  const SUBSITE_PATH_FIELD = 'field_path';

  /**
   * The entity type manager.
   *
   * @var \Drupal\Core\Entity\EntityTypeManagerInterface
   */
  protected $entityTypeManager;

  /**
   * The configuration factory.
   *
   * @var \Drupal\Core\Config\ConfigFactoryInterface
   */
  protected $configFactory;

  /**
   * The active subsite.
   *
   * @var \Drupal\Core\Entity\EntityInterface
   */
  protected $subsite;

  /**
   * Construct the SubsiteManager.
   *
   * @param \Drupal\Core\Entity\EntityTypeManagerInterface $entityTypeManager
   *   The entity type manager.
   * @param \Drupal\Core\Config\ConfigFactoryInterface $configFactory
   *   The configuration factory.
   */
  public function __construct(EntityTypeManagerInterface $entityTypeManager, ConfigFactoryInterface $configFactory) {
    $this->entityTypeManager = $entityTypeManager;
    $this->configFactory = $configFactory;
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return [
      // Runs as soon as possible in the request but after
      // LanguageRequestSubscriber (priority 255) because we want the language
      // prefix to come before the subsite prefix.
      KernelEvents::REQUEST => ['onKernelRequestSetSubsite', 254],
    ];
  }

  /**
   * Get the active subsite from the request.
   *
   * @param \Symfony\Component\HttpKernel\Event\GetResponseEvent $event
   *   The event to process.
   */
  public function onKernelRequestSetSubsite(GetResponseEvent $event) {
    // Set the default subsite as fallback.
    if ($default_subsite = $this->getDefaultSubsite()) {
      $this->setCurrentSubsite($default_subsite);
    }

    $request = $event->getRequest();
    $request_path = urldecode(trim($request->getPathInfo(), '/'));

    // First strip the language when its processor is available. This is only
    // the case when more than one language is installed.
    if (\Drupal::hasService('path_processor_language')) {
      $request_path = \Drupal::service('path_processor_language')->processInbound($request_path, $request);
    }

    // Get the first part of the path to check if it matches a subsite page.
    $path_args = array_filter(explode('/', $request_path));
    $prefix = array_shift($path_args);

    if (!$prefix) {
      return;
    }

    // If the prefix matches a subsite page, set it in the property.
    $subsites = $this->entityTypeManager->getStorage(static::SUBSITE_ENTITY_TYPE)->loadByProperties([
      'type' => 'subsite',
      static::SUBSITE_PATH_FIELD => $prefix,
    ]);
    $subsite = reset($subsites);
    if ($subsite) {
      $this->setCurrentSubsite($subsite);
    }
  }

  /**
   * {@inheritdoc}
   */
  public function processInbound($path, Request $request) {
    // Get the first part of the path to check if it matches a subsite page.
    $path_args = explode('/', trim($path, '/'));
    $prefix = array_shift($path_args);

    // If we don't have a prefix, or if the prefix is the only thing in the
    // path, keep the current path as it is. This is a subsite homepage.
    if (!$prefix || $path === '/' . $prefix) {
      return $path;
    }

    // Only when dealing with a subsite page, change the request.
    $subsites = $this->entityTypeManager->getStorage(static::SUBSITE_ENTITY_TYPE)->loadByProperties([
      'type' => 'subsite',
      static::SUBSITE_PATH_FIELD => $prefix,
    ]);
    $subsite = reset($subsites);
    if (!$subsite) {
      return $path;
    }

    return '/' . implode('/', $path_args);
  }

  /**
   * {@inheritdoc}
   */
  public function processOutbound($path, &$options = [], Request $request = NULL, BubbleableMetadata $bubbleable_metadata = NULL) {
    if (!empty($options['subsite'])) {
      $subsite = $this->entityTypeManager->getStorage(static::SUBSITE_ENTITY_TYPE)->load($options['subsite']);
      $options['prefix'] = $this->addPrefix($options['prefix'], $subsite->get(static::SUBSITE_PATH_FIELD)->value);
    }
    $subsite = $this->getCurrentSubsite();
    if ($subsite && empty($options['subsite']) && !$this->isDefaultSubsite() && $path !== '/' . $subsite->get(static::SUBSITE_PATH_FIELD)->value) {
      $options['subsite'] = $subsite->id();
      $options['prefix'] = $this->addPrefix($options['prefix'], $subsite->get(static::SUBSITE_PATH_FIELD)->value);
    }
    return $path;
  }

  /**
   * Create the updated path prefix.
   *
   * @param string $existing_prefix
   *   The existing path prefix.
   * @param string $subsite_prefix
   *   The subsite path prefix.
   *
   * @return string
   *   The combined path prefix.
   */
  protected function addPrefix($existing_prefix, $subsite_prefix) {
    $existing_prefix = trim($existing_prefix, '/');
    $subsite_prefix = trim($subsite_prefix, '/');
    $combined_prefixes = array_filter([$existing_prefix, $subsite_prefix]);
    $prefix = implode('/', $combined_prefixes) . '/';
    return $prefix !== '/' ? $prefix : '';
  }

  /**
   * Set the current subsite.
   *
   * @param \Drupal\Core\Entity\EntityInterface $subsite
   *   The current subsite.
   */
  public function setCurrentSubsite(EntityInterface $subsite) {
    $this->subsite = $subsite;
  }

  /**
   * Get the current subsite.
   *
   * @return \Drupal\Core\Entity\EntityInterface
   *   The current subsite.
   */
  public function getCurrentSubsite() {
    return $this->subsite;
  }

  /**
   * Get the default subsite.
   */
  public function getDefaultSubsite() {
    // Fetch the default subsite based on a config page.
    $default_subsite_id = $this->configFactory->get('mymodule.settings')->get('default');
    return $default_subsite_id ? $this->entityTypeManager->getStorage(static::SUBSITE_ENTITY_TYPE)->load($default_subsite_id) : NULL;
  }

  /**
   * Check if the current subsite is the default subsite.
   */
  public function isDefaultSubsite() {
    return $this->subsite && $this->getDefaultSubsite()->id() === $this->subsite->id();
  }

}
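For this class to take effect, it also needs to be registered as a service. A minimal sketch of what that registration might look like in mymodule.services.yml (the service name and priorities here are assumptions for illustration, not from the original post):

mymodule.subsite_manager:
  class: Drupal\mymodule\SubsiteManager
  # Drupal's entity type manager and config factory service IDs.
  arguments: ['@entity_type.manager', '@config.factory']
  tags:
    # Registers getSubscribedEvents() with the event dispatcher.
    - { name: event_subscriber }
    # Registers processInbound()/processOutbound() with the path processor
    # manager (priorities are illustrative).
    - { name: path_processor_inbound, priority: 300 }
    - { name: path_processor_outbound, priority: 300 }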

Jan 04 2021
Sean B · 4 min read

Two weeks ago I tweeted about updating a website with 123 contrib and 59 custom modules to Drupal 9. When I sent that tweet I was just starting the update and all was going pretty smoothly. After using the awesome Upgrade Status module, all the contributed modules and our custom modules showed they were ready. Since our team believes waiting only makes it harder to keep up, we decided to go for Drupal 9.1.0 while we were at it.

I want to start by saying that we have a fair amount of test coverage for our custom code, and a lot of functional tests as well. I’m not sure how much time this saved, but even with the test coverage some issues were tricky to find. That is also the reason I’m writing this: partially to help teams that do not have the luxury of extensive test coverage, partially as a reminder that test coverage will eventually save time!

Updating all the modules using Composer was a little tricky. The site contains a little over 50 core patches. Luckily most still applied, but 17 of them had to be updated or rerolled. We also had to update PHPUnit, and since we are using brianium/paratest we had to update to PHPUnit 9 to make all dependencies work. After we had all the code updated, the fun started.

The first thing that needs to be done to get a running site is to make sure all service dependencies are correct. The deprecated entity.manager and path.alias_manager were the ones we found most often.

The Path Alias core subsystem has been moved to the “path_alias” module

https://www.drupal.org/node/3092086

\Drupal\Core\Path\AliasManagerInterface → \Drupal\path_alias\AliasManagerInterface
@path.alias_manager → @path_alias.manager

EntityManager has been split into 11 classes

https://www.drupal.org/node/2549139
We found quite a few places where we had overridden or extended core classes that still injected the old entity manager. If you find references to the old entity manager in your code, there is a good chance you need to replace it with the entity type manager. In some cases, though, you need other services as well, so pay close attention to whether the constructor has changed.

entity.manager → entity_type.manager
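As a hedged illustration of what such a constructor fix looked like (the class and property names here are ours, purely for illustration):

use Drupal\Core\Entity\EntityTypeManagerInterface;

// Before: __construct(EntityManagerInterface $entity_manager) injected the
// deprecated service.
// After: inject the entity type manager instead. Some constructors also
// gained additional services, so compare against the updated core class.
public function __construct(EntityTypeManagerInterface $entity_type_manager) {
  $this->entityTypeManager = $entity_type_manager;
}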

Some of the classes we had extended/overridden and constructors we had to change were:

  • NodeController
  • CommentDefaultFormatter
  • TermSelection
  • ContentEntityNormalizer

Now that we had a running site, some of the blocks wouldn’t show up. We traced this to the following issues.

Plugins now use the ‘context_definitions’ key to define their contexts

https://www.drupal.org/node/3016699

 * context = {
 *   "user" = @ContextDefinition("entity:user", label = @Translation("User"))
 * }
→
 * context_definitions = {
 *   "user" = @ContextDefinition("entity:user", label = @Translation("User"))
 * }

Entity contexts have dedicated classes

https://www.drupal.org/node/2976400

$context = new ContextDefinition('entity:' . $entity->getEntityTypeId());
→ $context = EntityContext::fromEntityTypeId($entity->getEntityTypeId());

Render callbacks must be a closure or implement TrustedCallbackInterface or RenderCallbackInterface

https://www.drupal.org/node/2966725
This mostly applied to #pre_render methods for our code, but we also had some custom #lazy_builder callbacks.

$build['#pre_render'][] = 'my_module_block_pre_render';
→ $build['#pre_render'][] = [MyModuleBlock::class, 'preRender'];
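For the #pre_render case, the class providing the callback must also declare it as trusted. A minimal sketch, assuming a hypothetical MyModuleBlock class (the class and method names are ours):

namespace Drupal\my_module;

use Drupal\Core\Security\TrustedCallbackInterface;

class MyModuleBlock implements TrustedCallbackInterface {

  /**
   * Pre-render callback: the body of the old procedural callback moves here.
   */
  public static function preRender(array $build) {
    return $build;
  }

  /**
   * {@inheritdoc}
   */
  public static function trustedCallbacks() {
    return ['preRender'];
  }

}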

Now that we had a “working” site, it was time to run the tests. The following list of deprecations/changes was harder to track down and we would probably only have found those issues after extensive user testing.

The “Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent::getException()” method is deprecated since Symfony 4.4, use “getThrowable()” instead

https://www.drupal.org/project/drupal/issues/3113876

public function on404(GetResponseForExceptionEvent $event) {
→ public function on404(ExceptionEvent $event) {

Several file uri/scheme functions deprecated and moved to \Drupal\Core\StreamWrapper\StreamWrapperManagerInterface

https://www.drupal.org/node/3035273

\Drupal::service('file_system')->uriScheme($uri)
→ StreamWrapperManager::getScheme($uri)

user.private_tempstore is deprecated; use the ‘tempstore.private’ service instead

https://www.drupal.org/project/entity/issues/2951683

$container->get('user.private_tempstore')
→ $container->get('tempstore.private')

ConfigurablePluginInterface is deprecated in favor of ConfigurableInterface, DependentPluginInterface.

https://www.drupal.org/node/2946161

interface MyModulePluginInterface extends ConfigurablePluginInterface, PluginFormInterface, PluginInspectionInterface {
→ interface MyModulePluginInterface extends ConfigurableInterface, PluginFormInterface, PluginInspectionInterface {

Deprecate TermInterface::getVocabularyId()

https://www.drupal.org/project/drupal/issues/2850691

$term->getVocabularyId() → $term->bundle()

The ::getCurrentUserId method is deprecated

https://api.drupal.org/api/drupal/core%21modules%21node%21src%21Entity%21Node.php/function/Node%3A%3AgetCurrentUserId/8.7.x
We had references to getCurrentUserId in the config of media/node base field overrides (e.g. core.base_field_override.media.image.uid.yml). These should be replaced. Other entity types might have them too.

default_value_callback: 'Drupal\media\Entity\Media::getCurrentUserId'
→ default_value_callback: 'Drupal\media\Entity\Media::getDefaultEntityOwner'

POSTing to EntityResource can now happen at /node, /taxonomy/term … instead of /entity/node, /entity/taxonomy_term …

https://www.drupal.org/node/2737401

 * "https://www.drupal.org/link-relations/create" = "/api/my-entity/{id}"
→  * "create" = "/api/my-entity/{id}"

CSRF token route protection moved out of the REST module to be available to other core systems and contrib.

https://www.drupal.org/node/2772399

$response = $http_client->get('/rest/session/token', ['query' => ['_format' => 'hal_json']]);
→ $response = $http_client->get('/session/token', ['query' => ['_format' => 'hal_json']]);

Forwards-compatibility shims of PHPUnit 8 functionality added for PHPUnit 6 & 7

https://www.drupal.org/node/3133050
We had to change some of our tests for compatibility with the new PHPUnit version.

$this->assertContains → $this->assertStringContainsString

Overridden test methods require void return type hints

https://www.drupal.org/node/3114724

protected function onNotSuccessfulTest(\Throwable $t) {
→ protected function onNotSuccessfulTest(\Throwable $t): void {

Even though most of these changes had change records, I was not fully aware of all these breaking changes and I’m also not sure how realistic it is to keep track of everything. In the end it took about 4 full days to get all our code updated, tests debugged and code fixed. Looking at the list, it all seems like a very manageable set of changes for a project of this size. I can imagine most sites will be a lot easier to update. I guess having a decent amount of test coverage and debugging a couple of issues is probably a good enough way to deal with the changes. I’m curious how other teams handle this, so if you have a good way to maybe prevent some debugging in the future, please let me know!

Nov 18 2020
Jim Vomero

Senior Engineer

As a tech lead, Jim works with clients through the full project cycle, translating their business requirements into actionable development work and working with them to find technical solutions to their challenges.

From the consumer perspective, there’s never been a better time to build a website. User-friendly website platforms like Squarespace allow amateur developers to bypass complex code and apply well-designed user interfaces to their digital projects. Modern site-building tools aren’t just easy to use — they’re actually fun.

Anyone who has managed a Drupal website knows the same can’t be said for that platform. While rich with possibilities, the default editorial interface for Drupal feels technical, confusing, and even restrictive to users without a developer background. Consequently, designers and developers too often build a beautiful website while overlooking its backend CMS.

Drupal’s open-ended capabilities constitute a competitive advantage when it comes to developing an elegant, customer-facing website. But a lack of attention to the needs of those who maintain your website content contributes to a perception that Drupal is a developer-focused platform. By building a backend interface just as focused on your site editors as the frontend, you create a more empowering environment for internal teams. In the process, your website performs that much better as a whole.

UX principles matter for backend design as much as the frontend

Given Drupal’s inherent flexibilities, there are as many variations of CMS interfaces as there are websites on the platform. That uniqueness is part of what makes Drupal such a powerful tool, but it also constitutes a weakness.

The editorial workflow for every website is different, which opens an inevitable training gap in translating your site’s capabilities to your editorial team. Plus, despite Drupal’s open-source strengths, you’ll likely need to reinvent the wheel when designing CMS improvements specific to your organization.

For IT managers, this is a daunting situation because the broad possibilities of Drupal are often overwhelming. If you try to make changes to your interface, you can be frustrated when a seemingly easy fix requires 50 hours of development work. Too often, Drupal users will wind up working with an inefficient and confusing CMS because they’re afraid of the complexity that comes with building out a new interface.

Fortunately, redesigning your CMS doesn’t have to be a demanding undertaking. With the right expertise, you can develop custom user interfaces with little to no coding required. Personalized content dashboards and defined roles and permissions for each user go a long way toward creating a more intuitive experience.

Improving your backend design is often seen as an additional effort, but think of it as a baseline requirement. And, by sharing our user stories within the Drupal community, we also build a path toward improving the platform for the future.

Admin themes are a great starting point

Drupal’s default admin theme as of Drupal 9.4 is Claro, and it’s a good starting point for admin user experience customization. Claro was developed to address the concerns that came out of the Drupal Admin UX Study, which examined the difficulties content editors encountered with the platform.

Here at Four Kitchens, we use the Gin theme, which is based on Claro but includes extra enhancements. A number of useful modules are also available to tie add-ons together with Gin, like Gin Toolbar and Gin Layout Builder.

For our own usage (and yours, too!), we have compiled the Gin theme and some handy modules and configuration into a starter project we call Sous. Sous also incorporates an Emulsify-based frontend theme and other goodies.

This standardization is used across nearly all of our builds. As a result, our development is more efficient. Claro — and by extension, Gin — also includes some work on accessibility within the admin interface, which provides a more inclusive experience.

Additionally, both Claro and Gin incorporate responsive layouts, so if an editor needs to make changes on a phone or a tablet, they can. If you’re a long-time Drupal user, you will remember how impossible that used to be.

Use Drupal’s Views module to customize user dashboards

One of the biggest issues with Drupal’s out-of-the-box editorial tools is that they don’t reflect the way any organization actually uses the CMS. Just as UX designers look to provide a positive experience for first-time visitors to your site, your team should aim for delivering a similarly strong first impression for those managing its content.

By default, Drupal takes users to their profile pages upon login, which is useful to… almost no one. Plus, the platform’s existing terminology uses cryptic terms such as “node,” “taxonomy,” and “paragraphs” to describe various content items. From the beginning, you should remove these abstract references from your CMS. Your editorial users shouldn’t have to understand how the site is built to own its content.

In the backend, every Drupal site has a content overview page, which shows the building blocks of your site. Offering a full list that includes cryptic timestamps and author details, this page constitutes a floodgate of information. Designing an effective CMS is as much an exercise in subtraction as addition. Whether your user’s role involves reviewing site metrics or new content, their first interaction with your CMS should display what they use most often.

If one population of users is most interested in the last item they modified, you can transform their login screen to a custom dashboard to display those items. If another group of users works exclusively with SEO, you can create an interface that displays reports and other common tasks. Using Drupal’s Views module, dashboards like these are possible with a few clicks and minimal coding.

By tailoring your CMS to specific user habits, you allow your website teams to find what they need and get to work faster. The most dangerous approach to backend design is to try and build one interface to rule them all.

Listen to your users and ease frustrations with a CMS that works

Through Drupal Views, you can modify lists of content and various actions to control how they display in your CMS. While Views provides many options to create custom interfaces, your users themselves are your organization’s most vital resource. By watching how people work on your site, you can recognize areas where your CMS is falling short.

Drupal content dashboard

Even if you’ve developed tools that aimed to satisfy specific use cases, you might be surprised by the way your tools are used. Through user experience testing, you’ll often find the workarounds your site editors have developed to manage the site.

In one recent example, site editors needed to link to a site page within the CMS. Without that functionality, they would find the URL by viewing the source code in another tab and copying its node ID number. Anyone watching these users would find their process cumbersome, time-consuming, and frustrating. Fortunately, there’s a Drupal module called Linkit that was implemented to easily eliminate this needless effort.

There are many useful modules in the Drupal ecosystem that can enhance the out-of-the-box editorial experience. Entity Clone expedites the content creation process. Views Bulk Operations and Bulk Edit simplify routine content update tasks. Computed Field and Automatic Entity Label take the guesswork out of derived or dependent content values. Using custom form modes and Field Groups can help bring order and streamline the content creation forms.

Most of the time, your developers don’t know what solutions teams have developed to overcome an ineffective editorial interface. And, for fear of the complexity required to create a solution, these supposed shortcuts too often go unresolved. Your backend users may not even be aware their efforts could be automated or otherwise streamlined. As a result, even the most beautiful, user-friendly website is bogged down by a poorly designed CMS.

Once these solutions are implemented, however, you and your users enjoy a shared win. And, through sharing your efforts with the Drupal community, you and your team build a more user-friendly future for the platform as well.


Sep 23 2020
Michael Lutz

Senior Engineer

Primarily responsible for maintaining the Drupal core migration system, Michael often spends long nights and weekends working through the Drupal project issues queue, solving problems, and writing code.

Working in digital design and development, you grow accustomed to the rapid pace of technology. For example: After much anticipation, the latest version of Drupal was released this summer. Just months later, the next major version is in progress.

At July’s all-virtual DrupalCon Global, the open-source digital experience conference, platform founder Dries Buytaert announced Drupal 10 is aiming for a June 2022 release. Assuming those plans hold, Drupal 9 would have the shortest release lifetime of any recent major version.

For IT managers, platform changes generate stress and uncertainty. Considering the time-intensive migration process from Drupal 7 to 8, updating your organization’s website can be costly and complicated. Consequently, despite a longtime absence of new features, Drupal 7 still powers more websites than Drupal 8 and 9 combined. And, as technology marches on, the end of its life as a supported platform is approaching.

Fortunately, whatever version your website is running, Drupal is not running away from you. Drupal’s users and site builders may be accustomed to expending significant resources to update their website platform, but the plan for more frequent major releases alleviates the stress of the typical upgrade. And, for those whose websites are still on Drupal 7, Drupal 10 will continue offering a way forward.

The news that Drupal 10 is coming sooner rather than later might have been unexpected, but you still have no reason to panic just yet. However, your organization shouldn’t stand still, either.

Drupal 10 is coming (image via dri.es).

The end for Drupal 7 is still coming, but future upgrades will be easier

Considering upgrading to Drupal 8 involves the investment of building a new site and migrating its content, it’s no wonder so many organizations have been slow to update their platform. Drupal 7 is solid and has existed for nearly 10 years. And, fortunately, it’s not reaching its end of life just yet.

At the time of Drupal 9’s release, Drupal 7’s planned end of life was set to arrive late next year. This meant the community would no longer release security advisories or bug fixes for that version of the platform. Affected organizations would need to contact third-party vendors for their support needs. With the COVID-19 pandemic upending businesses and their budgets, the platform’s lifespan has been extended to November 28, 2022.

Drupal’s development team has retained its internal migration system through versions 8 and 9, and it remains part of the plan for the upcoming Drupal 10 as well. And the community continues to maintain and improve the system in an effort to make the transition easier. If your organization is still on Drupal 7 now, you can use the migration system to jump directly to version 9, or version 10 upon its release. Drupal has no plans to eliminate that system until Drupal 7 usage numbers drop significantly.

Once Drupal 10 is ready for release, Drupal 7 will finally reach its end of life. However, paid vendors will still offer support options that will allow your organization to maintain a secure website until you’re ready for an upgrade. But make a plan for that migration sooner rather than later. The longer you wait for this migration, the more new platform features you’ll have to integrate into your rebuilt website.

Initiatives for Drupal 10 focus on faster updates, third-party software

In delivering his opening keynote for DrupalCon Global, Dries Buytaert outlined five strategic goals for the next iteration of the platform. Like the work for Drupal 9 that began within the Drupal 8 platform, development of Drupal 10 has begun under the hood of version 9.

A Drupal 10 Readiness initiative focuses on upgrading third-party components that count as technological dependencies. One crucial component is Symfony, which is the PHP framework Drupal is based upon. Symfony operates on a major release schedule every two years, which requires that Drupal is also updated to stay current. The transition from Symfony 2 to Symfony 3 created challenges for core developers in creating the 8.4 release, which introduced changes that impacted many parts of Drupal’s software.

To avoid a repeat of those difficulties, it was determined that the breaking changes involved in a new Symfony major release warranted a new Drupal major release as well. While Drupal 9 is on Symfony 4, the Drupal team hopes to launch 10 on Symfony 6, which is a considerable technical challenge for the platform’s team of contributors. However, once complete, this initiative will extend the lifespan of Drupal 10 to as long as three or four years.

Other announced initiatives included greater ease of use through more out-of-the-box features, a new front-end theme, creating a decoupled menu component written in JavaScript, and, in accordance with its most requested feature, automated security updates that will make it as easy as possible to upgrade from 9 to 10 when the time comes. For those already on Drupal 9, these are some of the new features to anticipate in versions 9.1 through 9.4.

Less time between Drupal versions means an easier upgrade path

The shift from Drupal 8 to this summer’s release of Drupal 9 was close to five years in the making. Fortunately for website managers, that update was a far cry from the full migration required from version 7. While there are challenges such as ensuring your custom code is updated to use the most recent APIs, the transition was doable with a good tech team at your side.

Still, the work that update required could generate a little anxiety given how comparatively fast another upgrade will arrive. But the shorter time frame will make the move to Drupal 10 easier for everybody. Less time between updates also translates to less deprecated code, especially if you’re already using version 9. But if you’re not there yet, the time to make a plan is now.


Sep 22 2020
Pierce Lamb · 12 min read

Smart Content is a module for Drupal which enables personalized content selection for anonymous and authenticated users. The module supplies the UI and logic for creating and making these selections, as well as some simple browser-based conditions to test, but Smart Content by itself does not provide the data needed to support them. However, there are a couple of modules in its ecosystem that support third-party data providers, e.g. Demandbase and FunnelEnvy. The idea here is that if your site is already using one of these data providers to record data about anonymous users, that data can be used to deliver personalized content within Smart Content. Recently, I built a connector from Marketo RTP to Smart Content; I will update this blog with a link once it is a public module. For now, however, I believe detailing how I did it can help others connect Smart Content to any third-party marketing API.

The entry point is first understanding what a Response from the marketing API looks like. For example, in FunnelEnvy, there are two fundamental options: matching the ID of an Audience, or matching the ID of a Variation. In DemandBase, there are myriad dimensions in the response. In Marketo RTP, we have 6 dimensions with a number of sub-dimensions. Either way, this Response needs to be understood so we can start representing it inside Smart Content. One way to look at this response is to query the marketing API in your browser console. For example, in inspector -> console, for RTP, I would type rtp('get', 'visitor', function(data){console.log(data)}); and observe the results. We’ll take a step back here to discuss setting up Smart Content before continuing.

The entry point of Smart Content (SC) is the Segment Set. Administering Segment Sets is found in the Structure -> Smart Content -> Manage Segment Sets menu (once SC is installed). A Segment Set represents some generalized way you want to segment anonymous users on your site. For example, you might title a Segment Set ‘Industry’ and then within the set, create Segments that correlate to industries like ‘banking’ or ‘manufacturing’. Once you’ve created the ‘Industry’ Segment Set, press the ‘edit’ button and you should be brought to a page where you can add Segments. This brings us to the next core piece of SC: the Condition.

You’ll notice that under a Segment you have the ability to create a list of conditions. You can select “If all” (AND) or “If any” (OR); when the chosen rule holds, the segment evaluates to true, otherwise false. SC works by iterating through these segments and checking their conditions; once a segment’s condition(s) evaluate to true, a winner has been found and SC delivers a reaction (personalized content) based on the true segment. These conditions correlate exactly to the API Response data we discussed above. So, in code, we’ll need to create a condition that matches the API we’re using.

At this point I’ll assume that you’ve created a custom module in Drupal to represent your connector; in my case I’ve named it ‘smart_content_marketo_rtp’. Within your module, create a ‘src’ folder and a ‘js’ folder. Inside the src folder create a ‘Plugin’ folder, and inside that folder a ‘Derivative’ folder and a ‘smart_content’ folder. In the ‘smart_content’ folder we’ll have a ‘Condition’ folder, and inside that a ‘Group’ folder and an optional ‘Type’ folder. The end result should look like this:
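Reconstructed from the description above, the folder layout is:

smart_content_marketo_rtp/
├── js/
└── src/
    └── Plugin/
        ├── Derivative/
        └── smart_content/
            └── Condition/
                ├── Group/
                └── Type/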

The first piece of code we’ll add to this is a new PHP class in /smart_content/ to represent our new condition. In my case I titled it ‘MarketoCondition.php’. In addition to that, we also need to add the .libraries.yml file. Here we’ll configure the module’s JS files which interact with Smart Content’s backend. Since we’re creating a new condition, we’ll follow SC’s naming conventions. In the libraries.yml file, add your version of this config:

condition.marketo_rtp:
  header: true
  version: 1.x
  js:
    js/condition.marketo_rtp.js: { }
  dependencies:
    - smart_content/storage
    - smart_content/condition_type.standard

Note the filename you created under js:; you’ll also need to add a JS file with this name under your js folder (e.g. condition.marketo_rtp.js). Okay, back to the MarketoCondition.php file.

Here is where we will define our new Condition. It will look like this:

namespace Drupal\smart_content_marketo_rtp\Plugin\smart_content\Condition;

use Drupal\smart_content\Condition\ConditionTypeConfigurableBase;

/**
 * Provides a Marketo condition plugin.
 *
 * @SmartCondition(
 *   id = "marketo",
 *   label = @Translation("Marketo"),
 *   group = "marketo",
 *   deriver = "Drupal\smart_content_marketo_rtp\Plugin\Derivative\MarketoConditionDeriver"
 * )
 */
class MarketoCondition extends ConditionTypeConfigurableBase {

  /**
   * {@inheritdoc}
   */
  public function getLibraries() {
    $libraries = array_unique(array_merge(
      parent::getLibraries(),
      ['smart_content_marketo_rtp/condition.marketo_rtp']
    ));
    return $libraries;
  }

}

Note the definitions in the comments. The syntax here must be preserved as Smart Content reads these entries and uses them internally. Note the filepath in the ‘deriver’ assignment; go ahead and create this PHP class in your ‘Derivative’ folder as well. Make sure to change the string in the getLibraries() method so it matches your module name and your JS file config definition in libraries.yml.

What this file is doing is defining a new Condition for Smart Content; the ‘id’ key defines how it is named when it’s passed in SC, the ‘label’ key defines how it will appear to an end user and the ‘deriver’ key points to a class that will define how SC should interpret all the dimensions in the API response we discussed earlier. In essence “what conditions should be available under the Marketo id?” Finally, overriding the getLibraries() function allows us to attach our custom JS file whenever our new Condition is used. That JS file will describe how to interact with the 3rd party API that powers our new Condition.

Next, let’s move to the deriver file defined in the comment. As shown, this file will be in src/Plugin/Derivative and must match the name you put in the comment exactly. This file can be largely isomorphic to other ConditionDerivers from Smart Content; a good example is the Demandbase module. The one method we will care about in creating a custom connector is the getStaticFields() method. This is where the connector will map the marketing API’s dimensions to actual Smart Content types. If you’ve checked out the Demandbase link above, you’ll see that the three basic SC types are ‘boolean’, ‘number’ and ‘textfield’. Hopefully the marketing API’s response you’re working with fits neatly into these. When I wrote the Marketo RTP connector, the response did not, and I had to write my own custom types. This explains where the types ‘arraytext’ and ‘arraynumber’ come from in the getStaticFields() method in this connector:

protected function getStaticFields() {
  return [
    'abm-code' => [
      'label' => 'ABM Code',
      'type' => 'arraynumber',
    ],
    'abm-name' => [
      'label' => 'ABM Name',
      'type' => 'arraytext',
    ],
    'category' => [
      'label' => 'Category',
      'type' => 'textfield',
    ],
    'group' => [
      'label' => 'Group',
      'type' => 'textfield',
    ],
    'industries' => [
      'label' => 'Industry',
      'type' => 'arraytext',
    ],
    'isp' => [
      'label' => 'Internet Service Provider',
      'type' => 'boolean',
    ],
    'location-country' => [
      'label' => 'Country',
      'type' => 'arraytext',
    ],
    'location-city' => [
      'label' => 'City',
      'type' => 'arraytext',
    ],
    'location-state' => [
      'label' => 'State',
      'type' => 'arraytext',
    ],
    'matchedSegments-name' => [
      'label' => 'Segment Name',
      'type' => 'arraytext',
    ],
    'matchedSegments-code' => [
      'label' => 'Segment ID',
      'type' => 'arraynumber',
    ],
    'org' => [
      'label' => 'Organization',
      'type' => 'textfield',
    ],
  ];
}

This will look confusing at first, but if you followed the link above about RTP’s 6 dimensions you’ll see that the getStaticFields array members match exactly to these 6 dimensions. I’ve introduced a convention here for any dimension that contains a keyed array: I’ve used a ‘-’ to separate the dimension itself from the nested key. For example ‘abm-code’ and ‘abm-name.’ This dash will be necessary later when we parse out a nested key from a condition. Note that the ‘label’ key will designate the string that users of your connector see when they create a new Condition.

At this point, if you’ve created your new Condition and your Deriver files, we have one more file to add before seeing our new Condition inside Smart Content. Under the ‘Group’ folder create a new PHP class titled after your marketing API; e.g., mine is simply named ‘Marketo.php’. This file groups all of the new conditions you defined in your ConditionDeriver under one name. It is a very simple file:

namespace Drupal\smart_content_marketo_rtp\Plugin\smart_content\Condition\Group;

use Drupal\smart_content\Condition\Group\ConditionGroupBase;

/**
 * Provides a condition group for Marketo conditions.
 *
 * @SmartConditionGroup(
 *   id = "marketo",
 *   label = @Translation("Marketo")
 * )
 */
class Marketo extends ConditionGroupBase {

}

The ‘id’ field in the comment links it to MarketoCondition.php and the ‘label’ field defines what users will see as a grouping when they create a new Condition.

At this point, if you did not create any new types in the getStaticFields() method, we can edit our Industry Segment Set, flush caches, and then press the ‘Select a condition’ drop down. You should now be able to scroll through this list and see the new grouping and the dimensions you defined in your Deriver file. If they do not appear, then one of the steps above was done incorrectly.

If you did not create any new Types, you can skip this next section. In the getStaticFields method I pasted above, you can see two new Types: ‘arraynumber’ and ‘arraytext’. To define these new types, we’ll create two new PHP classes in the src/Plugin/smart_content/Condition/Type folder: ‘ArrayNumber.php’ and ‘ArrayText.php’. Since these two new types depend on a more primitive type (textfield or number), I can simply extend those more primitive types. As such, my ArrayNumber.php file will look like:

namespace Drupal\smart_content_marketo_rtp\Plugin\smart_content\Condition\Type;

use Drupal\smart_content\Plugin\smart_content\Condition\Type\Number;

/**
 * Provides an 'arraynumber' ConditionType.
 *
 * @SmartConditionType(
 *   id = "arraynumber",
 *   label = @Translation("ArrayNumber"),
 * )
 */
class ArrayNumber extends Number {

  /**
   * {@inheritdoc}
   */
  public function getLibraries() {
    return ['smart_content_marketo_rtp/condition_type.array'];
  }

}

And

namespace Drupal\smart_content_marketo_rtp\Plugin\smart_content\Condition\Type;

use Drupal\smart_content\Plugin\smart_content\Condition\Type\Textfield;

/**
 * Provides an 'arraytext' ConditionType.
 *
 * @SmartConditionType(
 *   id = "arraytext",
 *   label = @Translation("ArrayText"),
 * )
 */
class ArrayText extends Textfield {

  /**
   * {@inheritdoc}
   */
  public function getLibraries() {
    return ['smart_content_marketo_rtp/condition_type.array'];
  }

}

As you can see, since these new types will use all the same operators as their primitive types, the only method we need to override is the getLibraries() method which will pass a custom JS file for evaluating the truth values of our new Types. Note that the ‘id’ field MUST match the type name you gave in your Deriver file. Make sure to add that JS file in libraries.yml and your /js/ folder. The libraries.yml definition will look like this:

condition_type.array:
  header: true
  version: 1.x
  js:
    js/condition_type.array.js: { }
  dependencies:
    - core/drupal

I will not go into too much detail on condition_type.array.js as it’s unlikely most readers are defining a new Type. The key for this file is to define new functions that Smart Content will call when it encounters an ‘arraytext’ or ‘arraynumber’. These functions follow specific naming conventions, e.g.:

Drupal.smartContent.plugin.ConditionType['type:arraytext'] = function (condition, value) {...}
Drupal.smartContent.plugin.ConditionType['type:arraynumber'] = function (condition, value) {...}

Where condition is the Smart Content Condition represented in JSON and the value is the value discovered on the page for a given visitor. These functions must return boolean values. You can check out /modules/contrib/smart_content/js/condition_type.standard.js to get a better sense of these functions. Also you can message ‘plamb’ on the Drupal Slack.
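For a rough sense of the shape of these functions, here is a sketch of how a ‘type:arraytext’ callback might check equality (the condition.settings keys are assumptions on my part; see condition_type.standard.js for the real conventions):

Drupal.smartContent.plugin.ConditionType['type:arraytext'] = function (condition, value) {
  // The value configured on the condition in the Drupal UI (key name assumed).
  var expected = condition.settings.value;
  // The API returned an array for this dimension; the condition is true if
  // any element matches the configured value.
  return Array.isArray(value) && value.indexOf(expected) !== -1;
};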

If you’ve gotten this far, then we have one more file to populate. Way back at the beginning we created the file js/condition.marketo_rtp.js, but left it empty. In this file we will tell Smart Content what to do when it comes across a condition of type ‘Marketo’. We’ll open this file with the following:

(function (Drupal) {

  Drupal.smartContent = Drupal.smartContent || {};
  Drupal.smartContent.plugin = Drupal.smartContent.plugin || {};
  Drupal.smartContent.plugin.Field = Drupal.smartContent.plugin.Field || {};

  ...

}(Drupal));

The primary function in this file will follow Smart Content’s naming conventions:

Drupal.smartContent.plugin.Field['marketo'] = function (condition) {...}

Note that the text ‘marketo’ matches the id we’ve been passing around in many other files. When Smart Content evaluates a field of grouping ‘marketo’, it will execute this function. The first job this function must perform is making sure we can access the marketing API and get a Response. It does that by constructing a Promise, so the call can happen asynchronously, and then it returns a resolution of that Promise, containing the relevant value, to the Smart Content backend. As such, our function’s skeleton will look like this:

Drupal.smartContent.plugin.Field['marketo'] = function (condition) {
  Drupal.smartContent.marketo = new Promise((resolve, reject) => {
    ...
  });
  return Promise.resolve(Drupal.smartContent.marketo).then((value) => {
    ...
  });
}

Let’s first take a look at what’s happening inside the Promise. Here we will be checking that we can call the API and resolve the Promise with its Response:

let attempts = 0;
const interval = setInterval(() => {
  if (attempts < 200) {
    if (typeof rtp === "function") {
      clearInterval(interval);
      rtp('get', 'visitor', function (data) {
        if (data.results) {
          Drupal.smartContent.storage.setValue('marketo', data.results);
          resolve(data.results);
        } else {
          resolve({});
        }
      });
    }
  }
  else {
    clearInterval(interval);
    resolve({});
  }
  attempts++;
}, 10);

All this code is doing is running an interval function through 200 attempts of trying to resolve a given variable as a function. This structure closely models the other Smart Content modules; the main difference is that in the others the function is waiting for a JS library to become available on the page rather than a function resolution. This is merely an artefact of how Marketo RTP works. Once the rtp variable is recognized as a function, the code can successfully call it the way Marketo intends. The Response data can then be passed into the resolve statement to be dealt with when the function returns.

You might wonder how the Drupal.smartContent.storage.setValue('marketo', data.results); line made it into this code block. I omitted some earlier code for clarity that deals with this. Most Smart Content connectors use the browser’s Local Storage to store the results of the marketing API’s response. This is because the metadata associated with a user in the API rarely changes. When a user returns to the site, instead of running the setInterval polling and waiting for the rtp function to return, we can simply grab the stored values out of their browser, which will always be faster. Putting the Local Storage code back in, the code block will look like this:

Drupal.smartContent.plugin.Field['marketo'] = function (condition) {
  let key = condition.field.pluginId.split(':')[1];
  if (!Drupal.smartContent.hasOwnProperty('marketo')) {
    if (!Drupal.smartContent.storage.isExpired('marketo')) {
      let values = Drupal.smartContent.storage.getValue('marketo');
      Drupal.smartContent.marketo = values;
    }
    else {
      Drupal.smartContent.marketo = new Promise((resolve, reject) => {
        // Run the setInterval polling code from above.
      });
    }
  }
}

So the setInterval polling code runs only if the ‘marketo’ key isn’t found in Local Storage. You can get a better sense of what these methods are doing here. But wait, what is that let key = condition.field.pluginId.split(':')[1]; line about? The key is used in the return statement, which we will discuss next.

When Smart Content passes around condition information, its key is always the group name, ‘marketo’, appended with the key from the Deriver class that was selected in the condition. Common keys for the RTP connector would be ‘marketo:matchedSegments-code’ or ‘marketo:industries’. The line we mentioned above, let key = condition.field.pluginId.split(':')[1];, is thereby grabbing the string that occurs after the ‘:’ and storing it for use in the return statement. This key will be used to parse the Response structure that comes back from the marketing API. Now we can look at the full return statement:

return Promise.resolve(Drupal.smartContent.marketo).then((value) => {
  // All single value members and arrays containing values.
  if (value.hasOwnProperty(key)) {
    return value[key];
  }
  else {
    // All arrays of arrays.
    var is_array_type = marketo_testForArray(condition.field.type);
    if (is_array_type) {
      var refined_key = marketo_getCorrectKey(key);
      if (value.hasOwnProperty(refined_key)) {
        return value[refined_key];
      }
    }
  }
  return null;
});

Promise.resolve().then() guarantees the code in the then block will run when the promise resolves. Since we earlier stored the key the Smart Content condition cares about, we first check if the marketing API’s returned value simply contains that key. If so, we return the value at that key back to Smart Content; simple enough. However, if the key in question is one of our custom array types, we need to detect that, split the main key from the nested key, and return the correct value. The two functions called in the else block provide that functionality.
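The post doesn’t show those two helpers, but based on how they are called, plausible implementations look something like this (assumed, for illustration only):

// TRUE for the custom array types defined in the Deriver.
function marketo_testForArray(type) {
  return type === 'arraytext' || type === 'arraynumber';
}

// 'matchedSegments-code' -> 'matchedSegments': strip the nested key that
// follows the '-' convention introduced earlier.
function marketo_getCorrectKey(key) {
  return key.split('-')[0];
}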

With these pieces in place, we can now test our new connector.

At this point, we would log in to whichever marketing platform we’re connecting to and create a new test segment to test against our connector. In Marketo, this means going to Web Personalization -> Segments -> Create New Segment. Every marketing platform is a bit different, but many of them have the option of creating a segment based on a URL parameter, which is easy for local testing. In Marketo we would add a ‘Behavioral -> Include Pages’ segment and define a URL for testing, e.g. /?test=yes. Under ‘domains’ we’d make sure our domain is selected and hit save. With Marketo, we can hover over the segment we just created and get the ID. This is the value Smart Content will be matching against when our connector runs.

If we then open a fresh incognito window (remember the Local Storage discussion earlier?) and load /?test=yes, we should be matched into the Marketo Segment. We can verify this by opening the console and running rtp('get', 'visitor', function(data){console.log(data)}); again. Expanding the return structure should show our ID in ‘matchedSegments’.

To see this matching in a decision block, we would go back to Drupal and load Structure -> Smart Content -> Manage Segment Sets. If you created a Segment Set earlier, use that one or create a new one. For Marketo we will either create or reuse a condition, select Segment ID (Marketo), and paste the segment ID we just copied. Once saved and with caches flushed, we can add a decision block in Structure -> Block Layout -> Place Block, select our new segment set, choose blocks, and save. In a new incognito window, we would load /?test=yes and see whatever block we chose.

If you have any questions you can find me @plamb on the Drupal Slack.

Jul 30 2020

Over 500,000 businesses leverage Drupal to launch their websites and projects. From NASA to Tesla, public and private institutions regularly rely on Drupal to launch large-scale websites capable of handling their development and visual needs. But, starting a Drupal project doesn’t guarantee success. In fact, 14% of all IT projects outright fail, 43% exceed their initial budgets, and 31% fail to meet their original goals! In other words, if you want to create a successful Drupal project, you need to prepare. Don’t worry! We’ve got your back. Here are 5 things to keep in mind when starting a Drupal-based project.

1. GATHER REQUIREMENTS FROM STAKEHOLDERS EARLY AND OFTEN

According to PMI, 39% of projects fail due to inadequate requirements. Believe it or not, requirement gathering is the single most important stage of project development. In fact, it’s the first step Drupal itself takes when pushing out new projects (see this scope document for their technical document project). Gathering requirements may sound easy, but it can be a time-consuming process. We recommend using SMART (Specific, Measurable, Agreed Upon, Realistic, Time-based) to map out your specific needs. If possible, involve the end-user during this stage. Don’t assume you know what users want; ask them directly. Internally, requirements gathering should rally nearly every stakeholder with hefty amounts of cross-collaboration between departments. You want to lean heavily on data, establish your benchmarks and KPIs early, and try to involve everyone regularly. The single biggest project mistake is acting like requirements are set-in-stone. If you just follow the initial requirements to a “T,” you may push out a poor project. You want to regularly ask questions, communicate issues, and rely on guidance from stakeholders and subject matter experts (SMEs) to guide your project to completion.

2. PLAN YOUR SDLC/WORKFLOW PIPELINE

We all have different development strategies. You may leverage freelancers, a best-in-class agency, or internal devs to execute your Drupal projects. Typically, we see a combination of two of the above. Either way, you have to set some software development lifecycle and workflow standards. This gets complex. On the surface, you should think about coding standards, code flow, databases, repositories, and all of the other development needs that should be in sync across devs. But there are also deeper, more holistic components to consider. Are you going to use agile? Do you have a DevOps strategy? Are you SCRUM-based? Do you practice design and dev sprints? At Mobomo, we use an agile-hybrid development cycle to fail early, iterate regularly, and deploy rapidly. But that’s how we do things. You need to figure out how you want to execute your project. We’ve seen successful Drupal projects using virtually every workflow system out there. The way you work matters, sure. But getting everyone aligned under a specific way of working is more important. You can use the “old-school” waterfall methodology and still push out great projects. However, to do that, you need everyone on the same page.

3. USE SHIFT-LEFT TESTING FOR BUG AND VULNERABILITY DETECTION

Drupal is a secure platform. Of the four most popular content management systems, Drupal is the least hacked. But that doesn’t mean it’s impenetrable. You want to shift-left test (i.e., automate testing early and often in the development cycle). Drupal 8+ has PHPUnit built-in — taking the place of SimpleTest. You can use this to quickly test out code. You can perform unit tests, kernel tests, and functional tests with and without JavaScript. You can also use Nightwatch.js to run tests. Of course, you may opt for third-party automation solutions (e.g., RUM, synthetic user monitoring, etc.). The important thing is that you test continuously. There are three primary reasons that shift-left testing needs to be part of your development arsenal; a minimal test sketch follows the list below.

  • It helps prevent vulnerabilities. The average cost of a data breach is over $3 million. And it takes around 300 days to identify and contain website breaches.
  • It bolsters the user experience. A 100-millisecond delay in page load speed drops conversions by 7%. Meanwhile, 75% of users judge your credibility by your website’s design and performance, and 39% of users will stop engaging with your website if your images take too long to load. In other words, simple glitches can result in massive issues.
  • It reduces development headaches. Nothing is worse than building out completely new features only to discover an error that takes you back to square one.
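To make the PHPUnit point concrete, here is a minimal sketch of a Drupal 8+ unit test. The module name (my_module) and its Calculator class are hypothetical; only the UnitTestCase base class comes from core.

<?php

namespace Drupal\Tests\my_module\Unit;

use Drupal\Tests\UnitTestCase;
use Drupal\my_module\Calculator;

/**
 * Tests the hypothetical Calculator class in isolation.
 *
 * @group my_module
 */
class CalculatorTest extends UnitTestCase {

  /**
   * Unit tests run fast because they need no database or bootstrap.
   */
  public function testAdd() {
    $calculator = new Calculator();
    $this->assertEquals(5, $calculator->add(2, 3));
  }

}

From the Drupal root, a test like this would typically run with something like vendor/bin/phpunit -c core modules/custom/my_module/tests.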

4. GET HYPER-FAMILIAR WITH DRUPAL’S API

If you want to build amazing Drupal projects, you need to familiarize yourself with the Drupal REST API. This may sound like obvious advice, but understanding how Drupal’s built-in features, architecture, and coding flow work can help you minimize mistakes and maximize your time-to-launch. The last thing you want to do is code redundantly when Drupal may automate some of that coding on its end. For more information on Drupal’s API and taxonomy, see Drupal API. We know! If you’re using Drupal, you probably have a decent idea of what its API looks like. But make sure that you understand all of its core features to avoid headaches and redundancies.
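As one concrete illustration, here is a minimal sketch of consuming Drupal’s core REST API from PHP. It assumes the core REST and Serialization modules are enabled with a GET resource configured for nodes; the domain and node ID are placeholders.

<?php

// Fetch a node as JSON via core REST (?_format=json), using the
// Guzzle client that Drupal exposes as the http_client service.
$client = \Drupal::httpClient();
$response = $client->get('https://example.com/node/1?_format=json');
$data = json_decode((string) $response->getBody(), TRUE);

// Serialized field values come back as arrays of items.
print $data['title'][0]['value'];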

5. SET STANDARDS

Every development project needs standards. There are a million ways to build a website or app. But you can’t use all of those million ways together. You don’t want half of your team using Drupal’s built-in content builder and the other half using Gutenberg. Everyone should be on the same page. This goes for blocks, taxonomy, and every other coding need and task you’re going to accomplish.

You need coding standards, software standards, and process standards to align your team to a specific framework. You can develop standards incrementally, but they should be shared consistently across teams. Ideally, you’ll build a standard for everything. From communication to development, testing, launching, and patching, you should have set-in-stone processes.

In the past, this was less of an issue. But, with every developer rushing to agile, sprint-driven methodologies, it can be easy to lose sight of standards in favor of speed. Don’t let that happen. Agile doesn’t mean “willy-nilly” coding and development for the fastest possible launch. It still has to be systematic. Standards allow you to execute faster and smarter across your development pipeline.

NEED SOME HELP?

At Mobomo, we build best-in-class Drupal projects for brands across the globe. From NASA to UGS, we’ve helped private and public entities launch safe, secure, and exciting Drupal solutions. Are you looking for a partner with fresh strategies and best-of-breed, agile-driven development practices?

Contact us. Let’s build your dream project — together.

Jul 28 2020
Jul 28


DRUPAL MIGRATION PREPARATION AUDIT

All good things must come to an end, and Drupal 7 will soon meet its end. Does your organization have its migration plan to Drupal 9 in order? Here’s what you need to know to keep your Drupal site running and supported.

Talk to Our Drupal Migration Experts Now!

OUR APPROACH TO DRUPAL MIGRATION

  • Analyze 
  • Inventory
  • Migration
  • Revision
  • SEO

OVERVIEW

Staying up to date with Drupal versions is vital to the health of your site:

  • Future-proofing
  • Avoiding the end-of-life cut-off
  • Performance
  • Security

GOALS

  1. Catalog existing community contributed modules necessary to the project
  • Do these modules have a corresponding Drupal 8 version?
  • If the answer to the above question is no, is there an alternative?
  • Is there an opportunity to optimize or upgrade the site’s usage of contributed modules?
  2. Catalog existing custom-built modules
  • Do these modules rely on community contributed modules that may not have a migration path to Drupal 8?
  • Do these modules contain deprecated function calls?
  • Are there any newer community contributed modules that may replace the functionality of the custom modules?
  3. Review existing content models
  • How complex is the content currently (fields, taxonomy, media)?
  • What specific integrations need to be researched so content will have feature parity?
  4. Catalog and examine third-party integrations
  • Is there any kind of e-commerce involved?
  • Do these third-party integrations have any Drupal 8 community modules?
  5. Catalog user roles and permissions
  • Do user accounts use any type of SSO?
  • Is there an opportunity to update permissions and clean up roles?

PRE-AUDIT REQUIREMENTS

  • Access to the codebase
  • Access to the database
  • Access to a live environment (optional)
  • Access to integrations in order to evaluate level of effort

DELIVERABLES

Module Report

The module report should contain an outline of the existing Drupal 7 modules with the corresponding path to Drupal 8, whether that’s an upgraded version of the existing module or a similar module. This report should also contain a sheet outlining any deprecated function usage for the custom modules that will need to be ported to Drupal 8.

Content Model Report

The content model report should contain an overview of the existing site’s content types, users, roles, permissions, and taxonomic vocabularies, with each field given special consideration. Recommendations should be made in the report to improve the model when migrating to Drupal 8.

Integration Report

The integration report contains a catalog of the third-party integrations currently in use and marks those with an existing contributed module from the community and those that will require custom work to integrate with the Drupal 8 system.


Contact us. We’ll help you expand your reach.

Jul 10 2020
Jul 10


When you first sit down to create your Drupal website, you have plenty of decisions to make. What are your first blog posts going to be? What kinds of marketing materials do you need to help your website convert? What is your SEO strategy to boost your SERP position? These are all important, and we highly recommend that you consider each point before you launch your first website.

But those are details. The most significant decision you’re going to make is what theme you’ll use. Think of your theme as the building block of your website. It’s how users are going to perceive your site, interpret your content, and engage with your products or services. You want a beautiful, interactive, intuitive, and easy-to-browse website that pushes customers to think, engage, and consume your rich creatives.

Here’s the problem: there are thousands of Drupal themes. When you first look through the avalanche of bright colors, minimal panes, and unique content configurations, it can be dizzying. How do you pick a theme with that certain something that sets you apart? 

Here are some criteria to help you sift through the tsunami of designs on the market.

How Important is Your Drupal Theme, Really?

At some point, you need to pull the trigger. But how soon should you go with your gut instinct? After all, is picking the “perfect” theme really that important? In today’s crowded theme ecosystem, it’s easy to think that website design is a secondary factor in your website build process. Many websites today have eerily similar themes, and you may be tempted to copy-paste that minimalist, white-space-heavy style your competitors probably use.

Don’t make the mistake of minimizing the importance of the theme. Your competitors may use cookie-cutter themes, but you shouldn’t. Here’s why:

  • 38% of people will flat out refuse to engage with a website if its looks aren’t appealing to them.
  • 88% of people won’t return to your website ever again after a single bad experience.
  • 75% of customers make a judgment call on your brand’s credibility based on your website design.
  • Given 15 minutes to read content, people would rather view something beautifully designed than something plain-looking.
  • 94% of negative feedback regarding your website will be design related.

In other words, your customers are going to judge the efficacy of your brand based on your website’s design. Remember the phrase “first impressions are everything”? Well, 94% of first impressions are based on design—you want something stunning. Obviously, design is still a highly personal experience. Some people like quirky and weird, some like minimal and smooth, and others like aggressive and animation-heavy. It depends on your end user and who you are as a brand.

So how do you go about picking the right one? After all, there’s a lot at stake. Your theme is going to be the first thing customers see when they click on your website. Here are the three core components of website themes you should consider before you make your choice.

1. Your Brand’s Identity

We all know that branding is a big deal. 89% of marketers say that branding is their top goal, and branding is the first thing that 89% of investors look at when deciding whether or not to open their wallets. So, when it comes to your design, brand should be front of mind. Who is your company? What does it stand for? And, most of all, what does it look like?

Your Drupal theme is a powerful branding tool. Every single component of your website is an opportunity for branding. We could get overly complicated diving into website branding, but we’ll stick with the simple stuff. Let’s talk about color. Seems simple enough, right? Check this out:

  • Color alone improves brand recognition by 80%.
  • 93% of people focus on your brand’s color when buying products.
  • When people make subconscious decisions about your product, 90% of that decision is related to color.

Ok! So color is obviously important. But what about all the other “stuff” on your website? Does the position of content boxes, navigation menu, and blog posts really matter? You bet! Consistent brand representation across content boosts bottom-line profits by 33% on average. And 80% of people think content is what drives them to really engage and build loyalty with brands.

In a nutshell, think about branding when you look at themes. 90% of users expect you to have consistent branding across all channels. If you can’t find a theme that screams “you,” that’s OK: build one.

2. Performance

The theme you choose will have a direct impact on your website’s performance. Unnecessary components, visual clutter, and poor frontend coding can all increase load times and disrupt website accessibility. Obviously, some of your performance capabilities happen on the backend (e.g., caching, DB Query optimization, MySQL settings, etc.) But your theme still has a sizable effect on how your website performs.

Overly large CSS files, redundant coding for modules, blank spaces, and other issues can all increase time-to-load, create visual issues, and create stop-points for your users. To be clear, performance is a significant component in both lead generation and retention:

  • A 100-millisecond delay drops conversions by 7%.
  • Increasing the number of page elements from 400 to 6,000 drops conversion rates by 95%.
  • 79% of shoppers that encounter a website with poor performance will never return.

Always test out themes for performance. The aesthetic qualities of a website are important, but performance is a necessity.

3. UX

We like to call UX the “hidden performance.” It’s how your users will engage with and consume content throughout your website. The theme you pick will dictate a significant portion of your UX. Before you choose a theme, build out your information architecture strategy, create mockups for UI (or at least find UI examples that you enjoy), and plot out your broad content strategy. Then, choose a theme that complements your strategy and information architecture.

Here’s the most important thing: always evolve your UX. Consider applying agile to your theme-building and selection practices. Even after you select the right theme, constantly make improvements to your UI/UX to breed consistency and customer-centricity. You can purchase a pre-made theme on the Drupal marketplace, but you still need to customize the theme to fit your brand and conform to your UX framework. You don’t want to choose a cookie-cutter theme on the marketplace and fail to maximize its value. Not only will your website look nearly identical to thousands of other Drupal sites, but you also won’t truly build an experience-driven website. Give your customers home-cooked steak and potatoes—not a microwaved frozen dinner.

Are You Looking for the Perfect Drupal Theme?

If you want a theme that’s hyper-branded, built for performance, and created using brand-specific information architecture, you won’t find it on a pre-built theme website. You need to create it. At Mobomo, we help public and private entities create breathtaking Drupal themes specifically for their brand and their users. Let’s build your brand something amazing.

Contact us to learn more.

Jul 08 2020
Jul 08


Businesses and governments build websites for one reason: to provide value to their users. But what if your website was incapable of reaching millions of your users? 25% of Americans live with disabilities. For some of them, the simple act of navigating websites, digesting information, and understanding your content is difficult. Yet, despite brands increasing spending on web design and digital marketing, less than 10% of websites actually follow accessibility standards. Businesses are spending significant money to capture an audience, yet they’re not ensuring that their audience can engage with their website.

It’s a problem—a big one.

You don’t want to exclude customers. It’s bad for business, and it’s bad for your brand. What’s more, accessibility features help improve your SEO, reduce your website complexity, and increase your ability to connect with your loyal audience. But accessibility standards aren’t always baked into the architecture of websites.

Luckily, there are some content management systems (CMS) that let you create hyper-accessible websites without even trying. Drupal comes equipped with a variety of accessibility features — each of which helps make your website more accessible for your customers.

Understanding the Importance of Website Accessibility

Creating an accessible website may sound vague, but there’s already a worldwide standard you can follow. The Web Content Accessibility Guidelines (WCAG) — which is maintained by The World Wide Web Consortium — is the global standard for web accessibility used by companies, governments, and merchants across the world.

Sure! Following the WCAG standard helps you reach a wider audience. But it also keeps you out of legal hot water. Not only has the ADA made it abundantly clear that compliance requires website accessibility, but a United States District Court in Florida has also ruled that WCAG standards are the de facto standards of web accessibility. And there are already cases of businesses getting sued for failing to adhere to them.

  • The DOJ sued H&R Block over its website’s accessibility.
  • WinnDixie.com was sued over its accessibility, and the judge required the company to update its website.
  • The National Museum of Crime and Punishment was required to update its website’s accessibility.

The list goes on. Adhering to WCAG web accessibility standards helps protect your brand against litigation. But, more importantly, it opens doors to millions of customers who need accessibility to navigate and engage with your amazing content.

One-third of individuals over the age of 65 have hearing loss. Around 15% of Americans struggle with vision loss. And millions have issues with mobility. The CDC lists six forms of disability:

  • Mobility (difficulty walking or climbing)
  • Cognition (difficulty remembering, making decisions, or concentrating)
  • Hearing (difficulty hearing)
  • Vision (difficulty seeing)
  • Independent living (difficulty doing basic errands)
  • Self-care (difficulty bathing, dressing, or taking care of yourself)

Web accessibility touches all of those types of disabilities. Screen readers help those who have trouble seeing comprehend websites, but screen readers strip away the CSS layer, so your core content has to be accessible on its own for them to comprehend it. Those with mobility issues may need to use keyboard shortcuts to help them navigate your website. Hearing-impaired individuals may require subtitles and captions. Those with cognitive issues may need your website to be built with focusable elements and good contrast.

There are many disabilities. WCAG creates a unified guideline that helps government entities and businesses build websites that are hyper-accessible to people with a wide range of these disabilities.

Drupal is WCAG-compliant

WCAG is vast. A great starting point is the Accessibility Principles document. But, creating an accessible website doesn’t have to be a time-consuming and expensive process. Drupal has an entire team dedicated to ensuring that their platform is WCAG compliant. In fact, Drupal is both WCAG 2.0 compliant and Authoring Tool Accessibility Guidelines (ATAG 2.0) compliant. The latter deals with the tools developers use to build websites. So, Drupal has accessibility compliance on both ends.

What Accessibility Features Does Drupal Have?

Drupal’s accessibility compliance comes in two forms:

  1. Drupal has built-in compliance features that are native to every install (7+).
  2. Drupal supports and enables the community to develop accessibility modules.

Drupal’s Built-in Compliance Features

Drupal 7+ comes native with semantic markup. To keep things simple, semantic markup helps clarify the context of content. At Mobomo, we employ some of the best designers and website developers on the planet, so we could make bad HTML markup nearly invisible to the average user with rich CSS and superb visuals. But when people use screen readers or other assistive technology, that CSS goes out the window. They’re looking at the core HTML markup, and if it’s not semantic, they may have a difficult time navigating it. With Drupal, markup is automatically semantic — which breeds comprehension for translation engines, search engines, and screen readers.

Drupal’s accessibility page also notes some core changes made to increase accessibility. These include things such as color contrasting. WCAG requires that color contrasting be at least 4.5:1 for normal text and 7:1 for enhanced contrast. Drupal complies with those guidelines. Many other changes are on the developer side, such as drag and drop functions and automated navigation buttons.
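For reference, WCAG defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors. A minimal sketch of that arithmetic (the function name is ours, not a Drupal API):

/**
 * Computes a WCAG contrast ratio from two relative luminances (0 to 1).
 */
function wcag_contrast_ratio(float $lighter, float $darker): float {
  return ($lighter + 0.05) / ($darker + 0.05);
}

// White text (luminance 1.0) on a dark gray background (0.05)
// yields 1.05 / 0.10 = 10.5, which clears both 4.5:1 and 7:1.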

Of course, Drupal also provides developer handbooks, theming guides, and instructional PDFs for developers. Some of the accessibility is done on the developer’s end, so it’s important to work with a developer who leverages accessibility during their design process.

Drupal’s Support for the Accessibility Community

In addition to following WCAG guidelines, Drupal supports community-driven modules that add additional accessibility support.

There are hundreds of these modules. The main thing to remember is that Drupal supports back-end, front-end, and community-driven accessibility. And the project has committed to continuously improving its accessibility capabilities over time. Drupal’s most recent update — the heavily anticipated Drupal 9 — carries on this tradition. Drupal has even announced that Drupal 10 will continue to expand upon accessibility.

Do You Want to Build an Accessible Website?

Drupal is on the cutting edge of CMS accessibility. But it can’t make your site accessible on its own. You need to build your website from the ground up to comply with accessibility standards. A good chunk of the responsibility is in the hands of your developer. Are you looking to build a robust, functional, beautiful, and accessible website?

Contact us. We’ll help you expand your reach.

May 15 2020
May 15
Pierce Lamb · 12 min read

May 15, 2020

This is Part 2 in a two-part series where we detail how to create custom URLs for Drupal Views exposed filters. Part 1 covers how we create, update, and delete these URLs; Part 2 covers how to load and process them.

If you have any questions you can find me @plamb on the Drupal slack chat.

Now that we have the original Exposed Filter paths correlated to custom paths in the path_alias table, let’s look at how to load them in the View and make sure they’re loading the right page when clicked on.

When I first worked on this problem, I was using a contrib module for Views called ‘Better Exposed Filters’ (BEF). I made this choice because I wanted to expose the filters as links (versus a

May 15 2020
May 15
Pierce Lamb · 8 min read

May 15, 2020

Part 1 covers when and how to generate custom URLs; Part 2 covers how to load and process these URLs.

If you have any questions you can find me @plamb on the Drupal slack chat.

A core feature of Drupal is the View. Views are pages that list content on a Drupal website. Despite that simple-sounding description, Views can be complex, especially for beginners. One feature we commonly want on a content listing page is the ability for an end user to filter the displayed content dynamically. For example, a car website might display all cars in a database on a listing page, and an end user might want to click an exposed filter called ‘Tesla’ to show only the Tesla models. Drupal provides this functionality out-of-the-box. Exposed filters in Drupal work by attaching query parameters to the base URL of the View, which the backend can use to appropriately filter the content. For example, if I have a View with the path /analyst-relations that displays content from large technology analysts, one exposed filter might be a link with the title Gartner. The path attached to the Gartner link will look like /analyst-relations?related_firm=5467. This query parameter, ?related_firm=5467, provides all the information Drupal needs to appropriately filter content. However, it is not a very nice-looking, descriptive URL. Ideally, the link associated with the Gartner filter would be something like /analyst-relations/firm/gartner.

I should note now that I am not an SEO expert, and I don’t know for certain whether custom exposed filter links will affect ranking in search engines. However, when I click a link like /analyst-relations/firm/gartner, I have a much better idea of what information will be contained on that page than if I click /analyst-relations?related_firm=5467. Since serving these URLs does not have a high performance cost and they provide a more user-friendly experience, I believe that is reason enough to serve them.

Our goal is to replace all default exposed filter links with custom, descriptive URLs. The first question is, how do we create the custom URLs programmatically? Each URL will need to be unique and based on the content(s) it is related to. One option would be to do this dynamically as a page with exposed filter links is being loaded. Another option is to generate and store the custom URL whenever the relevant content is created/updated/deleted. I preferred the second option as it feels safer, more performant, and Drupal 8/9 comes with the path_alias module which I believe fits this task. I’ll note that this decision is definitely up for debate.

Okay, so we’re going to generate these custom URLs at CRUD time for the relevant content. The quickest way to do that is to implement hook_entity_insert, hook_entity_update, and hook_entity_delete in a custom module. From a technical-debt perspective there may be a better way to do this (e.g., by extending Entity classes), but these hooks will get you to a proof-of-concept the quickest. Every time any Entity is created, updated, or deleted, these hooks are going to fire. If our custom module is called custom_urls, in our custom_urls.module file we would have:

/**
 * Implements hook_entity_insert().
 */
function custom_urls_entity_insert(Drupal\Core\Entity\EntityInterface $entity) {
  _create_or_update_path_alias($entity);
}

/**
 * Implements hook_entity_update().
 */
function custom_urls_entity_update(Drupal\Core\Entity\EntityInterface $entity) {
  _create_or_update_path_alias($entity);
}

/**
 * Implements hook_entity_delete().
 */
function custom_urls_entity_delete(Drupal\Core\Entity\EntityInterface $entity) {
  _delete_path_alias($entity);
}

Inside _create_or_update_path_alias and _delete_path_alias, the first thing we’ll do is narrow down to only the entities we care about, in a helper function called _is_relevant_entity. Exposed filters are often based on TaxonomyTerms or specific Entity bundles. For our example, inside _is_relevant_entity we will narrow to only the Terms and Entity bundle we care about:

function _is_relevant_entity(Drupal\Core\Entity\EntityInterface $entity) {
  $entity_arr = [
    'boolean' => FALSE,
    'old_path' => '',
    'new_path' => '',
  ];
  $maybe_term = $entity instanceof Drupal\taxonomy\Entity\Term;
  if ($maybe_term) {
    // ...
  }
  elseif ($entity->bundle() == 'product') {
    // ...
  }
  return $entity_arr;
}

$entity_arr carries information about whether the Entity is relevant, what its generated exposed filter path is, and what its custom URL will be. If you follow the control structure, you can see we’re going to use it to determine what the boolean value should be; for our example, we care about Terms and Entities of type product. In our proof-of-concept, it would look something like this:

function _is_relevant_entity(Drupal\Core\Entity\EntityInterface $entity) {
  $entity_arr = [
    'boolean' => FALSE,
    'old_path' => '/analyst-relations',
    'new_path' => '',
  ];
  $maybe_term = $entity instanceof Drupal\taxonomy\Entity\Term;
  if ($maybe_term) {
    $relevant_taxonomies = [
      'related_topics' => '/topic?related_topic=',
      'related_companies' => '?related_firm=',
    ];
    $taxonomy_name = $entity->bundle();
    $entity_arr['boolean'] = in_array($taxonomy_name, array_keys($relevant_taxonomies));
    // Only build the old path when the vocabulary is one we care about,
    // to avoid an undefined-index notice for other vocabularies.
    if ($entity_arr['boolean']) {
      $entity_arr['old_path'] .= $relevant_taxonomies[$taxonomy_name] . $entity->id();
    }
  }
  elseif ($entity->bundle() == 'product') {
    $entity_arr['boolean'] = TRUE;
    $entity_arr['old_path'] .= '/product?related_product=' . $entity->id();
  }
  return $entity_arr;
}

As you can see, to get to a POC, I’ve done a lot of hardcoding here. In a fully general and safer solution, we’d load the View and derive the old_path and the values in $relevant_taxonomies from it. However, via hardcoding I’ve generated the exact same paths that the View will create, e.g., /analyst-relations?related_firm=5467. Note that if you don’t generalize this and the query keys or path in the View change (they are customizable), this will stop working.

Okay, so back to our _create_or_update_path_alias function. The beginning will look something like this:

function _create_or_update_path_alias($entity) {
  $raw_entity_arr = _is_relevant_entity($entity);
  if ($raw_entity_arr['boolean']) {
    // Update the path alias with the new URL.
    $clean_entity_arr = _build_custom_url($entity, $raw_entity_arr);
    // ...

We use the boolean key to make sure we have an Entity we care about. Next, we generate the custom URL in _build_custom_url. That function will look like this:

function _get_url_from_regex($title) {
  $replace_whitespace = preg_replace('/\s+/', '-', $title);
  $new_path_caboose = preg_replace('/[^a-zA-Z.-]/', '', $replace_whitespace);
  return $new_path_caboose;
}

function _build_custom_url($entity, $entity_arr) {
  $maybe_product = $entity->bundle() == 'product';
  $raw_entity_url = $entity->url();
  $entity_url_arr = explode('/', $raw_entity_url);
  if ($maybe_product) {
    // It's a product Node.
    $new_path_train = '/analyst-relations/product/';
    if (array_key_exists(2, $entity_url_arr)) {
      $new_path_caboose = $entity_url_arr[2];
    }
    else {
      $new_path_caboose = _get_url_from_regex($entity->label());
    }
  }
  else {
    // It's a taxonomy term.
    $old_path = $entity_arr['old_path'];
    $maybe_firm = strpos($old_path, 'firm') !== FALSE;
    if ($maybe_firm) {
      // Firm filter.
      $new_path_train = '/analyst-relations/firm/';
    }
    else {
      // Topic filter.
      $new_path_train = '/analyst-relations/topic/';
    }
    if (count($entity_url_arr) > 1 && $entity_url_arr[1] !== 'taxonomy') {
      $new_path_caboose = $entity_url_arr[1];
    }
    else {
      $new_path_caboose = _get_url_from_regex($entity->label());
    }
  }
  $new_path = $new_path_train . strtolower($new_path_caboose);
  $entity_arr['new_path'] = $new_path;
  return $entity_arr;
}

In this function, we attempt to create the custom URL from the $entity->url() attached to Products and Taxonomy Terms. If we’re unable to, we pass the $entity->label() through some regexes. I’ve split the regexes in two inside _get_url_from_regex to make it easier to understand what is going on. We take the Entity’s label and replace any whitespace in it with a dash, then strip out any character that isn’t a letter, a dot, or a dash. This produces strings that should work as the end (the caboose) of the new path, replacing the ID number from the old path. Then, whether we have a product or an appropriate taxonomy term, we create the first part (the train) of the custom URL. Again, this has been hardcoded for speed; as above, in a general solution we’d load the View and create these, and if the View’s path changes, this will stop working.
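To see what the helper produces, here is a quick illustration (the input title is made up):

// Whitespace becomes dashes, then anything that isn't a letter,
// a dot, or a dash is stripped (the '&' disappears below).
print _get_url_from_regex('Gartner & Forrester Research');
// Prints "Gartner--Forrester-Research"; _build_custom_url() then
// lowercases it into a caboose like "gartner--forrester-research".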

Okay so now we have an array that verifies we have a correct Entity, its old exposed filter path and the new custom path we want it to have. Now we are going to use the entityTypeManager() to query the path_alias table. Let’s view some more of the _create_or_update_path_alias function:

function _create_or_update_path_alias($entity) {
  $raw_entity_arr = _is_relevant_entity($entity);
  if ($raw_entity_arr['boolean']) {
    // Update the path alias with the new URL.
    $clean_entity_arr = _build_custom_url($entity, $raw_entity_arr);
    $old_path = $clean_entity_arr['old_path'];
    $new_path = $clean_entity_arr['new_path'];
    $path_alias_conn = \Drupal::entityTypeManager()->getStorage('path_alias');
    $new_path_already_exists = $path_alias_conn->loadByProperties(['alias' => $new_path]);
    if (empty($new_path_already_exists)) {
      $maybe_path_alias = $path_alias_conn->loadByProperties(['path' => $old_path]);
      if (empty($maybe_path_alias)) {
        // Create path alias.
      }
      elseif (count($maybe_path_alias) == 1) {
        // Update path alias.
      }
      else {
        // We've somehow returned more than one result for the old path. Something is wrong.
        \Drupal::logger('custom_urls')->notice("The path: " . $old_path . ", is returning more than one result in path_alias");
      }
    }
    else {
      \Drupal::logger('custom_urls')->notice("The generated path: " . $new_path . ", already exists in path_alias. An entity with an identical title was likely created");
    }
  }
}

So we get the connection to the path_alias table. First, we test whether the $new_path (the custom URL) already exists there. If it does, we don’t do anything except send a message to the logger so we’re aware that the current Entity is trying to create a custom URL that already exists. Then we check whether the $old_path (the generated exposed filter path) is already in the path_alias table (because these paths contain the entity’s ID, they should only conflict on the rare chance that, say, a Node and a Term on the same View have the same ID). If it is not there, we create a new path_alias entry using the $old_path and $new_path; if it comes back with one result, then it’s an update and we set the $new_path; otherwise we’ve somehow returned more than one result for the $old_path and we notify the logger. Here is the function completely filled out:

function _create_or_update_path_alias($entity) {
  $raw_entity_arr = _is_relevant_entity($entity);
  if ($raw_entity_arr['boolean']) {
    // Update the path alias with the new URL.
    $clean_entity_arr = _build_custom_url($entity, $raw_entity_arr);
    $old_path = $clean_entity_arr['old_path'];
    $new_path = $clean_entity_arr['new_path'];
    $path_alias_conn = \Drupal::entityTypeManager()->getStorage('path_alias');
    $new_path_already_exists = $path_alias_conn->loadByProperties(['alias' => $new_path]);
    if (empty($new_path_already_exists)) {
      $maybe_path_alias = $path_alias_conn->loadByProperties(['path' => $old_path]);
      if (empty($maybe_path_alias)) {
        // Create path alias.
        $new_path_ent = $path_alias_conn->create([
          'path' => $old_path,
          'alias' => $new_path,
          'langcode' => \Drupal::languageManager()->getCurrentLanguage()->getId(),
        ]);
        $new_path_ent->save();
        // Add new URL to cache.
        _cache_fancy_url($old_path, $new_path);
      }
      elseif (count($maybe_path_alias) == 1) {
        // Update path alias.
        $path_alias_obj = reset($maybe_path_alias);
        $path_alias_obj->set('alias', $new_path);
        $path_alias_obj->save();
        // Drop old URL from cache and add new one.
        _cache_fancy_url($old_path, $new_path);
      }
      else {
        // We've somehow returned more than one result for the old path. Something is wrong.
        \Drupal::logger('custom_urls')->notice("The path: " . $old_path . ", is returning more than one result in path_alias");
      }
    }
    else {
      \Drupal::logger('custom_urls')->notice("The generated path: " . $new_path . ", already exists in path_alias. An entity with an identical title was likely created");
    }
  }
}

But wait, another function snuck in there: _cache_fancy_url($old_path, $new_path). In Part 2 of this series, we will look at how to load and process the custom URLs; doing this from the cache is definitely the fastest way, so we create/modify cache entries here. For clarity, here is that function:

function _cache_fancy_url($old_path, $new_path) {
  $default_cache = \Drupal::cache();
  $old_path_result = $default_cache->get($old_path);
  if ($old_path_result !== FALSE) {
    // Old path in cache; likely a Term or Product has been modified.
    // Delete the old entry.
    $default_cache->delete($old_path);
  }
  // Add the new entry.
  $default_cache->set($old_path, $new_path, Drupal\Core\Cache\CacheBackendInterface::CACHE_PERMANENT);
}

Caching here isn’t critical: when we load the custom URLs, any that aren’t in the cache (say, after a cache flush) will be set there at that point. But for the extra performance a five-line function imparts, it is worth it.

The delete hook is very similar to the first two. I’ll paste it here; if you’ve read the above, not much explanation is needed:

function custom_urls_entity_delete(Drupal\Core\Entity\EntityInterface $entity) {
  $raw_entity_arr = _is_relevant_entity($entity);
  if ($raw_entity_arr['boolean']) {
    // Delete the associated path alias.
    $old_path = $raw_entity_arr['old_path'];
    $path_alias_conn = \Drupal::entityTypeManager()->getStorage('path_alias');
    $maybe_path_alias = $path_alias_conn->loadByProperties(['path' => $old_path]);
    if (count($maybe_path_alias) == 1) {
      $path_alias_conn->delete($maybe_path_alias);
      _delete_from_cache($maybe_path_alias);
    }
    else {
      \Drupal::logger('custom_urls')
        ->notice("The path: " . $old_path . ", was set to delete from path_alias, but it returned " . count($maybe_path_alias) . " results");
    }
  }
}

So now every time an Entity we care about in our View with exposed filters is created/updated/deleted we are also creating/updating/deleting and caching its associated custom URL. I prefer this way of creating the custom URLs versus creating them dynamically when the page loads as I feel that executing this extra code at entity CRUD time is more performant than at page load. While I know path_alias was intended for URLs like /node/1, I feel that this usage of the path_alias table matches its general intention: to provide nice aliases for non-nice paths.

We are one big step closer to custom URLs on a View with exposed filters, check out Part 2 to see how to load and process these custom URLs.

Apr 23 2020
Apr 23
Todd Ross Nienkerk

CEO, Owner, and Co‑Founder

Todd is responsible for driving Four Kitchens’ vision and long-term strategy.

April 23, 2020

We’ve been making big websites for 14 years, and almost all of them have been built on Drupal. It’s no exaggeration to say that Four Kitchens owes its success to the incredible opportunities Drupal has provided us. There has never been anything like Drupal and the community it has fostered—and there may never be anything like it ever again.

That’s why it’s crucial we do everything we can to support the Drupal Association. Especially now.

The impacts of COVID-19 have been felt everywhere, especially at the Association. With the cancellation of DrupalCon Minneapolis, the Drupal Association lost a major source of annual fundraising. Without the revenue from DrupalCon, the Association would not be able to continue its mission to support the Drupal project, the community, and its growth.

The Drupal community’s response to this crisis was tremendous. For our part, we proudly joined 27 other organizations in pledging our sponsorship fees to the Association regardless of whether, or how, DrupalCon happened. I ensured my Individual Membership was still active, and I made a personal contribution.

But we need to do more.

You can help by joining us in the #DrupalCares campaign.

The #DrupalCares campaign is a fundraiser to protect the Drupal Association from the financial impact of COVID-19. Your support will help keep the Drupal Association strong and able to continue accelerating the Drupal project.

The Drupal Association

The outpouring of support has been… inspiring. First, project founder Dries Buytaert and his partner Vanessa Buytaert pledged their generous support of $100,000. Then, a coalition of Drupal businesses pledged even more matching contributions. We are proud to count ourselves among the dozens of participating Drupal businesses.

Any individual donations, increased memberships, or new memberships through the end of April will be tripled by these matching pledges, up to $100,000, for a total of $300,000.

Please join us in supporting the Drupal Association. Your contribution will help ensure the continued success of the Association and the Drupal community for years to come.

Give to #DrupalCares through April to help the Association receive a 3:1 matching contribution. 


Apr 01 2020
Apr 01


Back in 2013, when I first joined Mobomo, we migrated NASA.gov from a proprietary content management system (CMS) to Amazon Cloud and Drupal 7. It goes without saying, but there was a lot riding on getting it right. The NASA site had to handle high traffic and page views each day, without service interruptions, and the new content management system had to accommodate a high volume of content updates each day. In addition to having no room for compromise on performance and availability, the site also had to have a high level of security. 

Maybe the biggest challenge, though, was laying the groundwork to achieve NASA’s vision for a website with greater usability and enhanced user experiences. If NASA’s audience all fell into the same demographic, that goal probably wouldn’t have seemed so intimidating, but NASA’s audience includes space fans who range from scientists to elementary school kids. 

Our mission was to create a mobile-first site that stayed true to NASA’s brand and spoke to all of the diverse members of its audience. A few years later, we relaunched a user-centric site that directed visitors from a dynamic home page to microsites designed specifically for them.

Making Space Seem Not So Far Away

NASA.gov includes data on its missions, past and present. To make this massive amount of data more user-friendly, we worked with NASA to design a site that’s easily searchable, navigable, and enhanced through audio, video, social media feeds, and calendars. Users can find updates on events via features such as the countdown clock to the International Space Station’s 20th anniversary. NASA.gov users can also easily find what they need if they want to research space technology, stream NASA TV, or explore image galleries. 

The NASA.gov site directs its younger visitors to a STEM engagement microsite where students can find activities appropriate for their grade level. The site also includes the NASA Kids’ Club where students can have some fun while they’re learning about exploration. For example, they can try their hands at virtually driving a rover on Mars, play games, and download activities. 

Older students with space-related aspirations can learn about internship and career opportunities, and teachers can access lesson plans and STEM resources.

How to Make it Happen

To successfully achieve NASA’s goals and manage a project this complex, we had to choose the right approach. Some website projects are tailor-made for a simple development plan that moves from a concept to design, construction, testing, and implementation in a structured, linear way. The NASA.gov project, however, wasn’t one of them.

For this website and the vast majority of the sites we develop, our team follows DevOps methodology. With DevOps, you don’t silo development from operations. Our DevOps culture brings together all stakeholders to collaborate throughout the process to achieve:

Faster Deployment

If we had to build the entire site then take it live, it would have taken much longer for NASA and its users to have a new resource. We built the site in stages, validating at every stage. By developing in iterations, and involving the entire team, we also have the ability to address small issues rather than waiting until they create major ones. It also gives us more agility to address changes and keep everyone informed. This prevents errors that could put the brakes on the entire project.

Optimized Design

NASA.gov has several Webby Awards, and award-winning web design takes a team that works together and collaborates with the organization to define the audience (or audiences), optimize the site’s navigation and usability, and strike a balance between the site’s primary purpose and its appeal. 

Mobile-First

Because NASA.gov users may be accessing the site from a PC, laptop, tablet, smartphone, or other device, it was also pivotal to use mobile-first design. Mobile-first means designing for the smallest screens first and then working your way up to larger screens. This approach forces you to build a strong foundation first, then enhance it as screen sizes increase. It basically allows you to ensure user experiences are optimized for any size device.

Scalability

NASA.gov wasn’t only a goliath website when we migrated it to Amazon Cloud and Drupal. We knew it would continue to grow. Designing the site with microsites that organize content, help visitors find the content that is most relevant to their interests, and enhance usability and UX informed a plan for future growth. 

Efficient Development Processes

DevOps Methodology breaks down barriers between developers and other stakeholders, automates processes, makes coding and review processes more efficient, and enables continuous testing. Even though we work in iterations, our team maintains a big-picture view of projects, such as addressing integrations, during the development process. 

Planned Post-Production

DevOps also helps us cover all the bases to prepare for launch and to build in management tools for ongoing site maintenance. 

What Your Business Can Learn from NASA

You probably never thought about it, but your business or organization has a lot in common with NASA, at least when it comes to your website. Just like NASA, you need a website that gives you the ability to handle a growing digital audience, reliably and securely. You’re probably also looking for the best CMS for your website, one that’s cost-effective and gives you the features you need.

Your website should also be designed to be usable and to provide the user experiences your audience wants. And, with the number of mobile phone users in the world topping 5 billion, you want to make sure their UX is optimized with mobile-first design. 

NASA’s project is also an illustration of how building your website in stages, getting input from all stakeholders, and validating and testing each step of the way can lead to great results. You also need a plan for launching the site with minimal disruption and tools that will make ongoing management and maintenance easier. 

You probably want to know you are doing everything you can to make your content appealing, engaging, and interactive. You may think NASA has an advantage in that department since NASA’s content is inherently exciting to its audience.

But so is yours. Create a website that showcases it. Not sure where to begin? Click here and we’ll point you in the right direction.

Mar 12 2020
Mar 12

Category 1: Web development

Government organizations want to modernize and build web applications that make it easier for constituents to access services and information. Vendors in this category might work on improving the functionality of search.mass.gov, creating benefits calculators using React, adding new React components to the Commonwealth’s design system, making changes to existing static sites, or building interactive data stories.

Category 2: Drupal

Mass.gov, the official website of the Commonwealth of Massachusetts, is a Drupal 8 site that links hundreds of thousands of weekly visitors to key information, services, and other transactional applications. You’ll develop modules to enhance and stabilize the site; build out major new features; and iterate on content types so that content authors can more easily create innovative, constituent-centered services.

Category 3: Data architecture and engineering

State organizations need access to large amounts of data that’s been prepared and cleaned for decision-makers and analysts. You’ll take in data from web APIs and government organizations, move and transform it to meet agency requirements using technology such as Airflow and SQL, and store and manage it in PostgreSQL databases. Your work will be integral in helping agencies access and use data in their decision making.

Category 4: Data analytics

Increasingly, Commonwealth agencies are using data to inform their decisions and processes. You’ll analyze data with languages such as Python and R, visualize it for stakeholders in business intelligence tools like Tableau, and present your findings in reports for both technical and non-technical audiences. You’ll also contribute to the state’s use of web analytics to improve online applications and develop new performance metrics.

Category 5: Design, research, and content strategy

Government services can be complex, but we have a vision for making access to those services as easy as possible. Bidders for this category may work with partner agencies to envision improvements to digital services using journey mapping, user research, and design prototyping; reshape complex information architecture; help transform technical language into clear, public-facing content; and translate constituent feedback into new and improved website and service designs.

Category 6: Operations

You’ll monitor the system health for our existing digital tools to maintain uptime and minimize time-to-recovery. Your DevOps work will also create automated tests and alerts so that technical interventions can happen before issues disrupt constituents and agencies. You’ll also provide expert site reliability engineering advice for keeping sites maintainable and building new infrastructure. Examples of applications you’ll work on include Mass.gov, search.mass.gov, our analytics dashboarding platform, and our logging tool.

Jan 30 2020
Jan 30

Recently, we were asked if we could integrate some small, one-page websites into an existing Drupal website. This would not only make it easier to manage those different websites and their content, but also reduce the hosting and maintenance costs.

In this article, I will discuss how we tackled this problem, improved the content management experience, and implemented it all using Drupal best practices.

First, some background information: America’s Promise is an organization that launches many national campaigns that focus on improving the lives and futures of America’s youth. Besides their main website (www.americaspromise.org), they also had separate websites and domain names for some of these campaigns, e.g. www.everyschoolhealthy.org.

We came up with a Drupal solution where they could easily configure and manage their campaigns. In addition to the convenience of managing all these campaigns from one admin panel, they could also reference content items easily from their main website or other campaigns by tagging the content with specific taxonomy terms (keywords).

We created a new content type, “Campaign,” with many custom paragraph types as the building blocks for creating a new campaign. We wanted this to be as easy as possible for the content editors but also give enough freedom that every campaign can have its own branding by selecting a font color and a background image, color, or video.

Below are some of the paragraph types we created:

  • Hero
  • Column Layout
  • WYSIWYG
  • Latest News
  • Newsletter Signup
  • Twitter Feed
  • Video Popup
  • Community Partners
  • Latest Resources
  • Grantee Spotlight
  • Statistics Map
  • Partner Spotlight
  • Media Mentions

These paragraphs offer lots of flexibility to create unique and interactive campaigns, and they can be reordered however you’d like via drag and drop.

Below is a screenshot of some of these paragraph types in action, showing how easily they can be configured on the backend.

Every School Healthy paragraphs

Below you can see what the “Hero” paragraph looks like in the admin panel. The editor enters a tagline, chooses a font color, uploads a logo, an optional background image or video, and a background overlay color with optional opacity.

Campaign Builder Hero Backend

As you can see in the above screenshot, this is a very basic paragraph type, but it shows the flexibility in customizing the building blocks for the campaign. We also created more complex paragraph types that required quite a bit of custom development.

One of the more complicated paragraph types we created is a statistics map. America’s Promise uses national and state statistics to educate and strengthen its campaign causes.

Campaign Builder Statistics Map

The data for this map comes from a Google Sheet. All necessary settings can be configured in the backend system. Users can then view these state statistics by hovering over the map or see even more details by clicking on an individual state.

Campaign Builder Statistics Map Backend

Some other interesting paragraph types we created are:

  • Twitter Feed, where the editors can specify a certain #hashtag and the tweets will display in a nice masonry layout
  • Newsletter Signup, editors can select what newsletter campaign the user signs up for
  • Latest News/Resources, editors can select the taxonomy term they want to use to filter the content on

Time to dive into some of the more technical approaches we took. The campaign builder we developed for America’s Promise depends on several Drupal contrib modules:

  • paragraphs
  • bg_image_formatter
  • color_field
  • video
  • masonry (used for the Twitter Feed)

Font color and background image/color/video don’t need any custom code; those can be accomplished using the above modules and configuring the correct CSS selectors on the paragraph display:

Campaign Builder Hero Display

In our custom campaign builder module, we have several custom Entities, Controllers, Services, Forms, REST resources and many twig template files. Still, the module mainly consists of custom field formatters and custom theme functions.

Example: the “Latest News” paragraph only has one field where the editor can select a taxonomy term. With a custom field formatter, we will display this field as a rendered view instead. We pass the selected term as an argument to the Latest News view, execute the view and display it with a custom #theme function.
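As a rough sketch of that “Latest News” formatter idea, the plugin below renders the selected term through a view. The module name, plugin ID, and view name are hypothetical; views_embed_view() is the core Views helper.

<?php

namespace Drupal\campaign_builder\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;

/**
 * Renders a selected taxonomy term as an embedded view.
 *
 * @FieldFormatter(
 *   id = "campaign_latest_news",
 *   label = @Translation("Latest News view"),
 *   field_types = {"entity_reference"}
 * )
 */
class LatestNewsFormatter extends FormatterBase {

  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = [];
    foreach ($items as $delta => $item) {
      // Pass the selected term ID as a contextual argument to the
      // (hypothetical) latest_news view and render the result.
      $elements[$delta] = views_embed_view('latest_news', 'default', $item->target_id);
    }
    return $elements;
  }

}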

Conclusion

By leveraging the strength of Paragraphs, other contrib modules, and some custom code, we were able to create a reusable and intuitive campaign builder where ease of content management was a priority, without limiting the design or branding of each campaign.

Several campaigns that are currently live and built with our campaign builder:

Could your organization benefit from having your own custom campaign builder and want to see more? Contact us for a demo.

Jan 23 2020
Jan 23
Allan Chappell

Senior Support Lead

Allan brings technological know-how and grounds it with some simple country living. His interests include DevOps, animal husbandry (raising rabbits and chickens), hiking, and automated testing.

January 23, 2020

In the Drupal support world, working on Drupal 7 sites is a necessity. But switching between Drupal 7 and Drupal 8 development can be jarring, if only for the coding style.

Fortunately, I’ve got a solution that makes working in Drupal 7 more like working in Drupal 8. Use this three-part approach to have fun with Drupal 7 development:

  • Apply Xautoload to keep your PHP skills fresh, modern, and compatible with all frameworks and make your code more reusable and maintainable between projects.
  • Use the Drupal Libraries API to use third-party libraries.
  • Use the Composer template to push the boundaries of your programming design patterns.

Applying Xautoload

Xautoload is a module that enables PSR-0/4 autoloading. Using it is as simple as downloading and enabling the module. You can then start using use and namespace statements to write object-oriented programming (OOP) code.

For example:

xautoload.info

name = Xautoload Example
description = Example of using Xautoload to build a page
core = 7.x
package = Midcamp Fun

dependencies[] = xautoload:xautoload

xautoload_example.module

<?php

use Drupal\xautoload_example\SimpleObject;

/**
 * Implements hook_menu().
 */
function xautoload_example_menu() {
  $items['xautoload_example'] = array(
    'page callback' => 'xautoload_example_page_render',
    'access callback' => TRUE,
  );
  return $items;
}

function xautoload_example_page_render() {
  $obj = new SimpleObject();
  return $obj->render();
}

src/SimpleObject.php

 "

Hello World

", ); } }

Enabling and running this code causes the URL /xautoload_example to spit out “Hello World”.

You’re now ready to add in your own OOP!

Using third-party libraries

Natively, Drupal 7 has a hard time autoloading third-party library files. But there are contributed modules out there (like Guzzle) that wrap object-oriented third-party libraries to provide a functional interface. Now that you have Xautoload in your repertoire, you can use its functionality to autoload libraries as well.

I’m going to show you how to use the Drupal Libraries API module with Xautoload to load a third-party library. You can find examples of all the different ways you can add a library in xautoload.api.php. I’ll demonstrate an easy example by using the php-loremipsum library:

1. Download your library and store it in sites/all/libraries. I named the folder php-loremipsum.

2. Add a function implementing hook_libraries_info to your module by pulling in the namespace from Composer. This way, you don’t need to set up all the namespace rules that the library might contain.

function xautoload_example_libraries_info() {
  return array(
    'php-loremipsum' => array(
      'name' => 'PHP Lorem Ipsum',
      'xautoload' => function ($adapter) {
        $adapter->composerJson('composer.json');
      }
    )
  );
}

3. Change the page render function to use the php-loremipsum library to build content.

use joshtronic\LoremIpsum;
function xautoload_example_page_render() {
  $library = libraries_load('php-loremipsum');
  if ($library['loaded'] === FALSE) {
    throw new \Exception("php-loremipsum didn't load!");
  }
  $lipsum = new LoremIpsum();
  return array(
    '#markup' => $lipsum->paragraph('p'),
  );
}

Note that I needed to tell the Libraries API to load the library, but I then have access to all the namespaces within the library. Keep in mind that the dependencies of some libraries are immense. You’ll very likely need to run Composer from within the library and commit the result when you first start out. In such cases, you might need to make sure to include the Composer autoload.php file.
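If you do hit that case, one way to handle it, assuming the library’s vendor directory was committed inside the library folder, is to require its autoloader before using it:

// Pull in the library's own Composer autoloader so its dependencies
// resolve; the exact path depends on where you committed vendor/.
$autoload = DRUPAL_ROOT . '/sites/all/libraries/php-loremipsum/vendor/autoload.php';
if (file_exists($autoload)) {
  require_once $autoload;
}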

Another tip: abstract your libraries_load() functionality out in such a way that if the class you want already exists, you don’t call libraries_load() again. Doing so removes Libraries as a hard dependency of your module and enables you to use Composer to load the library later on with no more work on your part. For example:

function xautoload_example_load_library() {
  if (!class_exists('\joshtronic\LoremIpsum', TRUE)) {
    if (!module_exists('libraries')) {
      throw new \Exception('Include php-loremipsum via composer or enable libraries.');
    }
    $library = libraries_load('php-loremipsum');
    if ($library['loaded'] === FALSE) {
      throw new \Exception("php-loremipsum didn't load!");
    }
  }
}

And with that, you’ve conquered the challenge of using third-party libraries!

Setting up a new site with Composer

Speaking of Composer, you can use it to simplify the setup of a new Drupal 7 site. Just follow the instructions in the Readme for the Composer Template for Drupal Project. From the command line, run the following:

composer create-project drupal-composer/drupal-project:7.x-dev --no-interaction

This code gives you a basic site with a source repository (a repo that doesn’t commit contributed modules and libraries) to push up to your Git provider. (Note that migrating an existing site to Composer involves a few additional considerations and steps, so I won’t get into that now.)

If you’re generating a Pantheon site, check out the Pantheon-specific Drupal 7 Composer project. But wait: The instructions there advise you to use Terminus to create your site, and that approach attempts to do everything for you—including setting up the actual site. Instead, you can simply use composer create-project to test your site in something like Lando. Make sure to run composer install if you copy down a repo.

From there, you need to enable the Composer Autoload module, which is automatically required in the composer.json you pulled in earlier. Then, add all your modules to the require portion of the file or use composer require drupal/module_name just as you would in Drupal 8.

You now have full access to all the Packagist libraries and can use them in your modules. To use the previous example, you could remove php-loremipsum from sites/all/libraries and instead run composer require joshtronic/php-loremipsum. The code would then run the same as before.

From here on out, it’s up to your imagination. Code and implement with ease, using OOP design patterns and reusable code. You just might find that this new world of possibilities for integrating new technologies with your existing Drupal 7 sites increases your productivity as well.


Jul 30 2019
Jul 30

Testing integrations with external systems can sometimes prove tricky. Services like Acquia Lift & Content Hub need to make connections back to your server in order to pull content. Testing this requires that your environment be publicly accessible, which often precludes testing on your local development environment.

Enter ngrok

As mentioned in Acquia’s documentation, ngrok can be used to facilitate local development with Content Hub. Once you install ngrok on your development environment, you can use the ngrok client to create an instant, secure URL for your local development environment that allows traffic to connect from the public internet. This can be used for integrations such as Content Hub testing, or even for letting someone remote view in-progress work on your local environment from anywhere in the world, without the need for a screen share. You can also open this URL on mobile devices like your phone or tablet and easily test your local development work on other devices.
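For example, if your local site is served over plain HTTP on port 80, a single command creates the tunnel (adjust the port to whatever your local web server listens on):

ngrok http 80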

After starting the client, you’ll be provided the public URL you can plug into your integration for testing. You’ll also see a console where you can observe incoming connections.


Jun 20 2019
Jun 20

We’ve been starting many of our projects using Acquia’s Lightning distribution. It gives us a good, consistent starting point and helps speed development through early adoption of features that are still in the works for Drupal 8. Like other distributions, Lightning bundles Drupal core with a set of contributed modules and pre-defined configuration.

While Lightning is a great base to start from, sometimes you want to deviate from the path it provides. Say, for example, you want to use a Paragraphs-based system for page components, your client has a fairly complex custom publishing workflow, and you also have different constraints for managing roles. Out of the box, Acquia Lightning has a number of features you may find yourself in conflict with. Lightning Layout provides a landing page content type that may not fit the needs of the site. Lightning Roles has a fairly hard-coded set of assumptions for role generation. And while it is a good solution for many sites, Lightning Workflow may not always be the right fit.

You may find yourself tempted to uninstall these modules and delete the configuration they brought to the party, but things are not always that simple. Because of the inter-relationships and dependencies involved, simply uninstalling these modules may not be possible. Usually everything looks fine until it comes time for a deployment, when things fall apart quickly.

This is where sub-profiles can save the day. By creating a sub-profile of Acquia Lightning you can tweak Lightning’s out-of-the-box behavior and include or exclude modules to fit your needs. Sub-profiles inherit all of the code and configuration from the base profile they extend. This gives the developer the ability to take an install profile like Acquia Lightning and tweak it to fit her project’s needs. Creating a sub-profile can be as easy as defining it via a *.info.yml file.

In our example above, you may create a sub-profile like this:

name: 'example_profile'
type: profile
description: 'Lightning sub-profile'
core: '8.x'
base profile: lightning
themes:
  - mytheme
  - seven
install:
  - paragraphs
  - lightning_media
  - lightning_media_audio
  - lightning_media_video
exclude:
  - lightning_roles
  - lightning_page
  - lightning_layout
  - lightning_landing_page

This profile includes dependencies we’re going to want, like Paragraphs – and excludes the things we want to manage ourselves. This helps ensure that when it comes time for deployment, you should get what you expect. You can create a sub-profile yourself by adding a directory and info.yml file in the “profiles” directory, or, if you have Drupal Console and you’re using Acquia Lightning, you can follow Acquia’s instructions: Lightning ships a Drupal Console command that will walk you through a wizard to pick and choose the modules you’d like to exclude.

Once you’ve created your new sub-profile, you can update your existing site to use it. First, edit your settings.php and update the ‘install_profile’ setting.

$settings['install_profile'] = 'example_profile';

Then, use Drush to make the profile active.

drush cset core.extension module.example_profile 0

Once your profile is active and in-use, you can export your configuration and continue development.

Jun 11 2019
Jun 11

Every once in a while you have those special pages that require a little extra something. Some special functionality, just for that page. It could be custom styling for a marketing landing page, or a third party form integration using JavaScript. Whatever the use case, you need to somehow sustainably manage JavaScript or CSS for those pages.

Our client has some of these special pages: pages that live outside the standard workflow and component library and require their own JS and CSS to pull them together. Content authors want to be able to manage these bits of JavaScript and CSS on a page-by-page basis. Ideally, these pages would go through the standard development and QA workflow before code makes it out to production. Or perhaps you need to work in the opposite direction, giving the content team the ability to create snippets in production, then capturing those changes and pulling them back into the development pipeline in future deployments.

This is where Drupal 8’s Configuration Entities become interesting. To tackle this problem, we created a custom config entity to capture these code “snippets”. This entity gives you the ability to enter JavaScript or CSS into a text area or to paste in the URL to an externally hosted resource. It then gives you a few choices on how to handle the resulting Snippet. Is this JavaScript, or CSS? Do you want to scope the JavaScript to the Footer or the Header? Should we wrap the JavaScript in a Drupal Behavior?

Once the developer makes her selections and hits submit, the system looks at the submitted configuration and if it’s not an external resource, it writes a file to the filesystem of the Drupal site.

Now that you’ve created your library of Snippets, you can make use of them in your content. From your Content Type, Paragraph, or other content entity, simply create a new reference field. Choose “Other”, then on the next page scroll through the entity type list until you get to the configuration section and select JSnippet. Your content creators will then have access to the Snippets when creating content.

By providing our own custom field formatter for entity reference fields, we’re able to alter how a snippet is rendered on the final page. When the Snippet reference field is rendered, the custom field formatter loads the referenced configuration entity and uses its data, together with our dynamically generated library info, to attach the relevant JavaScript or CSS library to the render array. During final rendering, this results in the JavaScript or CSS library being added to the page, within its proper scope.
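As a rough sketch, the heart of such a formatter’s viewElements() method might look like the following. This would live inside a field formatter plugin extending Drupal\Core\Field\FormatterBase (with FieldItemListInterface imported from Drupal\Core\Field); the ‘jsnippet’ entity type ID and the one-library-per-snippet naming are assumptions for illustration, not the module’s exact API:

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = [];
    foreach ($items as $delta => $item) {
      // Load the referenced Snippet configuration entity.
      $snippet = \Drupal::entityTypeManager()
        ->getStorage('jsnippet')
        ->load($item->target_id);
      if ($snippet) {
        // Attach the snippet's dynamically generated library so the JS or
        // CSS lands on the page in its configured scope.
        $elements[$delta] = [
          '#attached' => [
            'library' => ['jsnippet/' . $snippet->id()],
          ],
        ];
      }
    }
    return $elements;
  }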

Because these snippets are configuration entities, they can be captured and exported with the site’s configuration. This allows them to be versioned and deployed through your standard deployment process. When the deployed configuration is integrated, the library is built up and any JS or CSS is written to the file system.

Want to try it out? Head on over to Drupal.org and download the JSnippet module. If you have any questions or run into any issues just let us know in the issue queue.

Apr 19 2019
Apr 19

What we learned from our fellow Drupalists

Lisa Mirabile, Massachusetts Digital Service

On April 7th, our team packed up our bags and headed off to Seattle for one of the bigger can’t-miss learning events of the year, DrupalCon.

“Whether you’re C-level, a developer, a content strategist, or a marketer — there’s something for you at DrupalCon.” -https://events.drupal.org/

As you may have read in one of our more recent posts, we had a lot of sessions that we couldn’t wait to attend! We were very excited to find new ideas that we could bring back to improve our services for constituents or the agencies we work with to make digital interactions with government fast, easy, and wicked awesome. DrupalCon surpassed our already high expectations.

At the Government Summit, we were excited to speak with other state employees who are interested in sharing knowledge, including collaborating on open-source projects. We wanted to see how other states are working on problems we’ve tried to solve and to learn from their solutions to improve constituents’ digital interactions with government.

One of the best outcomes of the Government Summit was an amazing “birds of a feather” (BOF) talk later in the week. North Carolina’s Digital Services Director Billy Hylton led the charge for digital teams across state governments to choose a concrete next step toward collaboration. At the BOF, more than a dozen Massachusetts, North Carolina, Georgia, Texas, and Arizona digital team members discussed, debated, and chose a content type (“event”) to explore. Even better, we left with a meeting date to discuss specific next steps on what collaborating together could do for our constituents.

The learning experience did not stop at the GovSummit. Together, our team members attended dozens of sessions. For example, I attended a session called “Stanford and FFW — Defaulting to Open” since we are starting to explore what open-sourcing will look like for Mass.gov. The Stanford team’s main takeaway was the tremendous value they’ve found in building with and contributing to Drupal. Quirky fact: their team discovered during user testing among high-school students that “FAQ” is completely mysterious to younger people: they expect the much more straightforward “Questions” or “Help.”

Another session I really enjoyed was called “Pattern Lab: The Definitive How-to.” It was exciting to hear that Pattern Lab, a tool for creating design systems, has officially merged its two separate cores into a single one that supports all existing rendering engines. This means simplifying the technical foundation to allow more focus on extending Pattern Lab in new and useful ways (and less just keeping it up and running). We used Pattern Lab to build Mayflower, the design system created for the Commonwealth of Massachusetts and implemented first on Mass.gov. We are now looking at the best ways to offer the benefits of Mayflower — user-centeredness, accessibility, and consistent look and feel — to more Commonwealth digital properties. Some team members had a chance to talk later to Evan Lovely, the speaker and one of the maintainers of Pattern Lab, and were excited by the possibility of further collaboration to implement Mayflower in more places.

My peers and I enjoyed a variety of other informative sessions as well, too many to list here.

Our exhibit hall booth at DrupalCon 2019. Talking to fellow Drupalists at our booth.

On Thursday we started bright and early to unfurl our Massachusetts Digital Service banner and prepare to greet fellow Drupalists at our booth! We couldn’t have done it without our designer, who put all of our signs together for our first time exhibiting at DrupalCon. (Thanks, Eva!)

It was remarkable to be able to talk with so many bright minds in one day. Our one-on-one conversations took us on several deep dives into the work other organizations are doing to improve their digital assets. Meeting so many brilliant Drupalists made us all the more excited to share some opportunities we currently have to work with them, such as the ITS74 contract to work with us as a vendor, or our job opening for a technical architect.

We left our table briefly to attend Mass.gov: A Guide to Data-Informed Content Optimization, where team members Julia Gutierrez and Nathan James shared how government agencies in Massachusetts are now making data-driven content decisions. Watch their presentation to learn:

  1. How we define wicked awesome content
  2. How we translate indicators into actionable metrics
  3. The technology stack we use to empower content authors

To cap it off, Mass.gov, with partners Last Call Media and Mediacurrent, won Best Theme for our custom admin theme at the first-ever Global Splash awards (established to “recognize the best Drupal projects on the web”)! An admin theme is the look and feel that users see when they log in. The success of Mass.gov rests in the hands of all of its 600+ authors and editors. We’ve known from the start of the project that making it easy and efficient to add or edit content in Mass.gov was key to the ultimate goal: a site that serves constituents as well as possible. To accomplish this, we decided to create a custom admin theme, launched in May 2018.

A before-and-after view of our admin theme

Our goal was not just a nicer look and feel (though it is that!), but a more usable experience. For example, we wanted authors to see help text before filling out a field, so we brought it up above the input box. And we wanted to help them keep their place when navigating complicated page types with multiple levels of nested information, so we added vertical lines to tie together items at each level.

Last Call Media founder Kelly Albrecht crosses the stage to accept the Splash award for Best Theme on behalf of the Mass.gov team. All the Splash award winners!

It was a truly enriching experience to attend DrupalCon and learn from the work of other great minds. Our team has already started brainstorming how we can improve our products and services for our partner agencies and constituents. Come back to our blog weekly to check out updates on how we are putting our DrupalCon lessons to use for the Commonwealth of Massachusetts!


Apr 05 2019
Apr 05

In this project we built a collection of components using a combination of Paragraphs and referenced block entities. While the system we built was incredibly flexible, there were a number of style variations we wanted to be able to apply to each component. We also wanted the system to be easily extensible by the client team going forward. To this end, we came up with a system of configuration entities that would allow us to provide references to classes and thematically name these styles. We built upon this by extending the EntityReferenceSelection plugin, allowing us to customize the list of styles available to a component by defining where those styles could be used.

The use of configuration entities allows the client team to develop and test new style variations in the standard development workflow and deploy them out to forward environments, giving an opportunity to test the new styles in QA prior to deployment to Production.

The Styles configuration entity

This configuration entity is at the heart of the system. It allows the client team to come in through the UI and create the new style. Each style is comprised of one or more classes that will later be applied to the container of the component the style is used on. The Style entity also contains configuration allowing the team to identify where this style can be used. This will be used later in the process to allow the team to limit the list of available styles to just those components that can actually make use of them.

The resulting configuration for the Style entity is then able to be exported to yml, versioned in the project repository and pushed forward through our development pipeline. Here’s an example of a Style entity after export to the configuration sync directory.

uuid: 7d112e4e-0c0f-486e-ae36-b608f55bf4e4
langcode: en
status: true
dependencies: {  }
id: featured_blue
label: 'Featured - Blue'
classes:
  - comp__featured-blue
uses:
  rte: rte
  cta: cta
  rail: '0'
  layout: '0'
  content: '0'
  oneboxlisting: '0'
  twoboxlisting: '0'
  table: '0'

Uses

For “Uses” we went with a simple configuration form. The result of this form is stored in the key/value store for Drupal 8. We can then access that data from our Styles entity and from our other plugins in order to retrieve and decode the values. Because the definition of each use is a simple key and label, we didn’t need anything more complex for storage.
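As a sketch of that approach (the form class name, form keys, and the ‘styles’ key/value collection name below are assumptions for illustration, not the project’s exact code):

<?php

namespace Drupal\styles\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Defines the available style "uses" as simple key/label pairs.
 */
class UsesForm extends FormBase {

  public function getFormId() {
    return 'styles_uses_form';
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $uses = \Drupal::keyValue('styles')->get('uses', []);
    $lines = [];
    foreach ($uses as $key => $label) {
      $lines[] = $key . '|' . $label;
    }
    $form['uses'] = [
      '#type' => 'textarea',
      '#title' => $this->t('Uses'),
      '#description' => $this->t('One use per line, as key|label.'),
      '#default_value' => implode("\n", $lines),
    ];
    $form['submit'] = [
      '#type' => 'submit',
      '#value' => $this->t('Save'),
    ];
    return $form;
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $uses = [];
    foreach (explode("\n", $form_state->getValue('uses')) as $line) {
      if (strpos($line, '|') !== FALSE) {
        list($key, $label) = explode('|', trim($line), 2);
        $uses[$key] = $label;
      }
    }
    // Each use is a simple key/label pair, so the key/value store suffices.
    \Drupal::keyValue('styles')->set('uses', $uses);
  }

}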

Assigning context through a custom Selection Plugin

By extending the core EntityReferenceSelection plugin, we’re able to combine our list of uses with the uses defined in each Style entity. To add styles to a component, the developer first adds an entity reference field pointing at the Styles config entity on the component in question. In the configuration for that entity reference field, we can choose our custom selection plugin, which exposes our list of defined uses. We can then select the appropriate uses for this component. The end result is that only the applicable styles will be presented to the content team when they create components of this type.

<?php

namespace Drupal\styles\Plugin\EntityReferenceSelection;

use Drupal\Core\Entity\Plugin\EntityReferenceSelection\DefaultSelection;
use Drupal\Core\Form\FormStateInterface;

/**
 * Limits referenceable Styles entities to those matching the field's uses.
 *
 * (The namespace and annotation values here are illustrative.)
 *
 * @EntityReferenceSelection(
 *   id = "styles",
 *   label = @Translation("Styles: filtered by use"),
 *   group = "styles",
 *   weight = 0
 * )
 */
class StylesSelection extends DefaultSelection {

  /**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    $form = parent::buildConfigurationForm($form, $form_state);
    // The available uses are the key/label pairs saved by the "Uses"
    // configuration form (the key/value collection name is an assumption).
    $options = \Drupal::keyValue('styles')->get('uses', []);
    $uses = $this->getConfiguration()['uses'];

    if ($options) {
      $form['uses'] = [
        '#type' => 'checkboxes',
        '#title' => $this->t('Uses'),
        '#options' => $options,
        '#default_value' => $uses,
      ];
    }
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function getReferenceableEntities($match = NULL, $match_operator = 'CONTAINS', $limit = 0) {
    $uses_config = $this->getConfiguration()['uses'];

    $uses = [];
    foreach ($uses_config as $key => $value) {
      if (!empty($value)) {
        $uses[] = $key;
      }
    }

    $styles = \Drupal::entityTypeManager()
      ->getStorage('styles')
      ->loadMultiple();

    $return = [];
    foreach ($styles as $style) {
      foreach ($style->get('uses') as $key => $value) {
        if (!empty($value)) {
          if (in_array($key, $uses)) {
            $return[$style->bundle()][$style->id()] = $style->label();
          }
        }
      }
    }
    return $return;
  }

}

In practice, this selection plugin presents a list of our defined uses in the configuration for the field. The person creating the component can then select the appropriate use definitions, limiting the scope of styles that will be made available to the component.

Components, with style.

The final piece of the puzzle is how we add the selected styles to the components during content creation. Once someone on the content team adds a component to the page and selects a style, we then need to apply the style to the component. This is handled by preprocess functions for each type of component we’re working with. In this case, Paragraphs and Blocks.

In both of the examples below we check whether the entity being rendered has our ‘field_styles’ field. If it does, we load its values and the default class attributes already applied to the entity. We then iterate over any styles applied to the component and add the classes those styles define to an array. Those classes are merged with the default classes for the paragraph or block entity. This allows the defined classes to be applied to the component’s container without any need to modify templates.

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_paragraph(&$variables) {
  /** @var Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = $variables['paragraph'];
  if ($paragraph->hasField('field_styles')) {
    $styles = $paragraph->get('field_styles')->getValue();
    $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
    foreach ($styles as $value) {
      /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
      $style = Styles::load($value['target_id']);
      if ($style instanceof Styles) {
        $style_classes = $style->get('classes');
        foreach ($style_classes as $class) {
          $classes[] = $class;
        }
      }
    }
    $variables['attributes']['class'] = $classes;
  }
}

/**
 * Implements hook_preprocess_HOOK().
 */
function bcbsmn_styles_preprocess_block(&$variables) {
  if ($variables['base_plugin_id'] == 'block_content') {
    $block = $variables['content']['#block_content'];
    if ($block->hasField('field_styles')) {
      $styles = $block->get('field_styles')->getValue();
      $classes = isset($variables['attributes']['class']) ? $variables['attributes']['class'] : [];
      foreach ($styles as $value) {
        /** @var \Drupal\bcbsmn_styles\Entity\Styles $style */
        $style = Styles::load($value['target_id']);
        if ($style instanceof Styles) {
          $style_classes = $style->get('classes');
          foreach ($style_classes as $class) {
            $classes[] = $class;
          }
        }
      }
      $variables['attributes']['class'] = $classes;
    }
  }
}

Try it out

We’ve contributed the initial version of this module to Drupal.org as the Style Entity project. We’ll continue to refine this as we use it on future projects and with the input of people like you. Download Style Entity and give it a spin, then let us know what you think in the issue queue.

Apr 04 2019
Apr 04
Julia Gutierrez, Massachusetts Digital Service

DrupalCon 2019 is heading to Seattle this year and there’s no shortage of exciting sessions and great networking events on this year’s schedule. We can’t wait to hear from some of the experts out in the Drupalverse next week, and we wanted to share with you a few of the sessions we’re most excited about.

Adam is looking forward to:

Government Summit on Monday, April 8th

“I’m looking forward to hearing what other digital offices are doing to improve constituents’ interactions with government so that we can bring some of their insights to the work our agencies are doing. I’m also excited to present on some of the civic tech projects we have been doing at MassGovDigital so that we can get feedback and new ideas from our peers.”

Bryan is looking forward to:

1. Introduction to Decoupled Drupal with Gatsby and React

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 6B | Level 6

“We’re using Gatsby and React today to power Search.mass.gov and the state’s budget website, and Drupal for Mass.gov. Can’t wait to learn about Decoupled Drupal with Gatsby. I wonder if this could be the right recipe to help us make the leap!”

2. Why Will JSON API go into Core?

Time: Wednesday, April 10th from 2:30 pm to 3:00 pm

Room: 612 | Level 6

“Making data available in machine-readable formats via web services is critical to open data and to publish-once / single-source-of-truth editorial workflows. I’m grateful to Wim Leers and Mateu Aguilo Bosch for their important thought leadership and contributions in this space, and eager to learn how Mass.gov can best maximize our use of JSON API moving forward.”

I (Julia) am looking forward to:

1. Personalizing the Teach for America applicant journey

Time: Wednesday, April 10th from 1:00 pm to 1:30 pm

Room: 607 | Level 6

“I am really interested in learning from Teach for America on how they implemented personalization and integrated across applications to bring applicants a consistent look, feel, and experience when applying for a Teach for America position. We have created Mayflower, Massachusetts government’s design system, and we want to learn what a single sign-on for different government services might look like and how we might use personalization to improve the experience constituents have when interacting with Massachusetts government digitally. ”

2. Devsigners and Unicorns

Time: Wednesday, April 10th from 4:00 pm to 4:30 pm

Room: 612 | Level 6

“I’m hoping to hear if Chris Strahl has any ‘best-practices’ and ways for project managers to leverage the unique multi-skill abilities that Devsigners and unicorns possess while continuing to encourage a balanced workload for their team. This balancing act could lead towards better development and design products for Massachusetts constituents and I’d love to make that happen with his advice!”

Melissa is looking forward to:

1. DevOps: Why, How, and What

Time: Wednesday, April 10th from 1:45 pm to 2:15 pm

Room: 602–604 | Level 6

“Rob Bayliss and Kelly Albrecht will use a survey they released, as well as some other important approaches, to elaborate on why DevOps is so crucial to technological strategy. I took the survey back in November of 2018, and I want to see the results. This presentation will help me identify whether any changes should be made in our process to better serve constituents.”

2. Advanced Automated Visual Testing

Time: Thursday, April 11th from 2:30 pm to 3:00 pm

Room: 608 | Level 6

“In this session Shweta Sharma will speak to what visual testing tools are currently out there and offer a comparison of those tools. I am excited to gain more insight into automated visual testing for faster releases so we can identify any gotchas and improve our releases for Mass.gov users.

P.S. Watch a presentation I gave at this year’s NerdSummit in Boston, and stay tuned for a blog post on some automation tools we used at MassGovDigital coming out soon!”

We hope to see old friends and make new ones at DrupalCon 2019, so be sure to say hi to Bryan, Adam, Melissa, Lisa, Moshe, or me when you see us. We will be at booth 321 (across from the VIP lounge) on Thursday, giving interviews and chatting about technology in Massachusetts. We hope you’ll stop by!


Mar 11 2019
Mar 11

You’ve decided to use Acquia DAM for managing your digital assets, and now you need to get those assets into Drupal where they can be put to use. Acquia has you covered for most use cases with the Media: Acquia DAM module. This module provides a suite of tools to let you browse the DAM for assets and associate them with Media entities. It goes a step further by ensuring that those assets and their metadata stay in sync when updates are made in the DAM.

This handles the key use case of referencing assets from an existing entity in Drupal, but what if your digital assets are meant to live stand-alone in the Drupal instance? That was the outlying use case we ran into on a recent project.

The Challenge

The customer site had the requirement of building several filterable views of PDF resources. It didn’t make sense to associate each PDF to a node or other entity, as all of the metadata required to build the experience could be contained within the Media entity itself. The challenge now was to get all of those assets out of the DAM and into media entities on the Drupal site without manually referencing them from some other Drupal entity.

The Solution

By leveraging the API underlying the Media: Acquia DAM module we were able to create our own module to manage mass importing entire folders of assets from Acquia DAM into a specified Media bundle in Drupal. This takes advantage of the same configuration and access credentials used by Media: Acquia DAM and also leverages that module for maintaining updates to metadata for the assets post-import.

The Acquia DAM Asset Importer module allows the site administrator to specify one or more folders from Acquia DAM to import assets from. Once configured, the module runs as a scheduled task through Drupal’s cron. On each cron run, the module will first check to see if there are any remaining import tasks to complete. If not, it will use the Acquia DAM API to retrieve a list of asset IDs for the specified folders. It compares that to the list of already imported assets. If new assets exist in the folders in Acquia DAM, they’re then added to the module’s Queue implementation to be imported in the background.
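As a rough sketch of that flow (the queue name, config key, and helper functions below are hypothetical stand-ins, not the module’s actual API):

/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  $queue = \Drupal::queue('mymodule_asset_import');
  // If imports from the last run are still pending, let them finish first.
  if ($queue->numberOfItems() > 0) {
    return;
  }
  $folders = \Drupal::config('mymodule.settings')->get('folders');
  foreach ($folders as $folder_id) {
    // mymodule_fetch_asset_ids() and mymodule_already_imported() are
    // hypothetical helpers wrapping the DAM API and import bookkeeping.
    foreach (mymodule_fetch_asset_ids($folder_id) as $asset_id) {
      if (!mymodule_already_imported($asset_id)) {
        $queue->createItem(['asset_id' => $asset_id]);
      }
    }
  }
}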

The QueueWorker implementation in the Acquia DAM Asset Importer then processes its queue on subsequent cron runs, generating a new Media entity of the specified bundle, adding the asset_id from Acquia DAM, and executing save() on the entity. At this point the code in Media: Acquia DAM takes over, pulling in metadata about the asset and syncing it and the associated file to Drupal. Once the asset has been imported into Drupal as a Media entity, the Media: Acquia DAM module keeps the metadata for that Media entity in sync with Acquia DAM, using its own QueueWorker and cron implementations to periodically pull data from the DAM and update the Media entity.
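The worker half of that hand-off might be sketched like this; the plugin ID, media bundle, and asset ID field name are assumptions for illustration (the real module reads them from its configuration):

<?php

namespace Drupal\mymodule\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\media\Entity\Media;

/**
 * Imports one Acquia DAM asset per queue item.
 *
 * @QueueWorker(
 *   id = "mymodule_asset_import",
 *   title = @Translation("Acquia DAM asset importer"),
 *   cron = {"time" = 30}
 * )
 */
class AssetImportWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // Saving the entity hands control to Media: Acquia DAM, which pulls
    // the asset's metadata and file into Drupal.
    $media = Media::create([
      'bundle' => 'acquia_dam_asset',
      'field_acquiadam_asset_id' => $data['asset_id'],
    ]);
    $media->save();
  }

}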

Try it out

Are you housing assets in Acquia DAM and need to import them into your Drupal site? We’ve contributed the Acquia DAM Asset Importer module on Drupal.org. Download it here and try it out.

Mar 06 2019
Mar 06

Using Paragraphs to define components in Drupal 8 is a common approach to providing a flexible page building experience for your content creators. With the addition of Acquia Lift and Content Hub, you can now not only build intricate pages – you can personalize the content experience for site visitors.

Personalization with Acquia Lift and Content Hub

Acquia Lift is a personalization tool optimized for use with Drupal. The combination of Acquia Lift and Content Hub allows for entities created in Drupal to be published out to Content Hub and be made available through Lift to create a personalized experience for site visitors. In many instances, the personalized content used in Lift is created by adding new Blocks containing the personalized content, but not all Drupal sites utilize Blocks for content creation and page layout.

Personalizing paragraph components

To personalize a Paragraph component on a page, we’ll need to create a new derivative of that component with the personalized content for export to Content Hub. That means creating duplicate content somewhere within the Drupal site. This could be on a different content type specifically meant for personalization.

To make this process easier on our content creators, we developed a different approach. We added an additional Paragraphs reference to the content types we wanted to enable personalization on. This “Personalized Components” field can be used to add derivatives of components for each segment in Acquia Lift. The field is hidden from display on the resulting page, but the personalized Paragraph entities are published to Content Hub and available for use in Lift. This allows the content team to create and edit these derivatives in the same context as the content they’re personalizing. In addition, because Paragraphs do not have a title of their own, we can derive a title for them from a combination of the title of their parent page and the type of component being added. This makes it easy for the personalization team to find the relevant content in Acquia Lift’s Experience Builder.

In addition to all of this, we also added a “Personalization” tab. If a page has personalized components, this tab will appear for the content team allowing them to review the personalized components for that page.

Keeping the personalized experience in the context of the original page makes it easier for the entire team to build and maintain personalized content.

The technical bits

There were a few hurdles in getting this all working. As mentioned above, Paragraph entities do not have a title property of their own. This means that when their data is exported to Content Hub, they all appear as “Untitled”. Clearly this doesn’t make for a very good user experience. To get around this limitation we leveraged one of the API hooks in the Acquia Content Hub module.

<?php

use Drupal\node\Entity\Node;

/**
 * Implements hook_acquia_contenthub_cdf_from_drupal_alter().
 */
function mymodule_acquia_contenthub_cdf_from_drupal_alter($cdf) {
  // Load the Drupal entity behind this CDF object so we can derive a title.
  // (The implementing function name shown here is illustrative.)
  /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
  $paragraph = \Drupal::service('entity.repository')
    ->loadEntityByUuid($cdf->getType(), $cdf->getUuid());

  /** @var \Drupal\node\Entity\Node $node */
  $node = _get_parent_node($paragraph);
  $node_title = $node->label();

  $paragraph_bundle = $paragraph->bundle();
  $paragraph_id = $paragraph->id();

  $personalization_title = $node_title . ' - ' . $paragraph_bundle . ':' . $paragraph_id;

  if ($cdf->getAttribute('title') == FALSE) {
    $cdf->setAttributeValue('title', $personalization_title, 'en');
  }
}

/**
 * Helper function for components to identify the current node/entity.
 */
function _get_parent_node($entity) {
  // Recursively look for a non-paragraph parent.
  $parent = $entity->getParentEntity();
  if ($parent instanceof Node) {
    return $parent;
  }
  else {
    return _get_parent_node($parent);
  }
}

This allows us to generate a title for use in Content Hub based on the title of the page we’re personalizing the component on and the type of Paragraph being created.

In addition to this, we also added a local task and NodeViewController to allow for viewing the personalized components. The local task is created by adding a mymodule.links.task.yml and mymodule.routing.yml to your custom module.

*.links.task.yml:

personalization.content:
  route_name: personalization.content
  title: 'Personalization'
  base_route: entity.node.canonical
  weight: 100

*.routing.yml:

personalization.content:
  path: '/node/{node}/personalization'
  defaults:
    _controller: '\Drupal\mymodule\Controller\PersonalizationController::view'
    _title: 'Personalized components'
  requirements:
    _custom_access: '\Drupal\mymodule\Controller\PersonalizationController::access'
    node: \d+

The route is attached to our custom NodeViewController. This controller loads the latest revision of the current Node entity for the route and builds rendered output of a view mode which shows any personalized components.

<?php

namespace Drupal\mymodule\Controller;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\node\Controller\NodeViewController;

/**
 * Displays the personalized components attached to a node.
 */
class PersonalizationController extends NodeViewController {

  /**
   * {@inheritdoc}
   */
  public function view(EntityInterface $node, $view_mode = 'personalization', $langcode = NULL) {
    // Work from the latest revision so in-progress personalization shows.
    $revision_ids = $this->entityManager->getStorage('node')
      ->revisionIds($node);
    $last_revision_id = end($revision_ids);
    if ($node->getLoadedRevisionId() <> $last_revision_id) {
      $node = $this->entityManager->getStorage('node')
        ->loadRevision($last_revision_id);
    }
    $build = parent::view($node, $view_mode, $langcode);
    return $build;
  }

  /**
   * Custom access controller for personalized content.
   */
  public function access(AccountInterface $account, EntityInterface $node) {
    /** @var \Drupal\node\Entity\Node $node */
    $personalized = FALSE;
    if ($account->hasPermission('access content overview')) {
      if ($node->hasField('field_personalized_components')) {
        $revision_ids = $this->entityManager->getStorage('node')
          ->revisionIds($node);
        $last_revision_id = end($revision_ids);
        if ($node->getLoadedRevisionId() <> $last_revision_id) {
          $node = $this->entityManager->getStorage('node')
            ->loadRevision($last_revision_id);
        }
        if (!empty($node->get('field_personalized_components')->getValue())) {
          $personalized = TRUE;
        }
      }
    }
    return AccessResult::allowedIf($personalized);
  }
}

The controller provides the rendered output of our “Personalization” view mode, and its access check ensures that the page actually has personalized components. If no components have been added, the “Personalization” tab will not be shown on the page.

Mar 04 2019
Mar 04


Bitbucket Pipelines is a CI/CD service built into Bitbucket. It offers an easy solution for building and deploying to Acquia Cloud for projects whose repositories live in Bitbucket and whose teams opt out of using Acquia’s own Pipelines service. Configuration of Bitbucket Pipelines begins with creating a bitbucket-pipelines.yml file and adding it to the root of your repository. This configuration file details how Bitbucket Pipelines will construct the CI/CD environment and what tasks it will perform given a state change in your repository.

Let’s walk through an example of this configuration file built for one of our clients.

bitbucket-pipelines.yml

image: geerlingguy/drupal-vm:4.8.1
clone:
  depth: full
pipelines:
  branches:
    develop:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
           - scripts/ci/deploy.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
    test/*:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
  tags:
    release-*:
      - step:
          name: "Release deployment"
          script:
            - scripts/ci/build.sh
            - scripts/ci/test.sh
            - scripts/ci/deploy.sh
          services:
            - docker
            - mysql
          caches:
            - docker
            - node
            - composer
definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: 'drupal'
        MYSQL_USER: 'drupal'
        MYSQL_ROOT_PASSWORD: 'root'
        MYSQL_PASSWORD: 'drupal'

The top section of bitbucket-pipelines.yml outlines the basic configuration for the CI/CD environment. Bitbucket Pipelines uses Docker at its foundation, so each pipeline will be built up from a Docker image and then your defined scripts will be executed in order, in that container.

image: geerlingguy/drupal-vm:4.8.1
clone:
  depth: full

This documents the image we’ll use to build the container. Here we’re using the Docker version of Drupal VM; we use the original Vagrant version of Drupal VM in Acquia BLT for local development. Setting the clone depth to full ensures we pull the entire history of the repository, which we found to be necessary during the initial implementation.

The “pipelines” section of the configuration defines all of the pipelines configured to run for your repository. Pipelines can be set to run on updates to branches, tags, or pull requests. For our purposes we’ve created three pipeline definitions.

pipelines:
  branches:
    develop:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
           - scripts/ci/deploy.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer
    test/*:
      - step:
         script:
           - scripts/ci/build.sh
           - scripts/ci/test.sh
         services:
           - docker
           - mysql
         caches:
           - docker
           - node
           - composer

Under branches we have two pipelines defined. The first, “develop”, defines the pipeline configuration for updates to the develop branch of the repository. This pipeline is executed whenever a pull-request is merged into the develop branch. At the end of execution, the deploy.sh script builds an artifact and deploys that to the Acquia Cloud repository. That artifact is automatically deployed and integrated into the Dev instance on Acquia Cloud.

The second definition, “test/*”, provides a pipeline definition that can be used for testing updates to the repository. This pipeline is run whenever a branch named ‘test/*’ is pushed to the repository. This allows you to create local feature branches prefixed with “test/” and push them forward to verify how they will build in the CI environment. The ‘test/*’ definition will only execute the build.sh and test.sh scripts and will not deploy code to Acquia Cloud. This just gives us a handy way of doing additional testing for larger updates to ensure that they will build cleanly.
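For example, to exercise the CI build for an in-progress change (the branch name is arbitrary, as long as it matches the test/* pattern):

git checkout -b test/my-feature
git push -u origin test/my-feature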

The next section of the pipelines definition is set to execute when commits in the repository are tagged.

tags:
  release-*:
    - step:
        name: "Release deployment"
        script:
          - scripts/ci/build.sh
          - scripts/ci/test.sh
          - scripts/ci/deploy.sh
        services:
          - docker
          - mysql
        caches:
          - docker
          - node
          - composer

This pipeline is configured to be executed whenever a commit is tagged with the name pattern of “release-*”. Tagging a commit for release will run the CI/CD process and push the tag out to the Acquia Cloud repository. That tag can then be selected for deployment to the Stage or Production environments.

The final section of the pipelines configuration defines services built and added to the docker environment during execution.

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: 'drupal'
        MYSQL_USER: 'drupal'
        MYSQL_ROOT_PASSWORD: 'root'
        MYSQL_PASSWORD: 'drupal'

This section allows us to add a MySQL instance to Docker, allowing our test scripts to do a complete build and installation of the Drupal environment as defined by the repository.


Scripts

The bitbucket-pipelines.yml file defines the pipelines that can be run, and in each definition it outlines scripts to run during the pipeline’s execution. In our implementation we’ve split these scripts up into three parts:

  1. build.sh – Sets up the environment and prepares us for the rest of the pipeline execution.
  2. test.sh – Runs processes to test the codebase.
  3. deploy.sh – Contains the code that builds the deployment artifact and pushes it to Acquia Cloud.

Let’s review each of these scripts in more detail.

build.sh

#!/bin/bash
apt-get update && apt-get install -o Dpkg::Options::="--force-confold" -y php7.1-bz2 curl && apt-get autoremove
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
apt-get install -y nodejs
apt-get install -y npm
cd hive
npm install
npm install -g gulp
cd ..
composer install
mysql -u root -proot -h 127.0.0.1 -e "CREATE DATABASE IF NOT EXISTS drupal"
export PIPELINES_ENV=PIPELINES_ENV

This script takes our base container, built from our prescribed image, and starts to expand upon it. Here we make sure the container is up-to-date, install dependencies such as nodejs and npm, run npm in our frontend library to build our node_modules dependencies, and instantiate an empty database that will be used later when we perform a test install from our codebase.

test.sh

#!/bin/bash
vendor/acquia/blt/bin/blt validate:phpcs --no-interaction --ansi --define environment=ci
vendor/acquia/blt/bin/blt setup --yes  --define environment=ci --no-interaction --ansi -vvv

The test.sh file contains two simple commands. The first runs a PHP code sniffer to validate that our custom code follows prescribed standards. This command also runs as a pre-commit hook during any code commit in our local environments, but we execute it again here as an additional safeguard. If code makes it into the repository that doesn’t follow the prescribed standards, a failure will be generated and the pipeline will halt execution.

The second command takes our codebase and does a complete Drupal installation from it, instantiating a copy of Drupal 8 and importing the configuration contained in our repository. If invalid or conflicting configuration makes it into the repository, it will be picked up here and the pipeline will exit with a failure. This script is also where additional testing could be added, such as running Behat or other test suites to verify our evolving codebase doesn’t produce regressions.

deploy.sh

#!/bin/bash
set -x
set -e

if [ -n "${BITBUCKET_REPO_SLUG}" ] ; then

    git config user.email "[email protected]"
    git config user.name "Bitbucket Pipelines"

    git remote add deploy $DEPLOY_URL;

    # If the module is -dev, a .git file comes down.
    find docroot -name .git -print0 | xargs -0 rm -rf
    find vendor -name .git -print0 | xargs -0 rm -rf
    find vendor -name .gitignore -print0 | xargs -0 rm -rf

    SHA=$(git rev-parse HEAD)
    GIT_MESSAGE="Deploying ${SHA}: $(git log -1 --pretty=%B)"

    git add --force --all

    # Exclusions:
    git status
    git commit -qm "${GIT_MESSAGE}" --no-verify

    if [ $BITBUCKET_TAG ];
      then
        git tag --force -m "Deploying tag: ${BITBUCKET_TAG}" ${BITBUCKET_TAG}
        git push deploy refs/tags/${BITBUCKET_TAG}
    fi;

    if [ $BITBUCKET_BRANCH ];
      then
        git push deploy -v --force refs/heads/$BITBUCKET_BRANCH;
    fi;

    git reset --mixed $SHA;
fi;

The deploy.sh script takes the product of our repository and creates an artifact in the form of a separate, fully merged Git repository. That temporary repository adds the Acquia Cloud repository as a remote named “deploy” and pushes the artifact to the appropriate branch or tag in Acquia Cloud. The use of environment variables allows us to use this script both to deploy the develop branch to the Acquia Cloud repository and to deploy any tags created on the master branch, so that those tags appear in our Acquia Cloud console for use in the final deployment to our live environments. For those using BLT for local development, this script could be re-worked to use BLT’s internal artifact generation and deployment commands.

Configuring the cloud environments

The final piece of the puzzle is ensuring that everything is in-place for the pipelines to process successfully and deploy code. This includes ensuring that environment variables used by the deploy.sh script exist in Bitbucket and that a user with appropriate permissions and SSH keys exists in your Acquia Cloud environment, allowing the pipelines process to deploy the code artifact to Acquia Cloud.

Bitbucket configuration

DEPLOY_URL environment variable

Configure the DEPLOY_URL environment variable. This is the URL to your Acquia Cloud repository.

  1. Log in to your Bitbucket repository.
  2. In the left-hand menu, locate and click on “Settings.”
  3. In your repository settings, locate the “Pipelines” section and click on “Repository variables.”
  4. Add a Repository variable:
    1. Name: DEPLOY_URL
    2. Value: The URL to your Acquia Cloud repository. You’ll find the correct value in your Acquia Cloud Dashboard.

SSH keys

Deploying to Acquia Cloud will also require giving your Bitbucket Pipelines processes access to your Acquia Cloud repository. This is done in the form of an SSH key. To configure an SSH key for the Pipelines process:

  1. In the “Pipelines” section of your repository settings we navigated to in steps 1-3 above, locate the “SSH keys” option and click through.
  2. On the SSH keys page click the “Generate keys” button.
  3. The generated “public key” will be used to provide access to Bitbucket in the next section.

Acquia Cloud configuration

For deployment to work, your Bitbucket Pipelines process will need to be able to push to your Acquia Cloud Git repository. This means creating a user account in Acquia Cloud and adding the key generated in Bitbucket above. You can create a new user or use an existing user. You can find more information on adding SSH keys to your Acquia Cloud accounts here: Adding a public key to an Acquia profile.

To finish the configuration, log back into your Bitbucket repository and retrieve the known hosts fingerprint.

Feb 06 2019
Feb 06

Mass.gov dev team releases open source project

Moshe Weitzman, Massachusetts Digital Service

The Mass.gov development team is proud to release a new open source project, Drupal Test Traits (DTT). DTT enables you to run PHPUnit tests against your Drupal web site, without wiping your database after each test class. That is, you test with your usual content-filled database, not an empty one. We hope lots of Drupal sites will use DTT and contribute back their improvements. Thanks to PreviousNext and Phase2 for being early adopters.

Mass.gov is a large, content-centric site. Most of our tests click around and assert that content is laid out properly, the corresponding icons are showing, etc. In order to best verify this, we need the Mass.gov database; testing on an empty site won’t suffice. The traditional tool for testing a site using an existing database is Behat. So we used Behat for over a year and found it getting more and more awkward. Behat is great for facilitating conversations between business managers and developers. Those are useful conversations, but many organizations are like ours — we don’t write product specs in Gherkin. In fact, we don’t do anything in Gherkin besides Behat.

Meanwhile, the test framework inside Drupal core improved a lot in the last couple of years (mea culpa). Before Drupal Test Traits, this framework was impossible to use without wiping the site’s database after each test. DTT lets you keep your database and still test using the features of Drupal’s BrowserTestBase and friends. See DrupalTrait::setUp() for details (the bootstrap is inspired by Drush, a different open source project that I maintain).

  • Our test cases extend ExistingSiteBase, a convenience class from DTT that imports all the test traits (see the sketch after this list). We will eventually create our own base class and import the traits there.
  • Notice calls to $this->createNode(). This convenience method wraps Drupal’s method of the same name. DTT deletes each created node during tearDown().
  • Note how we call Vocabulary::load(). This is an important point — the full Drupal and Mink APIs are available during a test. The abstraction of Behat is happily removed. Writing test classes more resembles writing module code.
  • See the DTT repo for details on how to install and run tests
  • Typically, one does not run tests against a live web site. Tests can fail and leave sites in a “dirty” state so it’s helpful to occasionally refresh to a pristine database.
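Here’s a minimal sketch of the kind of test class those notes describe; the module namespace, content type, vocabulary, and field names are assumptions for illustration:

<?php

namespace Drupal\Tests\mymodule\ExistingSite;

use Drupal\taxonomy\Entity\Vocabulary;
use weitzman\DrupalTestTraits\ExistingSiteBase;

/**
 * Verifies content is laid out properly on a content-filled database.
 */
class ExampleTest extends ExistingSiteBase {

  public function testPageLayout() {
    // The full Drupal API is available, e.g. Vocabulary::load().
    $vocab = Vocabulary::load('tags');
    $term = $this->createTerm($vocab);

    // createNode() wraps Drupal's helper; DTT deletes this node in tearDown().
    $node = $this->createNode([
      'type' => 'page',
      'title' => 'DTT example page',
      'field_tags' => $term,
    ]);

    // Mink assertions run against the real, content-filled site.
    $this->drupalGet($node->toUrl());
    $this->assertSession()->statusCodeEquals(200);
    $this->assertSession()->pageTextContains('DTT example page');
  }

}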

If you have questions or comments about DTT, please comment below or submit issues/PRs in our repository.

More from Moshe: Our modern development environment at Mass.gov


Oct 19 2018
Oct 19
Deirdre Habershaw, Massachusetts Digital Service

Today, more than 80% of people’s interactions with government take place online. Whether it’s starting a business or filing for unemployment, too many of these experiences are slow, confusing, or frustrating. That’s why the Commonwealth created Massachusetts Digital Service (Mass Digital) in the Executive Office of Technology and Security Services. Mass Digital is at the forefront of the state’s digital transformation. Its mission is to leverage the best technology and information available to make people’s interactions with state government fast, easy, and wicked awesome. There’s a lot of work to do, but we’re making quick progress.

Mass Digital worked with the new Department of Family and Medical Leave (DFML) to launch the Commonwealth’s first digitally native service: Paid Family and Medical Leave and PaidLeave.mass.gov. We’ve been on the front lines of the pandemic response, supporting the Department of Unemployment Assistance to create capabilities for multilingual unemployment claims (Unemployment.mass.gov) and doing content design to keep up with rapidly evolving pandemic unemployment benefits, and supporting the Department of Public Health and Command Center to launch the state’s vaccine preregistration system (VaccineSignUp.mass.gov).

If you want to work in a fast-paced agile environment with a good work-life balance, solving hard problems, working with cutting-edge technology, and making a difference in people’s lives, you should join Massachusetts Digital Service.

We are currently recruiting designers, researchers, and product managers.

If you are interested in working with us, please submit your resume here.

Check out more about hiring at the Executive Office of Technology and Security Services and submit your resume in order to be informed on roles as they become available.

Oct 10 2018
Oct 10

Authors are eager to learn, and a content-focused community is forming. But there’s still work to do.

Julia Gutierrez, Massachusetts Digital Service
Video showing highlights of speakers, presenters, and attendees interacting at ConCon 2018.

When you spend most of your time focused on how to serve constituents on digital channels, it can be good to simply get some face time with peers. It’s an interesting paradox of the work we do alongside our partners at organizations across the state. Getting in a room and discussing content strategy is always productive.

That was one of the main reasons behind organizing the first ever Massachusetts Content Conference (ConCon). More than 100 attendees from 35 organizations came together for a day of learning and networking at District Hall in Boston. There were 15 sessions on everything from how to use Mayflower — the Commonwealth’s design system — to what it takes to create an awesome service.

Graphic showing more than 100 attendees from 50 organizations attended 15 sessions from 14 presenters at ConCon 2018.

ConCon is and will always be about our authors, and we’re encouraged by the feedback we’ve received from them so far. Of the attendees who responded to a survey, 93% said they learned about new tools or techniques to help them create better content. More so, 96% said they would return to the next ConCon. The average grade attendees gave to the first ever ConCon on a scale of 1 to 10 — with 1 being the worst and 10 the best — was 8.3.

Our authors were engaged and ready to share their experiences, which made for an educational environment for their peers as well as our own team at Digital Services. In fact, it was an eye-opening experience, and we took a lot away from the event. Here are some of our team’s reflections on what they learned about our authors and our content needs moving forward.

“The way we show feedback and scores per page is great but it doesn’t help authors prioritize their efforts to get the biggest gain for their constituents. We’re working hard to increase visibility of this data in Drupal.”

— Joe Galluccio

Katie Rahhal, Content Strategist
“I learned we’re moving in the right direction with our analysis and Mass.gov feedback tools. In the breakout sessions, I heard over and over that our content authors really like the ones we have and they want more. More ways to review their feedback, more tools to improve their content quality, and they’re open to learning new ways to improve their content.”

Christine Bath, Designer
“It was so interesting and helpful to see how our authors use and respond to user feedback on Mass.gov. It gives us a lot of ideas for how we can make it easier to get user feedback to our authors in more actionable ways. We want to make it easy to share constituent feedback within agencies to power changes on Mass.gov.”

Embedded tweet from @MassGovDigital highlighting a lesson on good design practices from ConCon 2018.

Joe Galluccio, Product Manager
“I learned how important it is for our authors to get performance data integrated into the Drupal authoring experience. The way we show feedback and scores per page is great but it doesn’t help authors prioritize their efforts to get the biggest gain for their constituents. We’re working hard to increase visibility of this data in Drupal.”

Bryan Hirsch, Deputy Chief Digital Officer
“Having Dana Chisnell, co-founder of the Center for Civic Design, present her work on mapping and improving the journey of American voters was the perfect lesson at the perfect time. The page-level analytics dashboards are a good foundation we want to build on. In the next year, we’re going to research, test, and build Mass.gov journey analytics dashboards. We’re also spending this year working with partner organizations on mapping end-to-end user journeys for different services. Dana’s experience on how to map a journey, identify challenges, and then improve the process was relevant to everyone in the room. It was eye-opening, enlightening, and exciting. There are a lot of opportunities to improve the lives of our constituents.”

Want to know how we created our page-level data dashboards? Read Custom dashboards: Surfacing data where Mass.gov authors need it.

Embedded tweet from @epubpupil highlighting her positive thoughts on Dana Chisnell’s keynote presentation on mapping and improving the journey of American voters.

Sienna Svob, Developer and Data Analyst
“We need to work harder to build a Mayflower community that will support the diversity of print, web, and applications across the Commonwealth. Agencies are willing and excited to use Mayflower and we need to harness this and involve them more to make it a better product.”

Minghua Sun, Mayflower Product Owner
“I’m super excited to see that so many of the content authors came to the Mayflower breakout session. They were not only interested in using the Mayflower Design System to create a single face of government but also raised constructive questions and were willing to collaborate on making it better! After the conference we followed up with more information and invited them to the Mayflower public Slack channel. It’s great to see there’s a Mayflower community forming among stakeholders in different roles across state government. ”

Sam Mathius, Digital Communications Strategist
“It was great to see how many of our authors rely on digital newsletters to connect with constituents, which came up during a breakout session on the topic. Most of them feel like they need some help integrating them into their overall content strategy, and they were particularly excited about using tools and software to help them collect better data. In fact, attendees from some organizations mentioned how they’ve used newsletter data to uncover seasonal trends that help them inform the rest of their content strategy. I think that use case got the analytics gears turning for a lot of folks, which is exciting.”

Shannon Desmond, Content Strategist
“I learned that the Mass.gov authors are energetic about the new content types that have been implemented over the past 8 months and are even more eager to learn about the new enhancements to the content management system (CMS) that continue to roll out. Furthermore, as a lifelong Massachusetts resident and a dedicated member of the Mass.gov team, it was enlightening to see how passionate the authors are about translating government language and regulations in a way constituents can easily and quickly understand.”

Fiona Molloy, Content Strategist
“Talking to people who came to ConCon and sitting in on various sessions, it really struck me how eager our content authors are to learn — whether from us here at Digital Services or from each other. I’d like to see us create more opportunities for authors to get together in informal sessions. They’re such a diverse group, but they share a desire to get it right and that’s really encouraging as we work together to build a better Mass.gov.”

Embedded tweet from @MassGovDigital highlighting a session from ConCon 2018 in which content authors offered tips for using authoring tools on Mass.gov.

Adam Cogbill, Content Strategist
“I was reminded that one of the biggest challenges that government content authors face is communicating lots of complex information. We need to make sure we understand our audience’s relationships to our content, both through data about their online behavior and through user testing.”

Greg Desrosiers, Content Strategist
“I learned we need to do a better job of offering help and support. There were a number of authors in attendance who didn't know about readily available resources that we had assumed people just weren't interested in. We need to re-evaluate how we're marketing these services and make sure everyone knows what's available.”

Embedded tweet from @MassGovDigital highlighting the start of ConCon 2018.

Thinking about hosting your own content conference? Reach out to us! We’d love to share lessons and collaborate with others in the civic tech community.

Oct 04 2018

Variety of content and the need for empathy drive our effort to simplify language across Mass.gov

Massachusetts Digital Service · 3 min read · Oct 4, 2018

Nearly 7 million people live in Massachusetts, and millions more visit the state each year. These people come from different backgrounds and interact with the Commonwealth for various reasons.

Graphic showing more than 3 million visitors go to Mass.gov each month.

We need to write for everyone while empathizing with each individual. That’s why we write at a 6th grade reading level. Let’s dig into the reasons why.

The Commonwealth has a high literacy rate and a world-renowned education network. From elementary school to college and beyond, you can get a great education here.

We're proud of our education environment, but it doesn't affect our readability standards. Navigating the Women, Infants, and Children (WIC) Nutrition Program can be challenging for anyone.

People searching for nutrition services are doing so out of necessity. They’re probably frustrated, worried, and scared. That affects how people read and retain information.

Learn about our content strategy. Read the 2017 content team review.

This is the case for many other scenarios. Government services can be complicated to navigate. Our job is to simplify language. We get rid of the white noise and focus on essential details.

You don’t browse Mass.gov in your free time. It’s a resource you use when you have to. Think of it as a speedboat, not a cruise ship. They’ll both get you across the water, just at different speeds.

Graphic showing desktop visitors to Mass.gov look at more pages and have longer sessions than mobile and tablet visitors.

Mass.gov visitors on mobile devices spend less time on the site and read fewer pages. The 44% share of mobile and tablet traffic will only increase over time. These visitors need information boiled down to essential details. Simplifying language is key here.

A 6th-grade reading level doesn't work all the time. We noticed this when we conducted power-user testing with lawyers, accountants, and other groups who frequently use Mass.gov.

These groups want jargon and industry language. This taught us that readability is relative.

We use the Flesch-Kincaid model to determine reading level in our dashboards. It accounts for factors like sentence length and the number of syllables in words.
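
As a rough illustration of how the model works (this is a sketch, not the code behind our dashboards, and the syllable counter is a crude heuristic):

import re

def count_syllables(word):
    # Crude heuristic: count runs of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Grade = 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(flesch_kincaid_grade(
    "We need to write for everyone. Short words help readers act fast."), 1))

Shorter sentences and shorter words push the grade level down, which is exactly the behavior the formula rewards.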

This is a good foundation to ensure we consistently hit the mark. However, time is the most important tool we have. The more content we write, the better we’ll get.

Writing is a skill refined over time, and adjusting writing styles isn’t simple. Even so, we’re making progress. In fact, this post is written at a 6th grade reading level.

Sep 28 2018

Pairing the Composer template for Drupal projects with Lando gives you a fully working Drupal environment with barely any setup.

Lando is an open-source, cross-platform local development environment. It uses Docker to build containers for well-known frameworks and services, defined in simple recipes. If you haven't started using Lando for your local development, we highly recommend it. It is easier, faster, and relatively pain-free compared to MAMP, WAMP, VirtualBox VMs, Vagrant, or building your own Docker infrastructure.

Prerequisites

You'll need to have Composer and Lando installed.

Setting up Composer Template Drupal Project

If you want details about what you get when you install the drupal-project, you can view the repo. Otherwise, if you'd rather simply set up a Drupal template site, run the following command.

composer create-project drupal-composer/drupal-project:8.x-dev [your-project] --stability dev --no-interaction

Once that is done running, cd into the newly created directory. You'll find that you now have more than a basic Drupal installation.

Getting the site set up on Lando

Next, run lando init, which prompts you with 3 simple questions:

? What recipe do you want to use? > drupal8
? Where is your webroot relative to the init destination? > web
? What do you want to call this app? > [your-project]

Once that is done provisioning, run lando start, which downloads and spins up the necessary containers and provides you with a set of URLs you can use to visit your site:

https://localhost:32807
http://localhost:32808
http://[your-project].lndo.site:8000
https://[your-project].lndo.site

Set up Drupal

Visit any of the URLs to initialize the Drupal installation flow. Run lando info to get the database details:

Database: drupal8
Username: drupal8
Password: drupal8
Host: database

Working with your new site

One of the useful benefits of using Lando is that your toolchain does not need to be installed on your local machine; it can be installed in the Docker container that Lando uses. This means you can use commands provided by Lando without having to install other packages. The commands that come with Lando include lando drush, lando drupal, and lando composer. Execute these commands in your command prompt as usual, though they'll execute from within the container.
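
For example, you could pull in and enable a contributed module without having PHP or Composer on your host (the module here is chosen just for illustration):

lando composer require drupal/admin_toolbar
lando drush en admin_toolbar -y
lando drush cr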

Once you commit your .lando.yml file, others can use the same Lando configuration on their machines, making it easy to set up local environments that are identical for everyone.
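
For reference, the .lando.yml generated by the answers above would look roughly like this (with your-project replaced by the name you chose):

name: your-project
recipe: drupal8
config:
  webroot: web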

Sep 02 2018
Drupal Europe · 3 min read · Sep 2, 2018

In only 8 days, Drupal Europe will be happening from September 10 to 14 in Darmstadt, Germany. Are you coming?

Throughout the last 12 months, a lot of volunteers worked really hard to make this event happen. Starting with our decision and commitment at DrupalCon Vienna to organize Drupal Europe, followed by an extensive search for locations, numerous volunteers have been busy for a year: reaching out to sponsors, structuring the program, organizing the Open Web Lounge, planning the venue spaces, answering all your emails, writing visa invitation letters, launching trainings, reviewing sessions, and putting together the big schedule.

How it started in the community keynote (photo by Amazee Labs)

Drupal Europe hosts 162 hours of sessions, 9 in-depth workshops, 3 training courses, and contribution opportunities every day, but the biggest value of all is meeting everyone. This conference brings together CEOs, project managers, marketing professionals, and developers alike. It is both a technology conference and a family reunion for the Drupal community, and that is why we organized it.

Drupal Europe is a unique opportunity to meet your (international) colleagues and talk about what drives, connects, and challenges our community. There is only one open source community where “you come for the code and stay for the community” is so deeply rooted. Drupal Europe is also a great place to connect with other open source technologies: WordPress, Rocket.Chat, Typo3, Mautic, you name it! You may be surprised that there is more that connects us than separates us.

Have a look at the diverse and interesting program.

Besides the sessions and BoFs we also plan our other traditional activities.

On Thursday evening we organise the exciting Trivia Night where you can win eternal fame with your team.

Contribution opportunities are open all week. On Monday and especially Friday, mentors will be around to help you get started contributing. Contribution is for everyone, all skill and energy levels are invited.

New this year at Drupal Europe is the first international Splash Awards! All golden and silver winners from local Splash Awards will compete for the European awards.

All together we think there are plenty of reasons why you should come to Darmstadt and participate at Drupal Europe.

To make our offer even better, if you buy a ticket before the end of the late ticket deadline (today or tomorrow), you enter a raffle for a free hotel room for Sept 10–13 at Intercity Hotel Darmstadt! Use FLS-LPNLGS5DS84E4 to also get 100 EUR off the ticket price.

The hotel room raffle closes and online ticket sales stop at the end of Monday. Afterwards, you will only be able to buy a ticket onsite at Drupal Europe.

Grab this last chance to join us at Drupal Europe, book your travels and have a safe trip getting here.

See you in Darmstadt!

Image: Darmstadtium venue in Darmstadt, Germany
Aug 27 2018

This post is part 5 in the series “Hashing out a docker workflow”. I have resurrected this series from over a year ago; if you want to check out the previous posts, you can find the first post here. Although the beginning of this blog series pre-dates Docker Machine, Docker for Mac, and Docker for Windows, the Docker concepts still apply; we just aren't pairing Docker with Vagrant any more. Instead, check out the Docker Toolbox; there isn't a need to use Vagrant any longer.

We are going to take the Drupal image that I created in my last post, “Creating a deployable Docker image with Jenkins”, and deploy it. You can find the image on Docker Hub, which is where we pushed it last time. You have several options for deploying Docker images to production, whether manually, or using a service like AWS ECS or OpenShift. Today, I'm going to walk you through a deployment process using Kubernetes, also known simply as k8s.

Why use Kubernetes?

There are an abundance of options out there to deploy Docker containers to the cloud easily. Most of the options provide a nice UI with a form wizard that will take you through deploying your containers. So why use k8s? The biggest advantage in my opinion is that Kubernetes is agnostic of the cloud that you are deploying on. This means if/when you decide you no longer want to host your application on AWS, or whatever cloud you happen to be on, and instead want to move to Google Cloud or Azure, you can pick up your entire cluster configuration and move it very easily to another cloud provider.

Obviously there is the trade-off of needing to learn yet another technology (Kubernetes) to get your app deployed, but you also won't have the vendor lock-in when it is time to move your application to a different cloud. Some of the other benefits of k8s are the large community, all the add-ons, and the ability to have all of your cluster/deployment configuration in code. I don't want to turn this post into the benefits of Kubernetes over others, so let's jump into some hands-on work and start setting things up.

Set up a local cluster

Instead of spinning up servers in a cloud provider and paying for the cost of those servers while we explore k8s, we are going to set up a cluster locally and configure Kubernetes without paying a dime out of our pocket. Setting up a local cluster is super simple with a tool called Minikube. Head over to the Kubernetes website and get that installed. Once you have Minikube installed, boot it up by typing minikube start. You should see something similar to what is shown below:

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

This command set up a virtual machine on your computer, likely using VirtualBox. If you want to double-check, pop open the VirtualBox UI to see a new VM created there. This virtual machine has loaded on it all the necessary components to run a Kubernetes cluster. In k8s speak, each virtual machine is called a node. If you want to log in to the node to explore a bit, type minikube ssh. Below I have ssh'd into the machine and run docker ps. You'll notice that this VM has quite a few Docker containers running to make this cluster.

 $ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
aa766ccc69e2        k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --lo…"   5 minutes ago       Up 5 minutes                            k8s_sidecar_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
6dc978b31b0d        k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v=2…"   5 minutes ago       Up 5 minutes                            k8s_dnsmasq_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
0c08805e8068        k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --insecu…"   5 minutes ago       Up 5 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
f5d725b1c96a        gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   6 minutes ago       Up 6 minutes                            k8s_storage-provisioner_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
3bab9f953f14        k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domain=…"   6 minutes ago       Up 6 minutes                            k8s_kubedns_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
9b8306dbaab7        k8s.gcr.io/kube-proxy-amd64                "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                            k8s_kube-proxy_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
5446ddd71cf5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_storage-provisioner_kube-system_3acd2f39-a637-11e8-894d-0800273ca679_0
17907c340c66        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5498ccf677-hvt4f_kube-system_3abef591-a637-11e8-894d-0800273ca679_0
71ed3f405944        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-dns-86f4d74b45-kb2tz_kube-system_3a21f134-a637-11e8-894d-0800273ca679_0
daf1cac5a9a5        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-proxy-dwhn6_kube-system_3a0fa9b2-a637-11e8-894d-0800273ca679_0
9d00a680eac4        k8s.gcr.io/kube-scheduler-amd64            "kube-scheduler --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
4d545d0f4298        k8s.gcr.io/kube-apiserver-amd64            "kube-apiserver --ad…"   7 minutes ago       Up 7 minutes                            k8s_kube-apiserver_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
66589606f12d        k8s.gcr.io/kube-controller-manager-amd64   "kube-controller-man…"   8 minutes ago       Up 8 minutes                            k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
1054b57bf3bf        k8s.gcr.io/etcd-amd64                      "etcd --data-dir=/da…"   8 minutes ago       Up 8 minutes                            k8s_etcd_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
bb5a121078e8        k8s.gcr.io/kube-addon-manager              "/opt/kube-addons.sh"    9 minutes ago       Up 9 minutes                            k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
04e262a1f675        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-apiserver-minikube_kube-system_2057c3a47cba59c001b9ca29375936fb_0
25a86a334555        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
e1f0bd797091        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-controller-manager-minikube_kube-system_ee3fd35687a14a83a0373a2bd98be6c5_0
0db163f8c68d        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_etcd-minikube_kube-system_a5f05205ed5e6b681272a52d0c8d887b_0
4badf1309a58        k8s.gcr.io/pause-amd64:3.1                 "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0

When you’re done snooping around the inside the node, log out of the session by typing Ctrl+D. This should take you back to a session on your local machine.

Interacting with the cluster

Kubernetes is managed via a REST API; however, you will find yourself interacting with the cluster mainly through a CLI tool called kubectl. We issue kubectl commands, and the tool generates the necessary Create, Read, Update, and Delete requests for us and executes those requests against the API. It's time to install the CLI tool; go check out the docs here to install it on your OS.

Once you have the command line tool installed, it should be automatically configured to interface with the cluster that you just set up with Minikube. To verify, run kubectl get nodes to see all of the nodes in the cluster.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    6m        v1.10.0

We have one node in the cluster! Let's deploy our app using the Docker image that we created last time.

Writing Config Files

With the kubectl CLI tool, you can define all of your Kubernetes objects directly, but I like to create config files that I can commit in a repository and manage changes as we expand the cluster. For this deployment, I'll take you through creating 3 different k8s objects. We will explicitly create a Deployment object, which will implicitly create a Pod object, and we will create a Service object. For details on what these 3 objects are, check out the Kubernetes docs.

In a nutshell, a Pod is a wrapper around a Docker container, and a Service is a way to expose a Pod, or several Pods, on a specific port to the outside world. Pods are only accessible inside the Kubernetes cluster; the only way to access any services in a Pod is to expose the Pod with a Service. A Deployment is an object that manages Pods and ensures they are healthy and up. If you configure a Deployment to have 2 replicas, the Deployment will ensure 2 Pods are always up, and if one crashes, Kubernetes will spin up another Pod to match the Deployment definition.

deployment.yml

Head over to the API reference and grab the example config file https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deployment-v1-apps. We will modify the config file from the docs to our needs. Change the template to look like below (I changed the image, app, and name properties in the yml below):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times
  replicas: 3
  template:
    metadata:
      labels:
        # Apply this label to Pods; the Deployment's
        # label selector defaults to this value
        app: drupal
    spec:
      containers:
      - name: drupal
        # Run this image
        image: tomfriedhof/docker_blog_post

Now it's time to feed that config file to the Kubernetes API; we will use the CLI tool for this:

$ kubectl create -f deployment.yml

You can check the status of that deployment by asking k8s for all Pod and Deployment objects:

$ kubectl get deploy,po

Once everything is up and running you should see something like this:

 $ kubectl get deploy,po
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           3m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          3m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          3m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          3m

service.yml

We have no way of accessing any of those Pods in the deployment. We need to expose the Pods using a Kubernetes Service. To do this, grab the example file from the docs again (https://v1-10.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#service-v1-core) and change it to the following:

kind: Service
apiVersion: v1
metadata:
  # Unique key of the Service instance
  name: service-example
spec:
  ports:
    # Accept traffic sent to port 80
    - name: http
      port: 80
      targetPort: 80
  selector:
    # Load balance traffic across Pods matching
    # this label selector
    app: drupal
  # Create an HA proxy in the cloud provider
  # with an external IP address (only supported
  # by some cloud providers)
  type: LoadBalancer

Create this service object using the CLI tool again:

$ kubectl create -f service.yml

You can now ask Kubernetes to show you all 3 objects that you created by typing the following:

$ kubectl get deploy,po,svc
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/deployment-example   3         3         3            3           7m

NAME                                    READY     STATUS    RESTARTS   AGE
po/deployment-example-fc5d69475-dfkx2   1/1       Running   0          7m
po/deployment-example-fc5d69475-t5w2j   1/1       Running   0          7m
po/deployment-example-fc5d69475-xw9m6   1/1       Running   0          7m

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes        ClusterIP      10.96.0.1       <none>        443/TCP        1h
svc/service-example   LoadBalancer   10.96.176.233   <pending>     80:31337/TCP   13s

You can see under the services at the bottom that port 31337 was mapped to port 80 on the Pods. Now if we hit any node in the cluster (in our case, just the one VM) on port 31337, we should see the Drupal app that we built from the Docker image we created in the last post. Since we are using Minikube, there is a command to open a browser on the specific port of a service: minikube service <service-name>.

$ minikube service service-example

This should open up a browser window and you should see the Installation screen for Drupal. You have successfully deployed the Docker image that we created to a production-like environment.

What is next?

We have just barely scratched the surface of what is possible with Kubernetes. I showed you the bare minimum to get a Docker image deployed on Kubernetes. The next step is to deploy your cluster to an actual cloud provider. For further reading on how to do that, definitely check out the KOPS project.

If you have any questions, feel free to leave a comment below. If you'd like to see a demo of everything I wrote about on the ActiveLAMP YouTube channel, let us know in the comments as well.

Aug 23 2018
Drupal Europe · 2 min read · Aug 23, 2018
Image by LuckyStep @shutterstock
Blockchain promises:
  • to revolutionize publishing, with a new rewarding model in an environment that builds trust and allows community governance.
  • to reshape open source communities, with a better engagement and rewarding system.
  • to free digital identity, thus killing the need for middlemen at the protocol layer.

Blockchain is a universal tool and can be applied in many different areas.

Communities, like the Drupal community, can find new ways to flourish. Even large and risky projects can be financed in new ways, with ICOs (Initial Coin Offerings). Taco Potze (Co-Founder, Open Social) has a 10-year Drupal background and is an expert on communities. He is working on blockchain technology to build better engagement and rewarding systems for communities. Wouldn't that be really nice for us?

See also Taco’s session: ICOs, a revolutionary way to raise money for your company

Publishing and its classic monetization model is challenged. Intermediaries are disrupting the relationship between authors, publishers, and their readers. This is based on a troublesome business model of massive tracking and profile building that turns our engagement into advertising money. At the same time, poor content and fake news have become a threat to our society. Gagik Yeghiazarian (CEO, Co-Founder Publiq) is looking for new ways to address these problems, with a nonprofit, distributed media platform based on blockchain.

See also Gagik’s session: Blockchain Distributed Media — A Future for good publishing

The Internet is broken, and blockchain can fix it. The biggest promise of blockchain is to make middlemen obsolete by creating trusted identities in an open protocol, breaking the monopoly of the middlemen and retaining a free web. We recognize Airbnb, Amazon, eBay, Netflix, and iTunes as middlemen; we understand that when we buy or book, they get their share. Google, Facebook, and YouTube are other huge monopoly middlemen; they get their share based on our attention and personal data, and they know how to turn our attention into dollars by selling it to advertisers. Ingo Rübe (CEO, Bot Lab) is working on a protocol which will allow people to gain control of their digital identity. It will be called the KILT Protocol. (Ingo is well known in the Drupal community and a member of Drupal's Advisory Board. As a former CTO of Burda, he was the initiator of the Drupal Thunder distribution.)

Our Panel will be moderated by Audra Martin Merrick, a board member of Drupal Association.

signed
Drupal Europe
Your Track Chairs

Aug 20 2018

Helping content creators make data-driven decisions with custom data dashboards

Greg Desrosiers, Massachusetts Digital Service · 4 min read · Aug 20, 2018

Our analytics dashboards help Mass.gov content authors make data-driven decisions to improve their content. All content has a purpose, and these tools help make sure each page on Mass.gov fulfills its purpose.

Before the dashboards were developed, performance data was scattered among multiple tools and databases, including Google Analytics, Siteimprove, and Superset. These required additional logins, permissions, and an advanced understanding of how to interpret what you were seeing. Our dashboards take all of this data and compile it into something that's focused and easy to understand.

We made the decision to embed dashboards directly into our content management system (CMS), so authors can simply click a tab when they’re editing content.

GIF showing how a content author navigates to the analytics dashboard in the Mass.gov CMS.

The content performance team spent more than 8 months diving into web data and analytics to develop and test data-driven indicators. Over the testing period, we looked at a dozen different indicators, from pageviews and exit rates to scroll-depth and reading grade levels. We tested as many potential indicators as we could to see what was most useful. Fortunately, our data team helped us content folks through the process and provided valuable insight.

Love data? Check out our 2017 data and machine learning recap.

We chose a sample set of more than 100 of the most visited pages on Mass.gov. We made predictions about what certain indicators said about performance, and then made content changes to see how it impacted data related to each indicator.

We reached out to 5 partner agencies to help us validate the indicators we thought would be effective. These partners worked to implement our suggestions and we monitored how these changes affected the indicators. This led us to discover the nuances of creating a custom, yet scalable, scoring system.

Line chart showing test results validating user feedback data as a performance indicator.

For example, we learned that a number of indicators we were testing behaved differently depending on the type of page we were analyzing. It’s easy to tell if somebody completed the desired action on a transactional page by tracking their click to an off-site application. It’s much more difficult to know if a user got the information they were looking for when there’s no action to take. This is why we’re planning to continually explore, iterate on, and test indicators until we find the right recipe.

Using the strategies developed with our partners, we watched and, over time, saw the metrics move. At that point, we knew we had a formula that would work.

We rolled indicators up into 4 simple categories:

  • Findability — Is it easy for users to find a page?
  • Outcomes — If the page is transactional, are users taking the intended action? If the page is focused on directing users to other pages, are they following the right links?
  • Content quality — Does the page have any broken links? Is the content written at an appropriate reading level?
  • User satisfaction — How many people didn’t find what they were looking for?
Screenshot of dashboard results as they appear in the Mass.gov CMS.

Each category receives a score on a scale of 0–4. These scores are then averaged to produce an overall score. Scoring a 4 means a page is checking all the boxes and performing as expected, while a 0 means there are some improvements to be made to increase the page’s overall performance.
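
For example, a page scoring 4 for findability, 3 for outcomes, 2 for content quality, and 3 for user satisfaction would average out to an overall score of 3.0.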

All dashboards include general recommendations on how authors can improve pages by category. If these suggestions aren’t enough to produce the boost they were looking for, authors can meet with a content strategist from Digital Services to dive deeper into their content and create a more nuanced strategy.

GIF showing how a user navigates to the “Improve Your Content” tab in a Mass.gov analytics dashboard.

We realize we can’t totally measure everything through quantitative data, so these scores aren’t the be-all, end-all when it comes to measuring content performance. We’re a long way off from automating the work a good editor or content strategist can do.

Also, it’s important to note these dashboards are still in the beta phase. We’re fortunate to work with partner organizations who understand the bumps in the proverbial development road. There are bugs to work out and usability enhancements to make. As we learn more, we’ll continue to refine them. We plan to add dashboards to more content types each quarter, eventually offering a dashboard and specific recommendations for the 20+ content types in our CMS.

Aug 14 2018

Drupal Europe: Publishing + Media Special Focus

Drupal Europe · 2 min read · Aug 14, 2018

What industries come to mind when you hear blockchain? Banking? Trading? Healthcare? How about publishing? At Drupal Europe, publishers will gain insight into the potential blockchain technology offers and learn how they can benefit. Meet Gagik Yeghiazarian, founder of the nonprofit foundation Publiq, and learn how he wants to fight fake news and build a censorship-resistant platform — using blockchain.

The publishing world is changing. Publishers no longer solely control media distribution. Big players like Facebook and Google are middlemen between publishers and their readers, and technology built to entice publishers — Google's AMP (Accelerated Mobile Pages) and Facebook Instant Articles — has strengthened social platforms as distribution channels. Additionally, publishers have lost the money-making classifieds business as employment and real estate markets create their own platforms and portals to reach the audience.

Photo by Ian Schneider on Unsplash

As a result of these developments, publishers are losing direct relationships with their readers as well as the critical advertising revenue which traditionally supported editorial and operational costs. The platforms act as middlemen, using publishers' content to collect data and sell it to advertisers. The publishers are left out in the cold.

Critically, publishers are also facing a crisis of confidence. As social platforms are used to spread fake news and poor content, mistrust in journalism grows.

The nonprofit foundation Publiq wants to face these challenges with a blockchain-powered infrastructure. It aims at removing unnecessary intermediaries from the equation and helping to create an independent, censorship-free environment. Gagik Yeghiazarian, CEO and Co-Founder of Publiq, is convinced: “Blockchain infrastructure allows content creators, readers and other participants to build a trusted relationship.”

You can learn more about Publiq and its blockchain infrastructure at Drupal Europe in Darmstadt: Gagik Yeghiazarian’s session “Blockchain Distributed Media — A Future for good publishing” will give you a glimpse into this new technology and a real-world application of it.

While you’re at Drupal Europe, be sure to check out the exciting blockchain panel discussion where Gagik, Ingo Rübe of Botlabs, and Taco Potze of Open Social, will share insights and use cases for blockchain technology. Don’t miss this!

Drupal Europe
Publishing & Media — Track Chairs

Aug 13 2018

Drupal Europe: Publishing + Media Special Focus

Drupal Europe · 2 min read · Aug 13, 2018

Drupal Europe offers up a plethora of cases and solutions to help you with your DAM integration.

Multichannel publishing by Oleksiy Mark on Shutterstock

With so much to organize and store, publishers typically use Digital Asset Management (DAM) systems to manage their assets. Add multiple channels to the mix and you have big operational hurdles. Thanks to the Media Initiative, Drupal now has a well-defined ecosystem for media management, and its architecture is designed to play well with all kinds of media, media management systems, and the web services that support them. The system is highly adaptable — the media management documentation outlines 15 modules shaping Drupal's new ecosystem for media assets.

The Drupal Europe program offers several sessions to help you learn more about solutions building on this foundation. Case studies of demanding media management projects around the publishing industry include:

Drupal Europe
Publishing & Media — Track Chairs

Jul 23 2018
Moshe Weitzman, Massachusetts Digital Service · 4 min read · Jul 23, 2018

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor. Photo credit.

The heart of our environment is the docker-compose.yml. Here it is, then read on for a discussion about it.

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient. See our .env.example file:
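
As a hedged sketch of what such a file can contain (only the variable names VOLUME_FLAGS and PRIVATE_KEY come from this post; the values are illustrative):

# Flags applied to bind mounts (e.g. "cached" improves performance on macOS)
VOLUME_FLAGS=cached

# SSH private key to share with the containers via Docker Secrets
PRIVATE_KEY=~/.ssh/id_rsa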

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database. Next, the build does:
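
Roughly, those steps amount to the following (the container name and image repository here are hypothetical):

# Freeze the freshly imported, sanitized database into an image
docker commit mysql massgov/mysql-sanitized:latest

# Push the refreshed image to the private repository
docker push massgov/mysql-sanitized:latest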

That is, we commit and push the refreshed image to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume. Here is the Dockerfile for the mysql image.
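
As a hedged sketch of what such a Dockerfile can look like (the datadir path is illustrative): the official mysql image declares VOLUME /var/lib/mysql, so data written there is excluded from image layers, and pointing mysqld at a directory inside the image keeps the imported database in the pushed image.

FROM mysql:5.7

# Use a datadir that is NOT declared as a VOLUME upstream, so the
# imported database is baked into the image layers we push
RUN mkdir -p /var/lib/mysql-inside && chown -R mysql:mysql /var/lib/mysql-inside
CMD ["mysqld", "--datadir=/var/lib/mysql-inside"]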

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets in order to safely share an SSH key from host to the container. See this answer and mass_id_rsa in the docker-compose.yml above. Also note the two files below which are mounted into the container:

  • Configure SSH to use the secrets file as the private key
  • Automatically run ssh-add when logging into the container

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its website is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.
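
For a sense of what that looks like in docker-compose.yml, here is a sketch using Traefik 1.x label syntax (the service definition is abbreviated):

services:
  drupal:
    # ... image, volumes, and so on ...
    labels:
      # Route www.mass.local through Traefik to this container's port 80
      - traefik.frontend.rule=Host:www.mass.local
      - traefik.port=80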

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.
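
A minimal .ahoy.yml in this spirit (the command set is illustrative, not our actual file):

ahoyapi: v2
commands:
  up:
    usage: Start the local environment
    cmd: docker-compose up -d
  down:
    usage: Stop and remove the containers
    cmd: docker-compose down
  drush:
    usage: Run drush inside the drupal container
    cmd: docker-compose exec drupal drush "$@"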

Our development environment comes with 3 fine extras:

  • Blackfire is ready to go — just run ahoy blackfire [URL|DrushCommand] and you’ll get back a URL for the profiling report
  • Xdebug is easily enabled by setting the XDEBUG_ENABLE environment variable in a developer’s .env file. Once that’s in place, the PHP in the container will automatically connect to the host’s PHPStorm or other Xdebug client
  • A chrome-headless container is used by our suite which incorporates Drupal Test Traits — a new open source project we published. We will blog about DTT soon

Of course, we are never satisfied. Here are a couple issues to tackle:

Jun 27 2018
Drupal Europe · 3 min read · Jun 27, 2018

Community. Sharing. Helping. This is the spirit of Drupal. These things bind us all together. Be a part of it by joining us during Drupal Europe between 10–14 September 2018 in Darmstadt, Germany.

photo credit Susanne Coates @flickr

The track dedicated to Social + Non-Profit will gather ambitious life stories about helping others and projects whose purpose is to invest everything in making the world a better place. You will have the opportunity to meet colleagues from your field of interest and join forces, learn how to use pre-configured Drupal distributions, and get inspired by ambitious social impact projects built with Drupal. You will also learn how Drupal can be used to ensure accountability, trustworthiness, honesty, and openness to every person who has invested time, money, and faith in a non-profit organization. Talk and share ideas, learn from each other, improve, innovate … and take a leap forward. There is a lot to learn, no matter your technical skill level. Whether you are a developer or simply someone with a big heart, you will surely find something that inspires you.

Interested in attending? Buy your ticket now at https://www.drupaleurope.org/tickets.

We are looking for submissions on various topics. Here are some ideas for sharing your experience with the rest of the world.

  1. Every nonprofit organization must apply the 3 E’s: Economy, Efficiency, Effectiveness. Economy forces you to handle your project on a low budget, which is almost always the case with non-profit organizations. Efficiency is also required, given the limited resources available to most non-profit organizations. Effectiveness ensures you get the job done and complete your targets. How are you doing that? What tools and practices ensure this?
  2. We live in a world that is changing every day, and technology is a big part of it. What are the new technologies you integrate in social projects? What do you need, and what do you find on the market? How is Drupal helping you achieve your goals?
  3. Transparency, accountability, and full disclosure on operations are a must for all non-profit organizations. People will donate to and support campaigns only if they know exactly where the money goes and how things are handled. This way, organizations ensure their credibility in front of the world. How do you technically implement this?
  4. A lot of people talk about making the world a better place. But talking is not enough. You have to take action! How do you plan to do it? How do social activities raise the level of engagement in your community? How are people’s lives improved by your actions?
  5. Non-profit work is done mainly from the heart. Volunteering is the key word. What are your life stories about helping others, your inspirational first-hand experiences? Why, what, and how did you do it? What drives you? What are your goals?

We look forward to your submission sharing your experience with the other attendees.

See you in Darmstadt!

As you’ve probably read in one of our previous blog posts, industry verticals are a new concept being introduced at Drupal Europe, replacing the summits that typically took place on Monday. At Drupal Europe, these industry verticals are integrated with the rest of the conference (same location, same ticket) and provide more opportunities to learn and exchange within your industry throughout three days.

Now is the perfect time to buy your ticket for Drupal Europe. Session submission is only open for a few more days, so please submit your sessions and encourage others who have great ideas.

Please help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

To recommend speakers or topics please get in touch at [email protected].

Drupal is one of the leading open source technologies empowering digital solutions in the government space around the world.

Drupal Europe 2018 brings over 2,000 creators, innovators, and users of digital technologies from all over Europe and the rest of the world together for three days of intense and inspiring interaction.

Drupal Europe will be held in Darmstadtium in Darmstadt, Germany — which has a direct connection to Frankfurt International Airport. Drupal Europe will take place 10–14 September 2018 with Drupal contribution opportunities every day. Keynotes, sessions, workshops and BoFs will be from Tuesday to Thursday.

Jun 26 2018
Drupal Europe · 4 min read · Jun 26, 2018
photo: Paul Johnson @ flickr

Drupal is our business.

Whether you are a freelancer, a two-person shop, or a hundred-plus agency, Drupal is vital to our success in growing and supporting our business.

The business ecosystem is changing rapidly, making it a necessity for agency leaders, managers, and advisors to focus on a multitude of challenges and opportunities.

Understanding how the marketplace is evolving, driving innovation, fostering the right company culture, and adopting efficient project management methodologies, are all challenges faced by businesses today.

We all want to transform our businesses by working with the smartest team, creating and delivering amazing projects, and having ideal customers lining up to work with us.

No Drupal conference would be complete without in-depth discussions and debates about these challenges and more.

The Agency Business track will provide insight, support and real stories from people running businesses and managing projects. Learn about other people’s experiences, and get tips and ideas on how to tackle the challenges faced in your business or project.

Photo: Michael Cannon @ Flickr

Growing and scaling your business can be a tricky and daunting task. We need to consider strategies for how to grow our businesses, and how to do so sustainably.

With increased competition from both other agencies and other platforms, we need to look at not only how we generate new leads for our businesses, but also how we convince potential clients that Drupal is the best, and that we are the best.

What is the right company culture for my business? How can I better lead my agency through the challenges ahead? How can I provide good leadership to my team? How can we grow and scale our business, without losing our company culture along the way? These are just some of the questions we will look to answer in the Agency Business track.

Project management is a bit of a juggling act, with many different needs and tasks that must be taken care of simultaneously. We’re always on the lookout for ways to increase a project’s effectiveness and efficiency, while reducing the risk of it getting out of control. Let’s share our experiences and ideas on how we can improve project planning, better manage timelines and budgets, and keep staff motivated, all while keeping clients happy and engaged in the process.

Markets change faster and faster, and ours is no exception. We need to adapt our products and offerings to stay competitive and minimize our business risks. Perhaps that means diversifying your service offerings, perhaps it means developing a product, perhaps it means extending into new markets or verticals. However, we also need to consider how to keep clients happy and how to continue to meet their changing needs through innovation and/or diversification.

At Drupal Europe, we want to ensure that attendees get the most from this track through highly valuable and insightful sessions. We are looking for speakers to openly and honestly share stories about their challenges and how they solved them. We want to hear about your experiments, successes and failures, process discoveries, strategies, and tactics. We want real-life learnings, supported by facts and figures — prove to us that your way is best.

Session submissions are open and will close on 30 June 2018.

Whatever your experience, whether it be running a small 2-person operation, scaling to 30 and beyond, or managing projects and project teams, we want to hear from you. Your experience and insight are invaluable, and we know others will think so too.

Come to Drupal Europe and share your experiences with us — submit a session to the Agency Business track today!

Do you know someone who could be a great speaker? Or perhaps you know someone who has an interesting story to share? If so, please get in touch with the program team at [email protected].

And don’t forget to help us to spread the word about this awesome conference. Our hashtag is #drupaleurope.

We look forward to seeing you in Darmstadt!



About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
