Mar 04 2020

Make your life easier by using the terminal to start, stop, or restart the Apache 2 web server on macOS by simply executing 'apachectl' commands.

Start Apache server command:

sudo /usr/sbin/apachectl start

or

sudo apachectl start

Stop Apache server command:

sudo /usr/sbin/apachectl stop

or

sudo apachectl stop

Restart Apache server command:

sudo /usr/sbin/apachectl restart

or

sudo apachectl restart

It's not always necessary to check Apache's status first; you can simply run the restart command mentioned above without worrying about whether the server is running or stopped.

The restart command is also very helpful after you make changes to your Apache configuration in httpd.conf. It's often worth restarting the server before you start debugging why a website isn't responding or why something is wrong with your Apache/PHP setup.
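
If you want to rule out configuration mistakes before restarting, Apache also ships a syntax check you can run first (it prints "Syntax OK" when httpd.conf parses cleanly):

sudo apachectl configtest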

Mar 04 2020

Historically, upgrading to a new major version of Drupal has been a time-consuming and expensive process involving first building an entirely new website in Drupal (content modeling, module installation, setting configuration, creating a new theme), and then creating a detailed process to migrate content from the previous Drupal website.

Then Drupal 8 came along five years ago with a new promise: Making Drupal upgrades easy forever.

Here’s how it works: Minor versions (8.1, 8.2, 8.3, etc.) are released every six months. They include bug fixes, new features, and API improvements. Some APIs may also get deprecated between minor versions. Major versions (9.0, 10.0, 11.0, etc.) are released when external dependencies (Symfony 3, etc.) reach end-of-life and new versions are needed that would break backwards compatibility.

Major versions are similar to a minor version release, with two distinctions:

  1. Updates to external dependencies (new PHP version, new Symfony version, etc)
  2. Removing all parts of the API that have been marked as deprecated for this major version.

Assuming the environment is compatible, a website that doesn’t have any deprecated API calls in its modules / themes / custom code can be upgraded to the next major version as easily as a minor version upgrade.

Chart: Drupal 9.0 API = Drupal 8.9 API minus deprecated parts, plus updated third-party dependencies (see the Drupal 9 guide on Drupal.org).

Getting your Drupal 8 site ready for Drupal 9

To remove deprecated code today, you would:

  1. Upgrade all contrib modules/themes to the latest version available
  2. Run drupal-check or Upgrade Status to get a list of all deprecations in your project
  3. For each project in the report, check its corresponding drupal.org page for Drupal 9 plans, which should point to the issue summarizing that project's Drupal 9 readiness plans and efforts.
  4. Apply the patches that are already available.
  5. Run drupal-check or Upgrade Status again to get a list of the remaining deprecations in your project
  6. For every deprecation found:
    1. Search for documentation (the Upgrade Status module helps find it)
    2. Manually fix the code

The current way of fixing deprecated API use involves a lot of manual and repetitive work. If one type of deprecation appears multiple times in your code, you will need to figure out how to fix it for each occurrence.
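
Before any of that manual work, generating the deprecation report itself is a quick command-line step. As a sketch of steps 2 and 5 above (paths are examples; drupal-check can also be installed globally), you might run:

composer require mglaman/drupal-check --dev
vendor/bin/drupal-check web/modules/custom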

Chart: All Drupal.org project errors by fixability

There are approximately 33,000 occurrences of deprecated code in all contributed modules today. That’s a lot of repetitive manual work to get ready for Drupal 9!

The light at the end of the tunnel

While Googling various topics regarding Drupal upgrades, I stumbled upon a proof of concept called Drupal 8 Rector, by Dezső Biczó at Pronovix. I discovered the brilliant tool that powered it, called Rector.

What is Rector?

Rector automates PHP code upgrades.
Rector understands PHP, which allows it to handle complex edge cases.

Rector PHP, by @VotrubaT, is one of the most useful projects that PHP developers overlook. It automates the upgrade and refactoring of your code (e.g. upgrades your Symfony 3 app to Symfony 4). @symfony_en

How does Rector work?

  1. Parses PHP code into an abstract syntax tree (AST)
  2. Runs a set of upgrade rules
  3. Saves all changes back to a PHP file

What can Rector already do?

  • Rename classes, methods, properties, namespaces or constants
  • Upgrade code from PHP 5.3 to PHP 7.4
  • Upgrade code from Symfony 3 to Symfony 4
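
To give a feel for how Rector is typically invoked (a generic sketch; the exact configuration for Drupal-specific rules depends on the Drupal Rector setup described below), a dry run previews the changes without writing them:

vendor/bin/rector process web/modules/custom/mymodule --dry-run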

Drupal 8 Rector was the missing bridge to enable automated code upgrades from Drupal 8 to 9. While Dezső unfortunately did not have time to continue working on it, the tremendous potential value was clear. Palantir was very interested in taking it further and making development for it easier, as we saw this as a tool that could save us time (and avoid creating new bugs) compared to manually fixing modules that our clients will need for Drupal 9. So we started cleaning up the repository, pared it down to its essentials, and published it under a new name: Drupal-Rector.

What is Drupal Rector?

A set of Rector rules. Each rule upgrades code for a deprecated API use in Drupal.

When can I start using Drupal Rector?

Today! Just go to https://github.com/palantirnet/drupal-rector or run

composer require palantirnet/drupal-rector --dev

Image of terminal window running composer command for drupal-rector

What's the status of Drupal Rector?

It has one Rector rule so far that fixes only one deprecation: `drupal_set_message` (originally implemented by Dezső in the proof of concept, thank you!). This deprecation alone occurs over 8,000 times in various contributed modules and accounts for roughly a quarter of all deprecated API uses!
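
To illustrate the kind of mechanical change this rule automates (a simplified sketch, not the rule's literal output), the deprecated call and its messenger-service replacement look like this:

// Deprecated function call:
drupal_set_message('Settings saved.');

// Replacement using the messenger service:
\Drupal::messenger()->addMessage('Settings saved.');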

https://dev.acquia.com/drupal9/deprecation_status/graphs

Our immediate goal 

Create Rector rules for the 15 most popular deprecations. Those 15 rules will cover 50% of all Drupal deprecations!

What are potential future goals?

We are exploring the possibility of Drupal.org integration, which would allow Drupal-Rector to be run on drupal.org projects and automatically create issues with patches. The ultimate goal would be for each API change to ship with a new Rector rule, so projects could stay evergreen with little manual effort, as the rules would be run on them continually.

Help us make it a better tool!

See https://github.com/palantirnet/drupal-rector. Issues are maintained on drupal.org at https://drupal.org/project/rector. We also plan to be leading contribution activities at upcoming Drupal events, including:

  • MidCamp in Chicago on March 18-21, 2020
  • DrupalCon in Minneapolis on May 18-22, 2020

We hope to see you there!

Additional Resources:

Special thanks to Jennifer Shaal, Adam Bergstein, Dan Montgomery, Gábor Hojtsy, AmyJune Hineline and George DeMet for their contributions to this post.

Status of Drupal 9.0.0-beta1: a March beta is still possible

Mar 03 2020
xjm

This week is the first target release window for the Drupal 9 beta.

We’ve made significant progress on the beta requirements during the last couple of months thanks to the tremendous amount of work by the community. Based on the outstanding tasks in the meta issue, we are close to completing beta requirements, but not close enough to release it this week.

When will Drupal 9.0.0-beta1 be released?

Given how close we are to completing the beta requirements, we are considering releasing the beta in mid-March if the requirements are complete by March 13. If we do release the beta in 1-2 weeks, Drupal 9.0.0 will still be scheduled for release on June 3, 2020. We will make a final announcement by March 16 about whether there will be a June release.

If any must-have issues remain unresolved by March 13, we will move the beta target window to the first week of May, and Drupal 9.0.0 will be scheduled for August 3, 2020.

This does not affect the expected release date of Drupal 8.9.0 (scheduled for June 3, 2020) nor that of Drupal 9.1.0 (planned for December 2, 2020). The Drupal 8 and 7 end-of-life is also still November 2021.

We need your help to meet the beta deadline, whether it is March 13 or April 28! Drupal 9 readiness meetings are every Monday at 7pm UTC in the #d9readiness channel on https://drupal.org/slack. Help us with the issues below.

Current status and issues left for Drupal 9.0.0-beta1

Dependency updates

All PHP dependencies (Symfony, Laminas, Twig) have been updated to the versions we intend to use for Drupal 9.0.0, although we will continue to keep up to date with bugfix releases.

Nearly all JavaScript and CSS dependencies have been updated, with just two issues to go:
jQuery Cookie (needs review and testing from JavaScript developers!) and Normalize.css (RTBC and awaiting committer review).

Upgrade paths

In addition to deprecations, we are improving and simplifying in-place updates with update.php to ensure the Drupal 8 to 9 update is smooth and reliable. Four issues remain and could use additional help from experienced contributors:

We are also ensuring that all known critical Drupal 8 upgrade path bugs that may prevent updates from older Drupal 8 versions are fixed in 8.8 and 8.9. Only three remain. These technically difficult issues are critical for Drupal's data integrity:

Platform requirement changes

Drupal 9 has already raised the minimum PHP version to 7.3. We also want to increase MySQL, PostgreSQL, and SQLite requirements. This includes:

  • MySQL, MariaDB, and Percona (required prior to beta1)
  • PostgreSQL (Beta deadline; help needed! If you use Postgres, document what versions are available from your hosting provider.)
  • SQLite (Might be completed during the beta phase)

Drupal 9 base theme API

We want to add an up-to-date Drupal 9 version of the 'Stable' base theme. This issue and the related issues to decouple core themes from Drupal 8 base themes could use review and feedback from theme contributors.

Once all the above issues are complete, Drupal 9 is beta-ready. That means that we will have completed all of the significant code changes we intend to make, and that it should be possible to test updates of real sites against it if contributed and custom code has been ported. Site owners can use the Upgrade Status module to check the Drupal 9 readiness of their modules and themes; see the guide on updating to Drupal 9 for more information.
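
If you want to try that check on your own site, Upgrade Status can typically be added and enabled like this (a sketch; assumes a Composer-managed site with Drush available):

composer require drupal/upgrade_status
drush en upgrade_status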

Other issues to complete prior to final release of Drupal 9

There is a second category of issues that we want to complete before Drupal 9.0.0 is released. Since we have fixed release windows, these also need to be either complete or close to completion, especially for the first beta window, since this gives the shortest amount of time to finish them.

Complete migrations for multilingual content so that all Drupal 7 sites can migrate before Drupal 7 reaches EOL.

Drupal.org and Drupal core should be able to fully handle contributed projects that are compatible with both 8.x and 9.x and not assume core compatibility based on the version or branch name. This requirement spans multiple projects and areas including project listings, the Composer façade, update.xml from Drupal.org, and localization files. Most of the minimum required changes for Drupal core and Drupal.org are already complete; however, there remains one outstanding core regression related to module compatibility as well as a number of followup issues.

Finally, Drupal 9 currently relies on Node.js 8 for transpiling and linting frontend assets. This is core-developer-facing only so we may raise the Node.js requirement after beta.

Mar 03 2020

Olivero is a new theme that aims to be the new default front-end theme for Drupal 9. The theme's inception took place in a hotel lobby during DrupalCon Seattle and has now grown into a full Drupal core initiative (read about its inception here).

From Design to Proof of Concept

The Drupal 9 theme initiative started with stakeholder meetings and the design process (learn more here). Once the designs were close to being final, I started working on translating the designs into markup, styles, and JavaScript within a static proof of concept.

While working on the proof of concept, Putra Bonaccorsi was laying the theming groundwork by creating boilerplate code for the theme and transpiling of the CSS and JavaScript.

Proof of concept

The process of creating a proof of concept has been invaluable. The overarching goals are to validate major DOM architectural decisions and get sign-off from the Drupal core accessibility team on major decisions before moving into templating. Additionally, the proof of concept has validated and influenced design decisions across multiple devices and viewports.

You can view the proof of concept at https://olivero-poc.netlify.com, but note that progress is rapid, and changing by the hour!

Bumps in the Road

When the Olivero team was creating the initial schedule, the plan was to get the theme into Drupal 9.1, because the first version of Drupal 9 (9.0) was going to be the same as the last version of Drupal 8 — but with deprecated code removed.

However, during the Driesnote at DrupalCon Amsterdam, Drupal project lead, Dries Buytaert, stated that he wanted to get the theme into the initial release of Drupal 9.0. This pushed up the timeline significantly!

Balancing between onboarding and mentoring new developers versus rapidly closing issues has proven to be delicate. Many contributors want to help with the initiative; however, because they are volunteers (as are the core team), they are not on a timetable for closing issues.

Because of the tight timeline, I’ve been leaning toward the latter (rapidly fixing issues).

Florida DrupalCamp Sprint

We decided that the initiative needed a shot in the arm, so we put on a mini code-sprint within the annual contribution sprint at Florida DrupalCamp last month. Because Putra couldn’t make it down for the actual conference, Florida DrupalCamp sponsored her to fly in for the sprint.

During the two-day sprint, we accomplished the following:

  • Cleaned up the PostCSS build process
  • Integrated a default database export with indicative content into the Tugboat build process.
  • Copied some of the latest scripts/JS and CSS from the PoC repo into the Drupal theme.
  • Exported block configs for the theme's initial install for the following core blocks:
    • Primary Menu
    • User Account Menu
    • Powered By Drupal
    • Content
  • The Primary Menu block is themed and configured to expose the drop-down menu by default.
  • The Secondary menu/User Account block is themed and configured.
  • The "Powered by" block is themed and configured.
  • The "Get Started" page has been created and will need to be revisited.
  • Latest preview is viewable on Tugboat: https://8-x-1-x-dev-2t4d1epwkj8tgwxduizmixhzevqwzi8w.tugboat.qa

Olivero's focus states (above) were heavily worked on at the Florida DrupalCamp sprint.

If you install the theme now, you'll be able to see the different regions and blocks configured; however, please note that there is still more theme development to be done before the beta release.

Current Status

The work of the theme includes the proof of concept and the actual theming.

Proof of Concept

We’re working on styling that will enable site owners to choose an “always-on” mobile theme in the event that the primary navigation has more items than the space can manage.

We’re also knocking out various accessibility issues—especially focus states in Windows high contrast mode, which are trickier than expected.

Drupal Theme

The Drupal theme looks close to the designs! Work continues on the search integration into the header, in addition to standard theming.

What’s Next?

We hope to pull in the final proof of concept markup, styling, and JavaScript into the theme toward the end of next week (around March 13th, 2020). At that point, work on the proof of concept will cease, and new styling fixes will go into the theme.

There’s still so much to do! We need:

  • Support for Drupal core features such as:
    • Book module
    • Forum module
    • Embedded media
    • Various Views rows styles (grid, etc.)
  • More accessibility
  • Internationalization
  • Tests
  • Coding standards
  • And more!

Standing on the Shoulders of Giants

I also want to note that the rapid pace of development would not have been possible without the contributions of Claro (Drupal 9’s new administrative theme), and Umami (the theme for the out of the box initiative).

These themes blazed the way by including support for technologies in core such as web fonts, PostCSS, and the overall core theme architecture.

Completion

Olivero was initially slated for inclusion in core in Drupal 9.1. That’s still the most likely scenario. That said, there’s a possibility that Drupal may shift the 9.0 beta deadline to the end of April. If that’s the case, there is a possibility to submit a core patch beforehand.

To commit by this time, we need to submit the patch a minimum of a few weeks ahead of time to give core committers time to review (and even that might not be enough time).

We’re currently working on [META] Add new default Olivero frontend theme to Drupal 9 core to define the minimum beta requirements to submit to core. Expect this issue to be more fleshed out within the coming days.

After Completion

After the theme is in core, we still would love to add additional features such as support for accessible color schemes, dark mode, etc. However, the first step is finishing up the minimal viable product for inclusion in core.

Join Us!

The Olivero team meets on Drupal Slack every Monday at 3 pm UTC (10 am ET) in the #d9-theme channel on drupal.slack.com. We post the agendas in the Olivero issue queue beforehand.

We need people to pick up issues and run with them, but keep in mind that for the next week or two, the primary styling is still in progress within the proof of concept on Github.

Upcoming Events and Sprinting

I will be attending Drupal Dev Days in Ghent, Belgium, April 6-10, 2020, and will be sprinting the entire time. We hope to work on getting the code ready for Drupal core inclusion by that time.

Putra and I (along with the majority of the Lullabot team) will be attending DrupalCon Minneapolis in May 2020. We will be heavily sprinting on Olivero during this time (especially on Friday).

Querying the migrate database from custom code

Mar 03 2020

Getting the most from Jira: 5 top tips

Mar 03 2020

Jira is the Agile project management software that we use to help us to plan, manage and release our projects.  It’s brilliant as we can integrate agile software development with test case management and track our projects each step of the way.

We’ve managed to effectively streamline our processes by using Jira as it helps us to break down a project into Epics, User Stories and Tasks and our clients find it really straightforward to use.  But we want to make sure that our clients are getting the most from Jira so we’ve put our heads together to come up with a list of our top tips and tricks.

Woman working at computer

1. Drag and Drop

Rather than right-clicking on a ticket and selecting ‘send to (sprint x, Backlog)’, simply drag and drop the ticket to where it needs to be.  This will save you lots of time and it helps you to drag the tickets into a priority order at the same time. 

2. Use Keyboard Shortcuts

There are so many handy keyboard shortcuts within Jira and you can view the list here.  Our favourite is the ‘Quick Search’ shortcut, which you can find by simply pressing ‘/’ on your keyboard! 

3. Click on the Photo Bubbles

Within the Backlog or Active Sprint view, you can click on the photo bubbles to filter the list of tickets by assignee.  This is handy if you want to see, at a glance, which tickets are assigned to you and don’t want to have to scroll through the whole list.

4. Open and Close the Sidebar

If you use a laptop with a small screen like the majority of the Microserve team, you’ll like this tip! In the left-hand menu, hover and click on the ‘collapse’ arrow and this will close the sidebar.  Hover and click again and it opens once more.  This is super useful when you’re viewing tickets and need some more space to read the details properly.

5. Obtain a URL for a ticket

If you want a simple way to obtain a URL for a ticket, follow these steps!  Click on a ticket to either view or open it in another tab and find the ticket number at the top left of the screen.  Hover over the ticket number and a small copy icon should appear.  Click on this and paste it - sorted!

We hope that these top tips will help you to use Jira more effectively by making it easier to log new tickets, track existing work and know who’s assigned to which ticket.
 

Enabling the Media Library in Drupal 8.7

Mar 02 2020

As of Drupal 8.7, the Media and Media Library modules can be enabled and used out-of-box. Below, you'll find a quick tutorial on enabling and using these features.

Out-of-box, before Media and Media Library

In the past there were two different ways to add an image to a page.

  1. An image could be added via a field, with the developer given control over its size and placement:
     

    Image field before media library
  2. An image could be added via the WYSIWYG editor, with the editor given some control over its size and placement:
     

    Image field upload choices screen

A very straightforward process, but these images could not be reused, as they were not part of a reusable media library.

Reusing uploaded media before Drupal 8.7

Overcoming image placement limitations in prior versions of Drupal required the use of several modules, a lot of configuration, and time. Sites could be set up to reference a media library that allowed editors to select and reuse images that had previously been uploaded, which we explained here.

This was a great time to be alive.

What is available with Media Library

Enabling the Media and Media Library modules extends a site's image functionality. First, ensure that the Media and Media Library core modules are enabled. 

Enable media library in drupal
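
If you manage the site from the command line, the same modules can be enabled with Drush (a sketch; assumes Drush is installed):

drush en media media_library -y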

A media entity reference field must be used with the Media Library. It will not work with a regular image field out-of-box.

Image field on manage display page

On the Manage form display page, select "Media library" widget. 

Media library widget on manage display page

On the "Node Add" and "Node Edit" forms, you’ll see the below difference between a regular image field and a field connected to the media library.

Media library field on node edit

Click on “Add media” and you’ll see a popup with the ability to add a new image to the library or to select an image that is already in the library.

Media field grid

With a simple configuration of the field, if multiple media types are allowed in the field, you’ll see vertical tabs for each media type.

Media grid with multiple media types

WYSIWYG configuration

The WYSIWYG editor requires a few steps when configuring the media library for a specific text format. First, a new icon will appear with a musical note overlapping the image icon. This should be added to the active toolbar and the regular image icon should be moved to the available buttons.

wysiwyg toolbar configuration

Under “Enabled filters,” enable “Embed media."  Under the filter settings, vertical tab settings can be chosen for media types and view modes. Once that configuration is saved, you’ll see on a WYSIWYG editor that you have the same popup dialog for adding a new image to the media library, or selecting an already-uploaded image.

wysiwyg media configuration

Once you are on a "Node Add" or "Node Edit" page with a WYSIWYG element, you’ll see the media button (image icon plus musical note).

Media button on wysiwyg editor

Clicking on the media button brings up the same, familiar popup that we saw earlier from the image field:

media library grid

This article is an update to a previous explainer from last year. 

Drupal Training – Helping New Developers Learn the Ropes

Mar 02 2020

Interview with Amitai Burstein, co-owner and CTO of Gizra

Feb 28 2020

The Finest of Drupal 8 Modules Popular for Website Development in 2020

Feb 28 2020

Feb 27 2020

by David Snopek on March 6, 2019 - 1:51pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Ubercart module to fix a CSRF vulnerability.

The Ubercart module provides a shopping cart and e-commerce features for Drupal.

The taxes module doesn't sufficiently protect the tax rate cloning feature.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch or the full release.

If you have a Drupal 6 site using the Ubercart module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Navigating Complex Integrations, Part 2: Planning Your Integration

Feb 27 2020

The Pulse of the Profession 2020 report revealed that nearly 11.4% of investment is wasted because of poor project performance. Businesses that undervalue project management as a strategic competency for driving change often see their projects fail miserably. Managing a project can turn into a slippery slope when an organisation doesn’t have a solid grasp of all the moving pieces.

Glove with different coloured stones fixed on it to represent the infinity stones of Drupal development


When it comes to managing a website development project, with so much at stake and so much in flux, adhering to best practices becomes immensely important. Drupal development is no different. From site-building to theming to backend development, doing everything right during the Drupal development process can be a formidable task. Every stage of the Drupal website development process must be taken care of in order to get the desired result. We call these stages the Infinity Stones of Drupal development.

In the Avengers film series, the Infinity Stones are a group of gems that grant an unprecedented amount of power to their owner. The Marvel Cinematic Universe timeline is packed with instances where these Infinity Stones are mentioned. The Drupal world has its own set of gems that, when put together in the right manner, can lead to better project delivery. These are the essentials that have to fall into place during development to ensure the best yield.

Requirement gathering and analysis (Reality Stone)

A red stone on left, tasksheet icon on right, reality stone written at centre


The Reality Stone helps its owner in manipulating matter. In Drupal development, the requirement gathering and analysis phase acts as the Reality Stone.

Here, you receive inputs from all the stakeholders such as customers, salespeople, industry experts and developers. All the relevant information is gathered from the customer to build a solution that meets their expectations.

For instance, by setting up a meeting, client and vendor can come to an agreement on a lot of things. A plenitude of factors that will majorly affect the different parties involved with the project can be figured out. Questions like what the budget is, what the deadline will be, who the end-user will be, and what the purpose of building this website is can be raised and discussed. This stage helps resolve any ambiguities that might become a problem later on.

Architecture and design (Mind Stone)

a yellow stone on left, pencil icon on right, mind stone written at centre


The Mind Stone was used to build an artificially intelligent peace-keeping program called Ultron; this powerful stone allowed its owner to control the minds of others. The architecture and design stage of Drupal development is the equivalent of the Mind Stone. Once you have gathered the requirements from all the stakeholders, the next stage involves understanding what exactly was in the stakeholders' minds and deriving a functional architecture out of it. All the stakeholders can then analyse the design specification and offer their feedback and suggestions.

In this stage, several important questions can be pondered, like which of Drupal's core and contributed modules can be utilised, whether you should go for Drupal 7 or Drupal 8 (which will eventually decide the complexity of the upgrade path to Drupal 9), whether you should go for a headless approach, etc. Any mistake made while gathering stakeholders’ inputs and preparing the design and architecture plan can lead to cost overruns and project failure.

Development and implementation (Power Stone)

purple stone on left, desktop icon on right and power stone written at centre


Someone who has acquired the Power Stone receives a colossal amount of energy that can even be used to destroy an entire planet. The development and implementation stage is the Power Stone of the Drupal website-building process. Here, developers put all their energy into the actual development process, i.e. turning visualisation into reality.

The implementation phase begins on the basis of the design document. The design is translated into source code, and a surfeit of code reviews takes place. The coding process follows the best practices of Drupal development, like creating a clean content architecture by including all the fields and content types in the content structures, choosing limited content types and fields to make things easier for content creators, or creating separate issues and patches to update existing code.

Testing (Space Stone)

blue stone on left, gear icon on right, and space stone written at centre


In Marvel films, there is a stone that is hidden inside a blue cube called the Tesseract. The Space Stone, as it is called, gives its owner the power over space. You can create a portal from one part of the universe to another. The testing phase is your Space Stone of Drupal development.

The testing phase begins once coding is complete. To find every possible defect, testers have to open all the portals and do their magic, keeping a note of all the errors they find during their intensive search. From preparing test cases to identifying the scope for further improvements in the user interface, user experience and speed, testers make a list of all the defects. These are then assigned to the developers to be fixed. Regression testing is performed until the end product meets the client’s expectations.

Deployment (Soul Stone)

orange stone on left, rocket icon on right, and soul stone written at centre


Capturing and controlling others’ souls would call for a Soul Stone. The deployment phase is the Soul Stone of Drupal website creation process. This is the phase where you have got the final output after countless hours of hard work and are ready to see the result. You have got complete control over the resulting website.

The Drupal website that has been built through all the requirement gathering and coding processes is deployed to the production environment (or UAT (User Acceptance Testing) is done first, based on the client’s needs).

Maintenance and improvement (Time Stone)

green stone on left, nut and bolt icon on right, time stone written at centre


Doctor Strange uses the Time Stone to trap a villain in a time loop and stop him from destroying the Earth. Whether you need to rewind or fast-forward the time, the Time Stone has got you covered.

The maintenance phase kicks in once deployment is taken care of. This is the Time Stone of Drupal project management. If any issue springs up during UAT tests or in the production environment, the developers can quickly rewind and check the processes involved to find out where the error lies. Improvement becomes easier as you move from the incipient stage to the deployment stage or vice versa, find out the exact areas that need enhancement or are under threat of production downtime, and act on them accordingly.

Project management (The Infinity Gauntlet)

The fancy golden glove, that Thanos wears with all the infinity stones fixed on it, is The Infinity Gauntlet. In Avengers, the Gauntlet, holding all the Infinity Stones, is the most sought-after thing. With all the stones united in the Gauntlet, its owner wields a massive amount of power. Project management is key to Drupal development and plays the role of Gauntlet. Like Thanos, with the snap of a finger, the project managers will be able to deliver the Drupal projects with ease if they ensure all the stages of Drupal development are followed accurately and effectively. Project managers are pivotal as they enable the application of knowledge, skills, tools and techniques to project activities so that the end result meets project requirements and ensures client satisfaction.

End thoughts

If Thanos, with his Gauntlet and the Infinity Stones on it, can become the most powerful being and go about destroying the universe, the Drupal project can benefit from its Infinity Stones too. The Infinity Stones of Drupal Development, starting from the requirement gathering and the designing to the deployment and the maintenance, if followed efficaciously, can make the process of creating a Drupal web property smoother and quicker.

With timely and efficient project management in place, everything falls into place. OpenSense Labs takes the utmost care in adhering to best practices whilst developing a Drupal site. Ping us at [email protected] to learn how we can put all the Infinity Stones in their places and build an innovative Drupal website for your organisation.

Feb 27 2020

In a recent Drupal 8 project, we dealt with a multilingual translation issue: we needed to translate the usual "View more" text on the Content: Link to Content field in Views.

After doing some research, we found a workaround like this:

1. Instead of using Content: Link to content, let's add two fields: Content: Path and Global: Custom text.

Note: the Content: Path field provides us a corresponding URL alias for each language.

2. On Global: Custom text, please enter:

<a href="{{ path }}">{% trans %} View more {% endtrans %}</a>

3. Now the text "View more" will be available in the User interface translation tool under Configuration > Regional and language.

Note: to translate that text into a given language, you must browse the User interface translation in that given language.

Feb 26 2020

Sending a Drupal Site into Retirement

Maintaining a fully functional Drupal 7 site and keeping it updated with security releases year-round takes a lot of work and time. Some sites are only active during certain times of the year, so continuously upgrading to new Drupal versions doesn't always make the most sense. If a site is updated infrequently, it's often an ideal candidate for a static site. 

To serve static pages, GitHub Pages is a good, free option, especially when already using GitHub. GitHub Pages deploys Jekyll sites, but Jekyll is perfectly happy to serve up static HTML, which doesn't require any actions other than creating functional HTML pages to get a solution working. Using this fishing tournament website as the basis for this article, here’s how to retire a Drupal site using HTTrack. 

Inactivate the Site

To get started, create a local copy of the original Drupal site and prepare it to go static using ideas from Sending A Drupal Site into Retirement.

Create GitHub Page

Next, create a project on GitHub for the static site and set it up to use GitHub Pages. Just follow the instructions to create a simple Hello World repository to be sure it’s working. It’s a matter of choosing the option to use GitHub Pages in the settings and identifying the GitHub Pages branch to use. The GitHub pages options are way down at the bottom of the settings page. There's an option to select a GitHub theme, but if there's one provided in the static pages, it will override anything chosen. So, really, any theme will do.

A committed index.html file echoes back "Hello World" and the new page becomes viewable at the GitHub Pages  URL. The URL pattern is http://REPO_OWNER.github.io/REPO_NAME; the GitHub Pages information block in the repository settings will display the actual URL for the project. 

Create Static Pages with HTTrack

Now that there's a place for the static site, it's time to generate the static site pages into the new repository. Wget could spider the site, but a preferred solution is one that uses HTTrack to create static pages. This is a tool that starts on a given page, generally the home page, then follows every link to create a static HTML representation of each page that it finds. This will only be sufficient if every page on the site is reachable by following links from the home page's navigation or from other pages. HTTrack won't know anything about unlinked pages, although there are ways to customize the instructions to identify additional URLs to spider. 

Since this solution doesn’t rely on Drupal at all, it's possible to use it for a site built with any version of Drupal, or even sites built with other CMSes. It self-discovers site pages, so there's no need to provide any manifest of pages to create. HTTrack has to touch every page and retrieve all the assets on each page, so it can be slow to run, especially when running it over the Internet. It's best to run it on a local copy of the site.

It's now time to review all the link elements in the head of the pages and make sure they are all intentional. When using the Pathauto module, the head elements added by Drupal 7, such as <link rel="shortlink" href="/node/9999" />, should be removed. They point to URLs that don't need replicating in the static site, and HTTrack will try to create all those additional pages when it encounters those links.

When using the Metatag module, it's possible to configure it to remove those tags. Instead, a bit of code like the following is used in a custom module to strip the tags out (borrowed from the Metatag module; code appropriate for a Drupal 7 site):


/**
 * Implements hook_html_head_alter().
 *
 * Hide links added by core that we don't want in the static site.
 */
function MYMODULE_html_head_alter(&$elements) {
  $core_tags = array(
    'generator',
    'shortlink',
    'shortcut icon',
  );
  foreach ($elements as $name => &$element) {
    foreach ($core_tags as $tag) {
      if (!empty($element['#attributes']['rel']) && $element['#attributes']['rel'] == $tag) {
        unset($elements[$name]);
      }
      elseif (!empty($element['#attributes']['name']) && strtolower($element['#attributes']['name']) == $tag) {
        unset($elements[$name]);
      }
    }
  }
}

The easiest way to install HTTrack on a Mac is with Homebrew:

brew install httrack

Based on the documentation and further thought, it became clear that the following command string is the ideal way to use HTTrack. After moving into the local GitHub Pages repo, the following command should be executed where LOCALSITE is the path to the local site copy that's being spidering, and DESTINATION is the path to the directory where the static pages should go:

httrack http://LOCALSITE -O DESTINATION -N "%h%p/%n/index%[page].%t" -WqQ%v --robots=0 --footer ''

The -N flag in the command will rewrite the pages of the site, including pager pages, into the pattern /results/index.html. Without the -N flag, the page at /results would have been transformed into a file called results.html. This takes advantage of the GitHub Pages server configuration, which automatically serves internal links that point to /results from the generated file /results/index.html.

The --footer '' option omits the comments that HTTrack automatically adds to each page, which look like the following. This gets rid of the first comment, but nothing appears to get rid of the second one. Getting rid of the first one, which has a date in it, avoids a Git repository in which every page appears to change every time HTTrack runs. It also obscures the URL of the original site, which might otherwise be confusing since it's a local environment.

<!-- Mirrored from everbloom-7.lndo.site/fisherman/aaron-davitt by HTTrack Website Copier/3.x [XR&CO'2014], Sun, 05 Jan 2020 10:35:55 GMT -->

<!-- Added by HTTrack --><meta http-equiv="content-type" content="text/html;charset=utf-8" /><!-- /Added by HTTrack -->

The pattern also deals with paged views results. It tells HTTrack to find a value in the query string called "page" and inserts that value, if it exists, into the URL pattern in the spot marked by [page]. Paged views create links like /about/index2.html, /about/index3.html for each page of the view. Without specifying this, the pager links would be created as meaningless hash values of the query string. This way, the pager links are user-friendly and similar (but not quite the same) as the original link URLs.

Shortly after the process starts, it will stop and ask a question about how far to go in following links; '*' is the response to that question.

The progress is viewable as it goes to see which sections of the site it is navigating into. The '%v' flag in the command tells it to use verbose output.

HTTrack runs on a local version of the site to spider and creates about 3,500 files, including pages for every event and result and every page of the paged views. HTTrack is too slow to use across the network on the live site URL, so it makes sense to do this on a local copy. The first attempt took nearly two hours because so many unnecessary files were created, such as an extra /node/9999.html file for every node in addition to the desired file at the aliased path. After a while, it was apparent they came from the shortlink in the header pointing to the system URL. Removing the shortlinks cut the spidering time by more than half. Invalid links and images in the body of some older content that HTTrack attempted to follow (creating 404 pages at each of those destinations) also contributed to the slowness. Cleaning up all of those invalid links caused the time to spider the site to drop to less than half an hour.

The files created by HTTrack are then committed to the appropriate branch of the repository, and in a few minutes, the results appear at http://karens.github.io/everbloom.

Incoming links to /results now work, although internal links still look like this in the HTML:

/results/index.html

A quick command line fix to clean that up is to run this, from the top of the directory that contains the static files:

find . -name "*.html" -type f -print0 |   xargs -0 perl -i -pe "s/\/index.html/\//g"

That will change all the internal links in those 3,500 pages from /results/index.html to /results/, resulting in a static site that pretty closely mirrors the original file structure and URL pattern of the original site.

One more change is to fix index.html at the root of the site. When HTTrack generates the site, it creates an index.html page that redirects to another page, /index/index.html. To clean things up a bit and remove the redirect, I copy /index/index.html to /index.html. The relative links in that file now need to be fixed to work in the new location, so I do a find and replace on the source of that file to remove ../ in its paths, changing URLs like ../sites/default/files/image.jpg to sites/default/files/image.jpg.
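
A quick (and admittedly blunt) way to do that last find-and-replace from the command line might look like this, run against the copied root index.html (BSD/macOS sed syntax shown; adjust for GNU sed):

sed -i '' 's|\.\./||g' index.html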

Once this is working successfully, the final step was to have the old domain name redirect to the new GitHub Pages site. GitHub provides instructions about how to do that.

Updating the Site

Making future changes requires updating the local site and then regenerating the static pages using the method above. Since Drupal is not publicly available, there's no need to update or maintain it, nor worry about security updates, as long as it works well enough to regenerate the static site when necessary. When making changes locally, regenerate the static pages using HTTrack and push up the changes. 

The next article in this series will investigate whether or not there is a faster way of creating a static site.

Navigating Complex Integrations, Part I: Understanding the Landscape

Feb 25 2020

With the announcement of Drupal 9 we want to talk about how this affects our customers, what to expect when new versions come out and to let you know what we do at Amazee Labs to ensure the transition will be painless.

“The big deal about Drupal 9 is … that it should not be a big deal.”

- Dries Buytaert, Drupal Founder
 

Background

The changes to Drupal between versions 7 and 8 were, quite frankly, enormous. Drupal previously had a justified reputation for doing its own thing and ignoring burgeoning standards and practices utilised elsewhere in the PHP community. So, when Drupal 8 was announced, one of the main goals of the release was to get off the Drupal island and start to utilise some of the millions of lines of open source code and documentation available elsewhere.

There were many great sides to this upgrade. The code was being built on a more solid and tested foundation, principally being based on the Symfony framework and leveraging numerous other systems and libraries. This helped Drupal become more enterprise focussed whilst opening the development field to engineers of other systems who were already familiar with the standards and practices now utilised in Drupal.

Unfortunately, the major technical upgrade to Drupal also introduced some headaches. Migrating between Drupal 7 and Drupal 8 can be time consuming and expensive. As a result, businesses that undertook such a migration can be forgiven for worrying about Drupal 9 being released just 5 years after Drupal 8. Some clients have expressed concern about using Drupal 8 when another expensive upgrade seems to be just around the corner.

Why Drupal 9 is different

In short, if you keep your Drupal 8 website up-to-date, there will be no major upgrade worries. The core maintainers of Drupal want to make Drupal upgrades easy forever from now on, and the Drupal team has a plan to ensure that moving to Drupal 9 will essentially be a minor update. This is possible because Drupal 9 will be built in the same manner as Drupal 8, with the same practices and core libraries. Unlike Drupal 7 to Drupal 8, there will be no major architectural or structural changes to the codebase. 

The main changes, other than bug fixes, improvements and new features will be the upgrades to Drupal’s core libraries. For example, Symfony 3 (the library upon which Drupal is built) comes to its end-of-life in 2021, so it makes sense to have Drupal 9 running on Symfony 4 at that point.

End of support flow chart

How is this easy upgrade achievable? Well, the Drupal team will continue its 6-month release cycle until Drupal 9 is released. In these releases, the code will be deprecated and upgraded to bring it closer to the components and libraries that will be used by Drupal 9, ensuring that when the time does come to upgrade everything will be in place for an easy transition.

Maintenance is key 

Keeping up with new releases and updates ensures that your website stays relevant and secure, and also means that switching from Drupal 8 to 9 will be much more routine. By partnering with us even after your website is created, we can take proactive steps such as making sure there’s no deprecated code in your site before the newest release.

A commitment to quality: Creating a robust QA process

Feb 25 2020

Having worked in software quality assurance for over 10 years, I have helped many organisations set up internal QA teams and a rigorous QA process from scratch. It’s both a rewarding and challenging task! 

As part of Microserve’s commitment to producing high-quality work for our clients, I joined the team in 2018, tasked with building a QA team and creating a suite of processes. One of our values as a business is ‘excellence as standard’. Prioritising quality in this way would ensure that we would provide our clients with the excellence we strive for. 

An arrow pointing down a road

Where to start?

For me, the starting point of anything is the hardest part. On joining Microserve, I decided the best approach would be to:

  1. Identify and log what is trying to be achieved
  2. List the steps needed to get there and then question, how do you know if you have achieved what you set out to do?

That may seem oversimplified, but that’s what QA is: understanding complex needs and translating them into something that can be broken down, understood and tested.

What is trying to be achieved?

So, first things first… what do we want to achieve? 

For the Microserve QA team our overall goal is to “proactively and systematically test code changes and ensure bad code isn’t being delivered into the end product.”

What are the steps needed?

Once we had defined what the QA team were trying to achieve, we then began mapping out how we could provide the best support to our development team, and therefore the best product to our clients. 

I’m a great believer in iterative change - I have found from experience the best way is to take little steps to make big changes. It’s impossible to achieve everything all at once and it’s important to realise that sometimes things may not always go exactly as planned.

We started forming our processes by: 

  • Understand the environments needed to provide a platform for the test phases to be executed on, so that code can be monitored and tested safely through the development cycle.
  • Decide on the test phases to be run against each environment and the scope for each.
  • Incorporate the test phases: Ensure the application lifecycle management tool has a workflow that incorporates the QA phases and activities.
  • Choose a Test Management tool where test cases and test phases can be managed and executions tracked.
  • Understand the team: Get to know everyone and understand their role, skills and how they work.
  • Get to know the software/product: How can you test something if you have no idea how it works?  Write regression test packs so that you have a log of how the system works and what it is meant to do.
  • Define a defect management process: that enables defects to be logged and tracked efficiently.

While this isn’t everything that needed to be done, this was a great place to start to get an understanding of how we can make better processes. 

A crowd of people gathering

Taking our team on the journey

As humans, we are programmed to fear change and it’s not always easy to get buy-in from everyone. One of the key aspects of setting up any new team is to take others along the journey with you and constantly provide updates as to why we are doing these things and how they can contribute and help. Here at Microserve, our team was really encouraging of these process improvements, and helped me to understand the whole landscape which helped to make our QA team a success.

The measure of success

We wanted to make sure that the process we were putting together was working. It was critical that: 

  • Defects were being discovered earlier in the process and not in the latter stages of the life cycle. This would save clients time, effort and money. 
  • Everyone, both internally and externally, understood the QA process and its value.
  • The QA process was integrated into the development workflow and did not become a bottleneck or blocker. 
  • Clients have confidence in the product being delivered to them.

The work is never done

There is more to our QA team than just testing code. Our team is constantly assessing processes and tools and ensuring gaps are filled. We pride ourselves on always looking for ways to improve these vital areas of our client delivery service.

Working in development is an ever-evolving and fast-paced environment, and there are always new ideas, techniques and tools available. It’s vital that we adapt and customise our offering and process to the different needs of clients, projects and deadlines. 

I am thrilled with the work the QA team have done over the past 18 months. The QA team and the process we have created ensure that we deliver high-quality work to our clients, and products that the whole team are proud of.

Feb 25 2020
Feb 25

The opening talk at DrupalCamp Paris 2019 was a presentation given by Thomas Jolliet (FranceTV) and yours truly about how we rebuilt FranceTV Sport as a Symfony 4 / headless Drupal 8 combo.

The most salient points of the talk are probably the "defense in depth" mechanisms we built for scalability and fault tolerance, and the business results, like -85% full page load time, -65% speed index, or +50% iOS app traffic.

Feb 25 2020
Feb 25

I am working on notes for a draft of a book about migration processes built with Drupal and its Migrate API. It is expected to be released in June 2020, and the work of collecting, experimenting and articulating the content is proving quite extensive. As there are still some months left before the launch, in order to stay sane and give some partial purpose to these tasks, I have decided to publish here some small posts derived from the working notes.

This way I will get something useful out of the complementary notes, and if COVID-19 gets me before the book comes out, at least I will have shared something first (I guess).

Well, what do I want to talk about in this post? I would like to make a list of Drupal modules related to migration processes, available as contrib modules, that can be used to add functionality to a migration. This article will be only a lightweight set of basic resources (I swear).

This article was originally published in https://davidjguru.github.io
Picture from Unsplash, user Nils Nedel, @nilsnedel

Table of Contents

1- Introduction
2- Basic Resources - Core Modules
3- Other Basic Resources - Contrib Modules
4- Extra Resources - Contrib Modules for Plugins
5- Migration Runners - Contrib Modules Drush-Related
6- Authors you should know
7- :wq!

This article is part of a series of posts about Drupal Migrations:

1- Drupal Migrations (I): Basic Resources

2- Drupal Migrations (II): Examples

1- Introduction

It’s not very easy to talk about migrations in general and, of course, it is not easy in the context of Drupal either. To perform migrations it is necessary to have a good knowledge of the technology, of the data models (at origin and at destination), experience with ETL processes and a certain know-how about implementing Drupal plugins (migrations make extensive use of plugins, the Drupal way).

In any case, since the topic is extensive and my time is now short, I thought of this article as a summary catalogue (for quick consumption) of tools and basic resources for working with migrations.

2- Basic Resources - Core Modules

3- Other Basic Resources - Contrib Modules

  • Migrate Plus: migrate_plus (https://www.drupal.org/project/migrate_plus). Migrate Plus is an essential contrib module which extends the features and capabilities of the Migrate core module with a lot of plugins and extensions.

  • Migrate Tools: migrate_tools. Another essential resource: it provides a lot of Drush commands for running and managing migrations.

  • Migrate Status: migrate_status. This little contrib module lets you get feedback about a migration process. Do you need to know whether a migration is running? This module gives you a service that you can call to check the migration.

  • Migrate Files: migrate_files. An interesting set of process plugins that you will want to use for moving files and images.

  • Migrate Commerce: commerce_migrate. A general-purpose framework that extends the main Migrate module from Drupal core for moving data into a Drupal Commerce scenario.

In Drupal migration processes, we’ll use diverse resources to carry out the ETL migration plan. One of these basic resources (as I mentioned in the introduction) is the Drupal plugin system, of which you need good knowledge and some practice. In a migration scenario, plugins help us read information from the source (E: Source plugins), transform it (T: Process plugins) and save the data at the destination (L: Destination plugins). The assembly of these three parts (usually) results in a correct migration process.

Many core modules already ship their own plugins to facilitate migration processes (the user module, for example). So let’s review some migration plugins packaged in contributed modules, starting with a quick sketch of how the three parts fit together.
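To make that assembly more tangible, here is a minimal, hypothetical migration definition. Every name in it (the migration id, file path, column names, field and vocabulary names) is invented for illustration, and the exact source keys can vary between migrate_source_csv versions, so treat it as a sketch rather than copy-paste material. It extracts rows from a CSV file, transforms one column and loads the result as taxonomy terms:

# migrate_plus.migration.example_tags.yml (hypothetical example)
id: example_tags
label: 'Import tags from a CSV file'
migration_group: default
source:
  # E: Extract - source plugin provided by the Migrate Source CSV module.
  plugin: csv
  path: 'public://import/tags.csv'
  # Column(s) that uniquely identify each row; check the migrate_source_csv
  # documentation, because the key names differ between module versions.
  ids: [id]
process:
  # T: Transform - core process plugins manipulate each source value.
  name:
    plugin: callback
    callable: trim
    source: tag_name
destination:
  # L: Load - a core destination plugin saves each row as a taxonomy term.
  plugin: 'entity:taxonomy_term'
  default_bundle: tags

With Migrate Tools installed, a migration like this can then be inspected, run and rolled back from Drush:

drush migrate:status example_tags
drush migrate:import example_tags
drush migrate:rollback example_tags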

Source Plugins: Migrate Source Plugins

  • Migrate Source CSV: migrate_source_csv. Contrib module for migrating data into Drupal 8 from a classic, simple CSV file.

  • Migrate Source SQL: custom_sql_migrate_source_plugin. A peculiarity of the plugins used for databases: this module lets you include SQL queries directly in the .yml file describing a migration, and those queries are executed against the source database.

  • Migrate Source YAML: migrate_source_yaml. It’s just a simple tool for migrating content from YAML files.

Process Plugins: Migrate Process Plugins

Destination Plugins: Migrate Destination Plugins & Examples

What kind of Drupal entities will be created in the migration process? Content entities? Configuration entities? Take a look.

  • Migrate Destination CSV: d8migrate. It’s a light custom module created by @jonathanfranks.

  • Migrate Destination Config: Class Config.php. Offers a plugin for config migration.

  • Migrate Destination Block: Class EntityBlock.php. Just as an example of the resources each element can offer in a migration scenario: for moving block entities (which are config entities), see the PHP classes for migration (Source, Process and Destination) included in the module itself.

Drupal 8 Migrate Entity Block

6- Authors you should know

  • Mauricio Dinarte: Mauricio is a developer, consultant, trainer and owner of his own business, https://agaric.coop. He wrote what is probably the mandatory reading guide for anyone who wants to learn how to migrate in Drupal: 31 Days of Drupal Migrations, a set of 31 articles published at https://understanddrupal.com/migrations covering the most important aspects…examples, exercises, descriptions…to understand the whole internal world of migrations inside Drupal.

Essential training material. In addition, his company’s website also hosts many very good articles about migration topics under the “migrate” tag: https://agaric.coop/tags/migrate.

Some examples from Mauricio Dinarte:

  1. Introduction to paragraphs migrations in Drupal
  2. Using migration groups to share configuration among Drupal migrations
  3. What is the difference between migration tags and migration groups in Drupal?

His profile in Drupal.org: https://www.drupal.org/u/dinarcon.

  • Tess Flynn: I heard about Tess Flynn reading articles by Mauricio Dinarte. That’s how I met this expert developer, speaker and communicator of the Drupal community. On her website https://deninet.com I found content of a different nature, but above all, a series of very interesting articles about migrations under the tag “drupal-migration”: https://deninet.com/tag/drupal-migration.

    Along the way I also discovered that she has several contrib modules related to migrations and process plugins.

Some examples from Tess Flynn:

  1. Migrate Process URL: provides a process plugin to migrate link fields.
  2. Migrate Process Vardump: helps with debugging migrations.
  3. Many more process plugins.

Her profile in Drupal.org: https://www.drupal.org/u/socketwench.

Some examples from Danny Sipos:

  1. Your first Drupal 8 Migration
  2. Dynamic migrations using “templates” in Drupal 8
  3. Quickly generate the headers for the CSV migrate source plugin using Drush

His profile in Drupal.org: https://www.drupal.org/u/upchuk.

7- :wq!


Drupal Recording Initiative: #DrupalCampNJ and #FLDC20

Feb 24 2020
Feb 24

The Drupal Association is seeking partners to help us advance the next phase of the Automatic Updates initiative.

The first phase of this work was generously sponsored by the European Commission, and supported by other partners including: Acquia, Tag1Consulting, Mtech, and Pantheon.

In this first phase, we accomplished a great deal:

  • Display of security PSAs directly in Drupal's admin interface
  • Automated readiness checks, to ensure that a site is prepared for updates
  • Automatic updates for Drupal Core in both Drupal 7 and Drupal 8.

But while this work laid the foundation, a great deal of work yet remains. The next phase hopes to add support for:

  • Sites managed using Composer
  • Automatic updates with Contributed modules
  • A front-end controller providing support for easy roll-back

The Drupal Association needs partners in order to move this work forward. We're looking both for organizations who can provide financial support, and teams who have expert developers who can contribute to development.

If you are interested, you can find a detailed scope of the remaining work attached to this post.

Download the Request for Sponsors

Contact: [email protected] with questions.

Advance Your Career With DebugAcademy at DrupalCon 2020

Feb 24 2020
Feb 24

Joris Snoek

Digital Consultant

+31 (0)20 - 261 14 99

It is good to keep abreast of available open source 'contrib' Drupal modules. 'There's a module for that' applies to many use cases within Drupal; it's a sin to build something that already exists. We keep track of the latest module releases every month; this is what we noticed about module updates over the last month:

1. Rabbit Hole

An ingenious module - that may be a superlative, but I was surprised I had never seen it before. This Drupal module solves an issue that has needed attention in Drupal implementations for years: you want to use certain content, files or taxonomy terms as building blocks for other content pages - you do not want them to be individually accessible pages.

For example: you have a page where a slideshow with 10 images is shown. Those 10 images are 10 manageable Drupal nodes in the backend. If you do not do anything about it, your slideshow probably works fine, but those 10 nodes are also individually accessible to anonymous visitors - and indexable by Google (╯°□°)╯︵ ┻━┻

That is what you want to avoid: your SEO is broken and people can end up on pages that are not part of your website at all - and probably not styled - yikes!

We always solved this with custom code: accessing the relevant entities returns a 404 page not found. But this module makes it generically configurable - very nice!

Bonus
Oh yeah, if you use the XML sitemap module: do not forget to exclude items.

https://www.drupal.org/project/rabbit_hole

2. Quick Link

Staying on track with this blog: this module provides a Drupal implementation of the Quicklink library from Google Chrome Labs. Quicklink is a lightweight JavaScript library (less than 1 kb compressed) that enables faster consecutive page loads by prefetching in-viewport links.

How Quicklink works
Quicklink attempts to speed up navigation to subsequent pages. It:

  • Detects links within the viewport (using Intersection Observer)
  • Waits until the browser is idle (using requestIdleCallback)
  • Checks whether the user is on a slow connection
  • Prefetches URLs for those links (using <link rel=prefetch>, falling back to XHR)

Under construction
The module has only recently arrived and is still under heavy construction, but it is absolutely one to watch.

https://www.drupal.org/project/quicklink

3. Image Effects

A popular evergreen module, known in the Drupal 7 era as ImageCache Actions: it contains a bundle of effects (actions) that you can apply to an image.

https://www.drupal.org/project/image_effects

4. Weight

If you want to set the order of content in a list (eg Drupal nodes), you will need a field to facilitate this. Drupal does not offer this by default, but this module helps you: after installation you can give content items (for example Drupal nodes) a weight, making them appear higher or lower in a (non-chronological) list.

Interesting in this context is Comparison of Node Ordering Modules.

https://www.drupal.org/project/weight

5. Automatic User Names

This Drupal module can automatically generate a username from other User fields (eg first name and last name). Because this username is automatically generated, it is no longer necessary to have it filled in manually. That is why this module also deactivates the username field. The Real Name module can be a good addition to this.

https://www.drupal.org/project/auto_username

6. Role Expire

This is a simple module that allows administrators to manage expiration dates on user roles. A common application is implementing subscriptions, for example to a magazine, where someone has access to protected content for a certain period of time.

https://www.drupal.org/project/role_expire

7. Twig Field Value (Drupal theming)

A popular Drupal module with which Drupal themers can get partial data from render arrays, giving more control over exactly which data ends up on the screen.

https://www.drupal.org/project/twig_field_value
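As a quick, hedged illustration (field_image is an invented field name; check the module's documentation for the full list of filters), a themer could print just the label and just the rendered value of a field in a node template:

{# node.html.twig #}
{# Print only the field label, without the usual field wrappers. #}
<strong>{{ content.field_image|field_label }}</strong>

{# Print only the rendered field value(s), again without the wrappers. #}
{{ content.field_image|field_value }}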

8. Entity Browser

Developer module that provides a browser / selector for Drupal entities. It can be used in any context in which a content manager has to select one or more entities and do something with them (content, image, video, audio, etc.).

Possible applications:

  • to produce an entity reference widget;
  • use in a wysiwyg editor.

An example is the File Entity Browser module, a kind of media browser, that uses this Entity Browser module.

https://www.drupal.org/project/entity_browser

9. Paragraphs Previewer

An extension for the popular Drupal Paragraphs module. By default, there is no way in the backend to preview a piece of content entered in Paragraphs: you first have to save the entire article (the node), then refresh the frontend to see what it looks like. This module solves this by adding a preview option for Paragraphs in the backend.

You will have to make sure that the html/css styling of the Paragraph in question is also fully included there and is not dependent on a global context (eg page, section, div, etc classes).

https://www.drupal.org/project/paragraphs_previewer

10. Views Parity Row

In Drupal you can work with View Modes, because content from the same content type can look different in different places on a Drupal website. Some standard View Modes are:

  • Full Content
  • RSS
  • Search index
  • Teaser

You can add unlimited View Modes yourself. A list of all active View Modes can be found under DRUPAL_SITE_URL/admin/structure/display-modes/view.

Drupal core also contains the Views module, for making lists in the broadest sense of the word. In such a Drupal View you can specify which View Mode of the relevant content you want to show (teaser, full content, RSS, etc). But you can only choose one per View, so you cannot alternate them. And you might want that in some cases; that is the issue this module solves: different View Modes in one View, phew :)

https://www.drupal.org/project/views_parity_row

11. Leaflet

Leaflet is an open source JavaScript library for mobile-friendly interactive maps. This module integrates Leaflet into Drupal. An alternative to Google Maps or MapBox for example.

https://www.drupal.org/project/leaflet

12. Background Images Formatter

This module offers an image formatter that allows you to set an image as the background of an element via CSS. The images come from a Drupal entity field and not from a configuration page or a custom Drupal entity or anything else, so it's very easy to set up and manage - as the project page describes.

The module also contains a sub-module to process responsive images.

https://www.drupal.org/project/bg_image_formatter

13. OtherView Filter

Within Drupal core you can easily create lists, for example of content or users, using Views. Sometimes, among the huge number of standard options, you just fail to build in that one exception: for example, one or more specific content items that you do not want to appear in the list.

If you install this module, you can exclude the results of one View from another. This may sound like a Rube Goldberg machine, and perhaps a custom query is better, but that depends on the use case, the budget, the system scale, future wishes and your development knowledge.

https://www.drupal.org/project/other_view_filter

14. Taxonomy Formatter

Drupal Taxonomy is a powerful, flexible system for categorizing content. The standard formatters merely wrap the terms in a lot of divs in the frontend. This module adds a new formatter that gives you more influence over the layout options.

https://www.drupal.org/project/taxonomy_formatter

15. Copy Prevention

This module applies a number of techniques making it more difficult to copy content from your Drupal website:

  • Switch off selecting text.
  • Disable copying to clipboard.
  • Disable right mouse button for all site content.
  • Disable right mouse button for images only.
  • Protect/hide images for search engines so that your images are not shown in search results.

If you really do not want information copied, then you should not put it on the internet; technically savvy people can always copy content from a public Drupal website, but this module makes it more difficult for non-technical people.

https://www.drupal.org/project/copyprevention

16. Consumers

A developer API module used in Contenta CMS, a headless Drupal distribution. This module itself does not contain functionality for end users, but provides an API for other modules to build on.

In this case, consumers can be registered (similar to https://developers.facebook.com), so that decoupled Drupal installations can offer variations based on who makes the request. All these options are managed under a common umbrella so that other modules can use them.

https://www.drupal.org/project/consumers

17. Nagios

This module integrates Drupal with the Nagios monitoring system and gives Nagios instant central insight into:

  • Is the Drupal core up-to-date?
  • Are the Drupal contrib modules up-to-date?
  • Are the Drupal site settings correct?
  • Many other security aspects of Drupal

https://www.drupal.org/project/nagios

18. Autoban

Drupal security module, which analyzes visitor behavior: when suspicious actions are detected, the relevant IP address is added to a blacklist.

Various settings are available, so you can adjust how strictly the module behaves.

https://www.drupal.org/project/autoban

Wrap up

Ok, that's it for now. Next month I expect new module updates, so stay tuned!

How to make "Back To Top" functionality with own theme in five minutes

Feb 24 2020
Feb 24
Feb 23 2020
Feb 23

by David Snopek on February 27, 2019 - 12:54pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Context module to fix an Open Redirect vulnerability.

The Context module enables site builders to set up conditions and reactions for different parts of the site.

The module doesn't sufficiently sanitize user output when displayed leading to a Cross Site Scripting (XSS) vulnerability.

This vulnerability is mitigated by the fact that an attacker must have the ability to store malicious markup in the site (e.g. permission to create a node with a field that accepts "filtered html").

See the security advisory for Drupal 7 for more information.

Here you can download:

If you have a Drupal 6 site using the Context module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Feb 22 2020
Feb 22

Earlier we wrote about stress testing, featuring Blazemeter, where you could learn how to crash your site without worrying about the infrastructure. So why did I even bother to write this post about the do-it-yourself approach? We have a complex frontend app where it would be nearly impossible to simulate all the network activity faithfully over a long period of time. We wanted to use a browser-based testing framework, namely WebdriverI/O with some custom Node.js packages, on Blazemeter, but it proved quicker to manage the infrastructure ourselves and have full control of the environment. What happened in the end? Using a public cloud provider (in our case, Linode), we programmatically launched the needed number of machines temporarily, provisioned them with the proper stack, and executed the WebdriverI/O test. With Ansible, Linode CLI and WebdriverIO, the whole process is repeatable and scalable - let’s see how!

Infrastructure phase

Any decent cloud provider has an interface to provision and manage cloud machines from code. Given this, if you need an arbitrary number of computers to launch the test, you can have them for 1-2 hours (100 endpoints for the price of a coffee - how does that sound?).

There are many options to dynamically and programmatically create virtual machines for the sake of stress testing. Ansible offers a dynamic inventory, however the cloud provider of our choice wasn’t included in the latest stable version of Ansible (2.7) at the time of this post. Also, the solution below makes the infrastructure phase independent: any kind of provisioning (pure shell scripts, for instance) is possible with minimal adaptation.

Let’s follow the steps in the guide on the installation of Linode CLI. The key is to have the configuration file at ~/.linode-cli with the credentials and the machine defaults. Afterwards you can create a machine with a one-liner:

linode-cli linodes create --image "linode/ubuntu18.04" --region eu-central --authorized_keys "$(cat ~/.ssh/id_rsa.pub)"  --root_pass "$(date +%s | sha256sum | base64 | head -c 32 ; echo)" --group "stress-test"

Given the specified public key, password-less login will be possible. However, this alone is far from enough before provisioning. Booting takes time, the SSH server is not available immediately, and our particular situation is that after the stress test we want to drop the instances immediately, together with the test execution, to minimize costs.

Waiting for the machine to boot is a slightly longer snippet; the CSV output is robustly parsable:

## Wait for boot, to be able to SSH in.
while linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'status' --no-headers | grep -v running
do
  sleep 2
done

However the SSH connection is likely not yet possible, let’s wait for the port to be open:

for IP in $(linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'ipv4' --no-headers);
do
  while ! nc -z $IP 22 < /dev/null > /dev/null 2>&1; do
    sleep 1
  done
done

You may realize that this overlaps with waiting for the machine to boot. The only benefit is that separating the two allows more sophisticated error handling and reporting.

Afterwards, deleting all machines in our group is trivial:

for ID in $(linode-cli linodes list --group=stress-test --text --delimiter ";" --format 'id' --no-headers);
do
  linode-cli linodes delete "$ID"
done

So after packing everything into one script, and putting an Ansible invocation in the middle, we end up with stress-test.sh:

#!/bin/bash

LINODE_GROUP="stress-test"
NUMBER_OF_VISITORS="$1"

NUM_RE='^[0-9]+$'
if ! [[ $NUMBER_OF_VISITORS =~ $NUM_RE ]] ; then
  echo "error: Not a number: $NUMBER_OF_VISITORS" >&2; exit 1
fi

if (( $NUMBER_OF_VISITORS > 100 )); then
  echo "warning: Are you sure that you want to create $NUMBER_OF_VISITORS linodes?" >&2; exit 1
fi

echo "Reset the inventory file."
cat /dev/null > hosts

echo "Create the needed linodes, populate the inventory file."
for i in $(seq $NUMBER_OF_VISITORS);
do
  linode-cli linodes create --image "linode/ubuntu18.04" --region eu-central --authorized_keys "$(cat ~/.ssh/id_rsa.pub)" --root_pass "$(date +%s | sha256sum | base64 | head -c 32 ; echo)" --group "$LINODE_GROUP" --text --delimiter ";"
done

## Wait for boot.
while linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'status' --no-headers | grep -v running
do
  sleep 2
done

## Wait for the SSH port.
for IP in $(linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'ipv4' --no-headers);
do
  while ! nc -z $IP 22 < /dev/null > /dev/null 2>&1; do
    sleep 1
  done
  ### Collect the IP for the Ansible hosts file.
  echo "$IP" >> hosts
done
echo "The SSH servers became available"

echo "Execute the playbook"
ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -T 300 -i hosts main.yml

echo "Cleanup the created linodes."
for ID in $(linode-cli linodes list --group="$LINODE_GROUP" --text --delimiter ";" --format 'id' --no-headers);
do
  linode-cli linodes delete "$ID"
done

Provisioning phase

As written earlier, Ansible is just one option, albeit a popular one, to provision machines. For such a test, even a bunch of shell commands would be sufficient to set up the stack. However, once someone has tasted working with infrastructure in a declarative way, it becomes the first choice.

If this is your first experience with Ansible, check out the official documentation. In a nutshell, we just declare in YAML how the machine(s) should look, and what packages it should have.

In my opinion, a simple playbook like the one below is readable and understandable as-is, without any prior knowledge. So our main.yml is the following:

- name: WDIO-based stress test
  hosts: all
  remote_user: root

  tasks:
    - name: Update and upgrade apt packages
      become: true
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400

    - name: WDIO and Chrome dependencies
      package:
        name: "{{ item }}"
        state: present
      with_items:
         - unzip
         - nodejs
         - npm
         - libxss1
         - libappindicator1
         - libindicator7
         - openjdk-8-jre

    - name: Download Chrome
      get_url:
        url: "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"
        dest: "/tmp/chrome.deb"

    - name: Install Chrome
      shell: "apt install -y /tmp/chrome.deb"

    - name: Get Chromedriver
      get_url:
        url: "https://chromedriver.storage.googleapis.com/73.0.3683.20/chromedriver_linux64.zip"
        dest: "/tmp/chromedriver.zip"

    - name: Extract Chromedriver
      unarchive:
        remote_src: yes
        src: "/tmp/chromedriver.zip"
        dest: "/tmp"

    - name: Start Chromedriver
      shell: "nohup /tmp/chromedriver &"

    - name: Sync the source code of the WDIO test
      copy:
        src: "wdio"
        dest: "/root/"

    - name: Install WDIO
      shell: "cd /root/wdio && npm install"

    - name: Start date
      debug:
        var: ansible_date_time.iso8601

    - name: Execute
      shell: 'cd /root/wdio && ./node_modules/.bin/wdio wdio.conf.js --spec specs/stream.js'

    - name: End date
      debug:
        var: ansible_date_time.iso8601

We install the dependencies for Chrome, Chrome itself, WDIO, and then we can execute the test. For this simple case, that’s enough. As I referred to earlier:

ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -T 300 -i hosts main.yml

What’s the benefit over shell scripting? For this particular use case, mostly that Ansible makes sure that everything happens in parallel and that we have sufficient error handling and reporting.

Test phase

We love tests. Our starter kit has WebdriverIO tests (among many other types of tests), so we picked it to stress test the full stack. If you are familiar with JavaScript or Node.js, the test code will be easy to grasp:

const assert = require('assert');

describe('podcasts', () => {
    it('should be streamable', () => {
        browser.url('/');
        $('.contact .btn').click();

        browser.url('/team');
        const menu = $('.header.menu .fa-bars');
        menu.waitForDisplayed();
        menu.click();
        $('a=Jobs').click();
        menu.waitForDisplayed();
        menu.click();
        $('a=Podcast').click();
        $('#mep_0 .mejs__controls').waitForDisplayed();
        $('#mep_0 .mejs__play button').click();
        $('span=00:05').waitForDisplayed();
    });
});


This is our spec file, which is the essence, alongside the configuration.
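For completeness, here is a rough sketch of what the accompanying wdio.conf.js could look like; the real file lives in the repository linked at the end of this post, the baseUrl is a placeholder, and the port matches the Chromedriver started by the playbook above:

// wdio.conf.js - illustrative sketch only.
exports.config = {
  runner: 'local',
  // Talk to the Chromedriver instance the Ansible playbook started on the VM.
  hostname: 'localhost',
  port: 9515,
  path: '/',
  specs: ['./specs/stream.js'],
  maxInstances: 1,
  capabilities: [{
    browserName: 'chrome',
    // Headless mode makes sense on the throwaway cloud machines.
    'goog:chromeOptions': { args: ['--headless', '--no-sandbox', '--disable-gpu'] },
  }],
  baseUrl: 'https://example.com',
  framework: 'mocha',
  waitforTimeout: 10000,
  mochaOpts: { timeout: 60000 },
};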

Could we do it with a bunch of requests in jMeter or Gatling? Almost. The icing on the cake is where we stress test the streaming of the podcast. We simulate a user who listens to the podcast for 10 seconds. For any frontend-heavy app, realistic stress testing requires a real browser, and WDIO provides us exactly this.

The WebdriverIO test execution - headless mode deactivated

Test execution phase

After making the shell script executable (chmod 750 stress-test.sh), we are able to execute the test either:

  • with one visitor from one virtual machine: ./stress-test.sh 1
  • with 100 visitors from 100 virtual machines for each: ./stress-test.sh 100

with the same simplicity. However, for very large scale tests, you should think about some bottlenecks, such as the capacity of the datacenter on the testing side. It might make sense to randomly pick a datacenter for each testing machine.

The test execution consists of two main parts: bootstrapping the environment and executing the test itself. If bootstrapping the environment takes too high a percentage of the time, one strategy is to prepare a Docker image and, instead of creating the environment again and again, just use the image. In that case, it’s a great idea to check out a container-specific hosting solution instead of standalone virtual machines.

Would you like to try it out now? Just do a git clone https://github.com/Gizra/diy-stress-test.git!

Result analysis

For such a distributed DIY test, analyzing the results could be challenging. For instance, how would you measure requests/second for a specific browser-based test, like WebdriverI/O?

For our case, the analysis happens on the other side. Almost all hosting solutions we encounter support New Relic, which could help a lot in such an analysis. Our test was DIY, but the result handling was outsourced. The icing on the cake is that it helps to track down the bottlenecks too, so a similar solution for your hosting platform can be applied as well.

However what if you’d like to somehow gather results together after such a distributed test execution?

Without going into detail, you may study the fetch module of Ansible, so you can gather a result log from all the test servers and have it locally in a central place.
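As a minimal sketch (the log path and destination directory below are invented; adjust them to wherever your test actually writes its output), one extra task at the end of the playbook could pull each machine's log back to the control host:

    - name: Collect the WDIO result log from every test machine
      fetch:
        src: /root/wdio/results.log
        dest: "results/{{ inventory_hostname }}/"
        flat: no
        fail_on_missing: no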

Conclusion

It was a great experience that, after we faced some difficulty with a hosted stress test platform, in the end we were able to recreate a solution from scratch without much more development time. If your application also needs special, unusual tools for stress-testing, you might consider this approach. All the chosen components, such as Linode, WebdriverIO or Ansible, are easily replaceable with your favorite solution. Geographically distributed stress testing, fully realistic website visitors with heavy frontend logic, low-cost stress testing – it seems now you’re covered!

Feb 22 2020
Feb 22

Here is a quick tip if you want to create a step definition that has an argument with multiple lines - a multiline string argument, if you like.

I wanted to test that an email was sent, with a specific subject, to a specific person, and containing a specific body text.

My idea was to create a step definition that looked something like this:

Then an email has been sent to "[email protected]" with the subject "Subject example" and the body “one of the lines in the body

plus this is the other line of the body, after an additional line break”

So basically my full file is now this:

@api @test-feature
Feature: Test this feature
  Scenario: I can use this definition
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body “one of the lines in the body

    plus this is the other line of the body, after an additional line break”


My step definition looks like this:

  /**
   * @Then an email has been sent to :email with the subject :subject and the body :body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody($email, $subject, $body) 
  {
      throw new PendingException();
  }

Let’s try to run that.

$ ./vendor/bin/behat --tags=test-feature

In Parser.php line 393:
                                                                                                                                                    
  Expected Step, but got text: "    plus this is the other line of the body, after an additional line break”" in file: tests/features/test.feature  

Doing it that way simply does not work. You see, by default a line break in the Gherkin DSL has an actual meaning, so you cannot put a line break in your argument and expect it to just pass along everything up until the closing quote. What we actually want is to use a PyString. But how do we use them, and how do we define a step to receive them? Let’s start by converting our scenario step to use the PyString multiline syntax:

@api @test-feature
Feature: Test this feature
  Scenario: I can use this definition
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body
    """
    one of the lines in the body

    plus this is the other line of the body, after an additional line break
    """

Now let’s try to run it:

$ ./vendor/bin/behat --tags=test-feature                                                                                        
@api @test-feature
Feature: Test this feature

  Scenario: I can use this definition                                                                 # tests/features/test.feature:3
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body
      """
      one of the lines in the body
      
      plus this is the other line of the body, after an additional line break
      """

1 scenario (1 undefined)
1 step (1 undefined)
0m0.45s (32.44Mb)

 >> default suite has undefined steps. Please choose the context to generate snippets:

A bit closer. Our output actually tells us that we have a missing step definition, and suggests how to define it. That’s better. Let’s try the suggestion from the output, now defining our step like this:

  /**
   * @Then an email has been sent to :email with the subject :subject and the body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody2($email, $subject, PyStringNode $string)
  {
      throw new PendingException();
  }

The difference here is that we do not add a variable name for the body in the annotation, and we specify that the last parameter must be of the PyStringNode type (remember to import it with "use Behat\Gherkin\Node\PyStringNode;" at the top of your context class). This way behat will know (tm).

After running the behat command again, we can finally use the step definition. Let's have a look at how we can use the PyStringNode class.

  /**
   * @Then an email has been sent to :email with the subject :subject and the body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody2($email, $subject, PyStringNode $string)
  {
      // This is just an example.
      $mails = $this->getEmailsSomehow();
      // This is now the important part, you get the raw string from the PyStringNode class.
      $body_string = $string->getRaw();
      foreach ($mails as $item) {
          // Still just an example, but you probably get the point?
          if ($item['to'] == $email && $item['subject'] == $subject && strpos($item['body'], $body_string) !== FALSE) {
              return;
          }
      }
      throw new \Exception('The mail was not found');
  }

And that about wraps it up. Writing tests is fun, right? As a bonus, here is an animated gif called "Testing".

Feb 20 2020
Feb 20

Drupal Camp London is a 3-day celebration of the users, designers, developers and advocates of Drupal and its community! Attracting 500 people from across Europe, it is one of the biggest events in the Drupal calendar after DrupalCon. As such, we're pleased to sponsor the event for the 6th time!

Drupalcamp weekend (13th-15th March) packs in a wide range of sessions featuring seminars, Birds of a feather talks, Sprints and much more. Over the weekend there are 3 Keynotes addressing the biggest upcoming changes to the technical platform, its place in the market, and the wider Drupal community.

Check out all of the accepted sessions on the Drupal Camp London website here. Or keep reading to see our highlights…

CXO Day - Friday 13th of March

From Front Room to Front Runner: how to build an agency that thrives, not just survives - Talk from Nick Rhind

Few digital agency start-ups reach their first birthday, let alone celebrate over 16 years of success. Our CEO Nick Rhind will be sharing anecdotes and advice from 2 decades of building the right teams to help his agency group, CTI Holdings, thrive.

Catch up with Nick, or any of our team attending Drupal Camp by connecting with them on LinkedIn, or via our contact form.

Come dine with us - Agency Leaders Dinner London

Hosts Paul Johnson (CTI Digital), Piyush Poddar (Axelerant), and Michel Van Velde (One Shoe) cordially invite agency leaders to join them for a night of meaningful discussions, knowledge sharing, and of course great food, excellent wine, and the best company you could ask for. Details of the dinner can be found here.

DCL Agency Leaders Dinner 2020

Agency Leaders Dinner London

Drupal Camp Weekend

Drupal in UK Higher Education - A Panel Conversation

Paul Johnson, Drupal Director at CTI Digital, will be hosting influential bodies from the Higher Education community as they discuss the challenges facing universities in a time of light-speed innovation and changing demand from students. In addition, they will explore the role Drupal has played in their own success stories and the way open source can solve problems for other universities. Drupal camp panel details available here.

The Panellists:

Adrian Ellison, Associate Pro Vice-Chancellor & Chief Information Officer University of West London - Adrian has been involved in Registry, IT and Library Services in higher education for over 20 years. He joined UWL in 2012 from the London School of Economics, where he was Assistant Director of IT Services. Prior to that, he was IT Director at Royal Holloway, University of London, and held several roles at the University of Leeds.

Adrian is a member of the UCISA Executive Committee, representing the voice of IT in UK education. He has spoken about information technology at national and international conferences and events and co-wrote the Leadership Foundation for Higher Education’s 'Getting to Grips with Information and Communications Technology' and UCISA’s ‘Social Media Toolkit: a practical guide to achieving benefits and managing risks’.

Billy Wardrop, CMS Service Support Officer at Edinburgh University - Billy is a Senior Developer with 15+ years experience and the current technical lead for the migration to Drupal 8 at The University of Edinburgh. He has worked with many platforms but his passion lies in developing websites and web applications using open source such as Drupal, PHP, JavaScript and Python. Billy is an advocate in growing the open-source community. As part of his role in the university, he regularly mentors at events and encourages software contribution. 

Iain Harper Head Of Digital, Saïd Business School, University of Oxford - Iain started his career at leading medical insurer MPS, developing their first online presence. He then ran digital projects at a leading CSR consultancy business in the Community before joining the Civil Service. Iain worked with the Government Digital Service on Alphagov, the precursor to GOV.UK. He then joined Erskine Design, a small digital agency based in Nottingham where he supervised work with the BBC on their Global Experience Language (GEL). He now leads the digital team at Oxford University’s Saïd Business School.

Open source has won. How do we avoid dying from success? - A Panel Conversation

Drupal, founded on a philosophy of open source, has steadily grown into a global community, a feat some may label as having achieved ‘Success’. Drupal users and contributors will be discussing the sustainability of Drupal and the future of open source in an open panel session.

What are the challenges faced by different roles? How can we make the whole ecosystem fair and future proof? What does an open source business model look like? 

Join our very own Paul Johnson and Drupal panellists for this thought provoking discussion on the future of open source. More details on the session are available here.

Why should you attend Drupal Camp?

Share useful anecdotes and up-to-date knowledge 

Discover the latest in UX, design, development, business and more. There’s no limit to the types of topics that could come up...as long as they relate to Drupal that is!

Meet peers from across the industry

From C-level executives and site managers to developers and designers, over 500 people attended last year. Meet the best and brightest in the industry at talks and breakouts.

Find your next project or employer

A wide variety of businesses and developers attend Drupal Camp; make the most of it by creating connections to further your own career or grow your agency team.

Feb 20 2020
Feb 20

You may already be familiar with Amazon S3, the most popular solution for cost effective storage services nowadays. You will need it when you are looking for:

  • Low cost storage: this happened to be my case when I implemented a Drupal-based web app for a local government authority. The app is used by branches in all provinces of the country, and they regularly upload a large amount of data (documents, photos, scans etc ...). Using the app server's storage is too expensive, so I converted the Drupal file system to Amazon S3, leaving only the core and modules on the app server.
  • Fast loading: many bloggers have used S3 to store their photos, videos, audio and files, to better serve their readers. As readers come from all over the world, saving the multimedia content to S3 lets them access it much faster.
  • And many more benefits

In this tutorial, we will show you how to convert the Drupal 7 & 8 file system to Amazon S3 and sync all existing files to S3 Storage.

1. Preparation

You will need to run several client programs like Drush and the AWS CLI, so if your site is on shared hosting you will not be able to install and execute them there. Please set everything up on your local host and configure it there; after that you can upload to your shared hosting.

The techniques that I use in this tutorial are:

  • Amazon S3, of course.
  • Drush: a command line shell and Unix scripting interface for Drupal, very convenient and useful.
  • AWSCLI: the Amazon Web Services client tool; I use it to sync data to S3.
  • S3FS: the Drupal module for connecting to Amazon S3.

2. Install Drush:

Please follow this guide to install Drush: http://docs.drush.org/en/master/install/

3. Install S3FS:

For Drupal 7, in the shell, go to the web project folder and execute the following commands:

drush dl s3fs
drush en s3fs
drush make --no-core sites/all/modules/s3fs/s3fs.make

The first command downloads the S3FS module. The second one enables it. And the third one automatically downloads the required AWS SDK library to /sites/all/libraries/awssdk2. This command is very useful because there are many versions of the AWS SDK and only the chosen version (in my case, 2.7.5) will work with this module.

If you don't have Drush, you can install the S3FS module manually, then go to GitHub to download the AWS SDK for PHP library and place it under /sites/all/libraries/awssdk2. Remember to choose the right version, which is written in the /sites/all/modules/s3fs/s3fs.make file.

On Drupal 8, please use Composer to install this module; Composer will automatically install the required libraries. So on the terminal, under your root project folder, run:

composer require drupal/s3fs

4. Get a key pair to access Amazon S3:

Please go to Amazon IAM to create a user with an access key. Please follow this guide for more details: http://docs.aws.amazon.com/gettingstarted/latest/wah/getting-started-pre...

Then create a bucket on Amazon S3.

5. Configure S3FS access to Amazon S3

Browse Configuration - Media - S3 File System and enter your key pair and bucket name.

Hit Save. If there is no error, the S3 connection is working. Otherwise, please check your keys and bucket name according to the error messages.

If you want to make sure the credentials and bucket name cannot be removed by mistake, write them to the /sites/default/settings.php file:

$conf['awssdk2_access_key'] = 'YOUR ACCESS KEY';
$conf['awssdk2_secret_key'] = 'YOUR SECRET KEY';
$conf['s3fs_bucket'] = 'YOUR BUCKET NAME';
$conf['s3fs_region'] = 'YOUR REGION';

Update for Drupal 8: adding keys via the admin interface will soon be deprecated. Instead, add the configuration to the settings.php file like this:

# S3FS setting
$settings['s3fs.access_key'] = 'yourkey';
$settings['s3fs.secret_key'] = 'yourkey';
$config['s3fs.settings']['bucket'] = 'yourbucketname';
$settings['s3fs.use_s3_for_public'] = TRUE;

# if you want to use S3 for private files, uncomment the line below:
#$settings['s3fs.use_s3_for_private'] = TRUE;

6. Configure S3FS to take over file system:

Now you may want this S3FS module to take over the public file system. Just open the Advanced Configuration Options section and select Use S3 for public:// files. Tick the box and save, and the module will take care of the rest.

Return to Amazon S3 and browse your S3 bucket, you should see a new folder named s3fs-public there (or s3fs-private if you choose Private files).
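If you want to double-check the takeover from code on a Drupal 8 site, here is a small, hedged sketch (the file name is just an example, and the saveData() method requires Drupal 8.7 or later): anything written to the public:// scheme should now end up under the s3fs-public folder in your bucket.

// With "Use S3 for public:// files" enabled, S3FS maps the public:// scheme
// to your bucket, so this test file should appear under s3fs-public/.
$uri = \Drupal::service('file_system')->saveData(
  'Hello from S3FS',
  'public://s3fs-check.txt',
  \Drupal\Core\File\FileSystemInterface::EXISTS_REPLACE
);

// The generated URL should point at your bucket rather than the local server.
$url = file_create_url($uri);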

7. Clear the cache and try the first file upload

Now switch to the Actions tab and hit Refresh file metadata cache.

Then upload your first file via the node creation form, for example an image on an article. Save it and check the image path; it should look like http://your-bucket-name.s3-us-west-2.amazonaws.com/s3fs-public/... Your file was successfully uploaded to Amazon S3.

If you don't see the uploaded file, it is because your bucket is not public. Browse to your bucket, select the s3fs-public folder and choose Actions - Make Public.

8. Sync existing files to S3

Also on the Actions tab, there is a button to Copy local public files to S3. Just hit it and the upload process will run in a batch. If you do not have many existing files, just wait a few hours and it will be done.

In case you have a rather large set of existing files (I had 10GB of public files), the above method is just too slow. With the help of the AWS CLI, it becomes a piece of cake.

To install AWSCLI, follow the guide here: http://aws.amazon.com/cli/

On your shell terminal, enter:

sudo apt-get install python-pip
pip install awscli

Then configure AWSCLI:

aws configure

AWS Access Key ID [None]: ENTER
AWS Secret Access Key [None]: ENTER
Default region name [None]: us-west-2
Default output format [None]: json

After that, execute this command to sync the sites/default/files folder to your S3 bucket:

aws s3 sync sites/default/files s3://mybucket/s3fs-public --exclude "*.tmp"

Allow the sync process to complete and check the S3 bucket, you should see your files there.

Wrap up

Before using the S3FS module, I had a failed experiment with the Amazon S3 module. The process was more complicated and I hit a dead end.

The S3FS module, in my experience, is much easier to configure and deal with. And I hope you will get the same results as mine. Good luck.

Feb 20 2020
Feb 20


Summary

Mike Gifford’s mission is to build better and more inclusive software. He’s a Drupal 8 Core Accessibility Maintainer, founder of OpenConcept, an accessibility consulting firm, and all-around gem of the Drupal community. Mike’s been spearheading website accessibility improvements for over a decade, and we’re thrilled to have him on our podcast. You’ll learn a lot from this episode!

Guest

Mike Gifford of OpenConcept Consulting

Highlights

  • What is accessibility?
  • The four pillars of accessibility: POUR
  • Designing for 80%
  • Permanent, temporary, and situational disabilities
  • How Mike got started fixing Drupal accessibility issues
  • Drupal’s Accessibility Maintainers
  • Rachel Olivero, and the Drupal Olivero project
  • Pivoting OpenConcept Consulting from a Drupal shop to a consultancy
  • How OpenConcept eats its own dog food
  • The future of accessibility is personalization
  • The preferences framework widget
  • Establish trust by showing individual needs of website visitors are important
  • Accessibility and fonts
  • How do you help users when we can’t see their cues?
  • How accessible is Drupal Core?
  • Frontend pages AND backend pages must be made as accessible as possible
  • How are other CMSes and frameworks doing with regards to accessibility?
  • We are all affected by some sort of disability

Links

Transcript

IVAN STEGIC: Hey everyone! You’re listening to The TEN7 Podcast, where we get together every fortnight, and sometimes more often, to talk about technology, business and the humans in it. I’m your host Ivan Stegic. My guest today is Mike Gifford, Founder and President of OpenConcept Consulting, Inc. in Ottawa, Canada. OpenConcept is a web development agency specializing in Drupal, much like ours, and a benefit corporation, a B corp. Mike is also Drupal’s core accessibility maintainer and has been since 2012. Hey Mike, welcome. It’s great to have you on the podcast.

MIKE GIFFORD: It’s great to be on once again. It’s been a while, but definitely enjoy having an opportunity to talk again with you about Drupal and accessibility and things involved with digital tech.

IVAN: I love it. I can’t believe it’s been almost three years ago, episode 6. We’re coming up to episode 84. Wow. It’s been a long time. I feel like we’ve just scratched the surface.

MIKE: Absolutely. It’s amazing how time passes when you’re busy serving client needs and keeping up with changes in the Drupal community.

IVAN: Busy having fun I think, is what we call it. [laughing] So, let’s talk about accessibility. People throw that word around quite a bit, don’t they? I think we believe we know what it means, and some people say, “Oh your site has to be ADA compliant.” Other people say, “Yeah, we need WCAG or WCAD compatibility.” Why don’t we start with the definition for accessibility? What do we mean in our industry when we talk about that for a website?

MIKE: So, essentially it means removing barriers to make sure the people are able to access your content, whatever disabilities they have or whatever tools and devices they’re using to go off and to access it. The World Wide Web Consortium’s Web Accessibility Initiative broke this down into four main pillars that they’re using for the WCAG 2.0 framework, and I think to summarize it quite nicely:

Let’s make sure that your web content is perceivable so that people can understand it and read it and absorb the information.

Let’s make sure that it’s operable so if you’ve got some sort of interface people are working with, then the people can interact with the web forms, and they can engage with it, and click on the buttons and navigate the website.

Let’s make sure that it’s understandable. This is one that particularly government websites fail on. And there’s very, very few websites that really excel at plain language and making sure that things are written so that people can absorb the information on the flow and eliminate all that technical jargon and information that gets in the way of comprehension.

Finally, let’s make sure that it’s robust. So many websites work great if you’re in a dark room with a really new monitor that’s sitting there, but if you’re on your phone and navigating on a bright day, you’re going to have a hard time going off and viewing the information, accessing the information. So, let’s think about this in the real world situations that people engage with technology on. It’s not always going to be in that ideal environment where those light-gray-on-dark-gray backgrounds work.

You really need to be able to think about the context with which people are using the technology. So that’s how I think about technology within the web accessibility framework.

IVAN: And that’s POUR right? P-O-U-R. Perceivable, Operable, Understandable, Robust.

MIKE: That’s right. And there’s a whole bunch of criteria that you can use to evaluate your website based on that. It’s an interesting framework that’s designed to be technology agnostic. It doesn’t really matter what kind of technology you’re working with, whether it’s a PDF file or whether it’s a website—those principles are things that you can use to guide your thinking around accessibility.

IVAN: So those are high-level principles. That’s how we want to design from a design-first perspective so that the context of the design itself is available and accessible from anyone using the site.

MIKE: That’s right. There’s some really interesting work being done by people looking at inclusive design, and this isn’t a new movement, but there’s some neat work that Jutta Treviranus has set up to look at designing for the fringes. So often people think about the 80/20 rule, and it’s like, Well let’s just go off and design for that 80%, and then worry about that additional 20% later. We won’t necessarily factor that into the equation. Whereas, if you look at the fringes and design for the extremes, then you can be confident that everyone’s needs are going to be met, and you can work ahead to see that you’re able to deal with it.

And also, the 80/20 rule is a great concept for many things, but it doesn’t make a lot of sense. Like, how many businesses would just write off 20% of the United States? Would you get rid of New York and New Jersey? Whatever you think about them, would you just eliminate selling to those as potential customers, because it’s inconvenient for some reason? Probably not. It’s a large chunk of the population to ignore. So, thinking about the fringes and accessibility early on in the design process really allows you to go off and to serve a much broader range of the population than most people are aware of.

IVAN: The 80/20 rule is by definition exclusive. You are actually saying we’re excluding one-fifth of the population because we choose to.

MIKE: Because that’s inconvenient.

IVAN: Cause it’s inconvenient.

MIKE: Yeah. There’s really neat work done by Microsoft actually with the Inclusive Design Toolkit, and what they’ve done is try to look at, not just people with permanent disabilities, but to try and extend the definition out more broadly so that you have people that have permanent disabilities. But then there’s people who have temporary disabilities, and then there’s people who have situational disabilities. So, for example, a temporary disability might be you left your glasses at home, so you’re having trouble reading stuff at the office. Or, you’re in a situation where you’re taking medication, and so your eyesight may not be as good while you’re on this particular medication. Or, maybe you’ve broken your dominant arm, so you’re trying to navigate the mouse with your left hand, and it’s just not as good as it was with your dominant hand.

There are things like that, that are temporary issues that we all undergo as part of living in a complex world. But the situational ones are things like you’re in a noisy environment and you can’t use Siri to go off and interact, because Siri can’t cancel out all the noise that is coming from the area. Or, you’re in a situation where you want to use your laptop outside on a sunny day, and you can’t because there just isn’t enough contrast in the pages to be able to read the information effectively. So, again, thinking about all of these different ways that people interact, even if they’re not defined as having their disability, but there are times and places where everyone has a disability.

IVAN: Context is important, isn’t it?

MIKE: Yeah.

IVAN: You’re so passionate about accessibility. And besides it being the right thing to do, to be inclusive, to think about others that might not have the same abilities as you do. How did that happen? How did you get to be so passionate? Where did your start in accessibility come from?

MIKE: I think a huge part of my background came from having a good friend who has cerebral palsy, who is a real champion for disability rights and who schooled me on some of the theory about how to think about disability, and to think about the abilities that I have. So that certainly is a really key element to it. I started making changes to the Drupal community, getting them involved in affecting Drupal. And it suddenly became addictive because many people who work in the accessibility field go off and address a particular issue, and they fix a particular issue for a particular website. But that’s not what I was focusing my time and energy on. I was focusing my time and energy on fixing Drupal, which is 3% of the web.

So, I was able to go off, working with a whole team of other people to transition Drupal from being a reasonably good standards-compliant CMS to being by far the best, most accessible content management system out there, because of some of the work that I was spearheading. It was interesting to go off and look at ways of supporting people, and so the first Drupal Accessibility Maintainer was Everett Zufelt, who also taught me a great deal. When he was contributing the most to the Drupal community, he was working with me as staff at OpenConcept.

So, I was able to go off and learn from him as a blind user, and to learn what his experiences were with Drupal and review my own assumptions about what was possible, and how to address that. Everett is no longer an accessibility maintainer, but in Drupal 8 there are two other accessibility maintainers, Andrew MacPherson and Rain Breaw, who are taking on a role of pushing things ahead and addressing the accessibility community within Drupal.

So, we’re going to be having regular office hours, and I think it’s the last Tuesday of every month.

IVAN: I’ve just been so impressed with the amount of accessibility that Drupal has garnered in the last five years. It’s just been so nice to see the improvements that have happened. Of the maintainers, do we have any maintainers that have disabilities that have the inclusive perspective of actually using the web and being able to maintain Core from a disability perspective?

MIKE: Actually no, and that’s an area where we could actually use a lot of additional work. I don’t think that either Andrew or Rain have a disability, at least not that I’m aware of. But we haven’t had enough people in the Drupal community who have disclosed their disabilities and who have stepped up to get involved in the Drupal community. We’ve had a few people who have done that. Everett is one. Vincenzo Rubano was another. He came to DrupalCon Portland from Italy. And as an individual, Vincenzo has contributed more to Drupal 8 accessibility than all of the governments in the world combined, and this is what he did in the year before he started university. So, it’s quite an amazing accomplishment in many ways, and also, why is it that governments around the world are not doing more on accessibility? It’s a bit baffling.

But the last person we’ve had to sort of highlight in terms of people with disabilities who have been involved with the accessibility team was Rachel Olivero, who unfortunately died last year. So, that was a sad thing. She was quite involved in the diversity community within Drupal and had gone to two other DrupalCons, and unfortunately, she died suddenly and is no longer with us.

IVAN: I recall that. That was devastating news that we’d heard. I’d had such a wonderful dinner with her at that DrupalCon that we both attended on trivia night.

MIKE: Yes.

IVAN: I think she was part of the National Association of the Blind, if I’m not mistaken.

MIKE: That’s right. She was working with the National Federation of the Blind and had moved through a few different roles there, but had actually just launched their Drupal 8 website and had made a lot of advancements in that, and it was really nice to see. She had also made a couple of contributions to the Drupal community with accessibility bugs that she had identified. There is now the Olivero project within Drupal, the new theme which is going to be coming out with Drupal 9. I’m looking forward to going off and seeing that, a theme that’s being named in her honor. So, that’s lovely.

IVAN: That’s really lovely. We will link to that from the podcast episode page. So, if you’re listening, do visit the website for more information. Now, OpenConcept is focused on accessibility as a core part of your business, isn’t it?

MIKE: Yes, it is. We’re actually in a position where we’re pivoting from being a Drupal shop, where that’s primarily what we do, to having a role as a digital agency and doing more consultation and support work with others. Because of the work that I’ve done on accessibility, we’ve been able to take a systems perspective on accessibility and sustainability and security, and really look at this at a higher level, step back, and address these issues. So, we’re doing more work as a digital agency going ahead, and not just as a Drupal shop.

IVAN: Well, that’s a wonderful development for you. You certainly have the wherewithal and the knowledge to be providing that kind of consulting, so I love to hear that evolution. I love the fact that your website itself, openconcept.ca, eats its own dog food, so to speak. There’s a widget drawer at the top with some sort of preferences. I haven’t seen that before. It allows anyone to change, essentially, the design and the contrast and the typeface, and everything you would need to make the site more accessible, I would guess. Tell me about that preferences pane.

MIKE: Our own website is one that often doesn’t get as much attention as we’d like. So, we started a process to rebuild our website, and I’ve had to put that on hold because of some other issues. But, yeah, we’ve definitely built our website for accessibility, but accessibility is a journey; you need to be able to invest in it on a regular basis. So, I’d like to be doing more with our website than we are.

But specifically about the widget that we have on our website, I realize that one of our challenges with the WCAG process is that it’s about building a single website, and meeting the needs of everyone through a single website. But unfortunately, disabilities are such that that doesn’t really make any sense. There are people who have low vision and need high contrast. There’s people who have dyslexia and need low contrast. That’s just sort of one example. There’s people who get really frustrated when they see Comic Sans as a font. There’s other people for whom the best way to read content is with Comic Sans or OpenDyslexic or some other customized font.

So, how do you try to give people the range of exposure to go off and absorb information in a way that suits their needs? Having a single site is not going to be able to achieve all of those goals. So, we see that the future of accessibility is really towards providing personalization. Yes, you want to go off and meet a minimum standard requirement. You want to make sure that your default website is meeting the base level, the Perceivable, Operable, Understandable, Robust guidelines set forth in WCAG 2.1, which is the latest version. That’s the goal that people should be aiming for. But you can extend it to even more people by going off and allowing individuals to have personal choices.

The IDRC, which is the Inclusive Design Research Centre in Toronto, put forward a preferences framework widget that we’ve incorporated in our website. We did this because we were working with the Canadian National Institute for the Blind, which is the equivalent of the NFB in the US, or the RNIB in the UK, and we wanted to incorporate that within their website.

So, we first tested it on our website and looked to make sure that we could work through the bugs and unknowns and uncertainties with that tool on our website, before we went off and implemented it with our clients. Again, that idea of eating your own dog food and evaluating this and building the best practice by demonstrating the best practice is something that we wanted to be able to do.

So, we implemented this widget and have contributed back to the IDRC, because the preferences framework is an open source widget that we were able to build on and incorporate into Drupal as a Drupal module. There’s now Drupal 7 and Drupal 8 implementations of the preferences framework.

IVAN: If you design a site so that it’s meeting these accessibility parameters in WCAG 2.1, then do you need the preferences framework?

MIKE: You don’t for the legislation. If your goal is to try and make sure that you’re just checking a box, that you’re meeting the requirements and you’re not going to get sued, then no, you don’t need to worry about the widget. But if your goal is actually increasing the use and participation and usability of your site, if your goal is improving usability, then this widget is actually quite useful to go off and give your users a custom interface, or custom display, for the site. There are ways that people can override a site’s CSS with stylesheets custom built in their own browser, but that’s more complicated than most users would know how to do, and often it doesn’t work well with the website.

But, if you build in this framework then you can evaluate, Well, how does it work with the dark background? How does it work with the light background? How does it work with a yellow/black high contrast mode? And your developers and designers can evaluate some basic ideas to make sure that your SVG files show up appropriately, that you’re able to go off and provide a good experience for somebody, even if they do need to invert the color scheme. It’s useful to do that even for the number of people who are now preferring to use dark mode for their websites.
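
One concrete piece of that kind of evaluation is honoring a visitor’s own dark-mode setting. As a small illustrative sketch (this is not code from OpenConcept’s widget, and the class name is made up), a site could read the operating-system preference with the standard matchMedia API and expose it to the stylesheet:

    // Sketch: follow the user's OS-level dark-mode preference.
    // The "prefers-dark" class name is hypothetical; a real theme or
    // preference widget would map this onto its own styles.
    var darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

    function applyColorScheme(matches) {
      document.documentElement.classList.toggle('prefers-dark', matches);
    }

    applyColorScheme(darkQuery.matches);

    // Update if the user switches modes while the page is open.
    darkQuery.addEventListener('change', function (event) {
      applyColorScheme(event.matches);
    });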

IVAN: How do you deal with marketers and brand stewards of corporations and organizations who will, I’m sure, inevitably say something like, “That preferences pane destroys our brand. It’s not consistent with what our brand guidelines say.” How do you deal with that kind of pushback?

MIKE: I think I would say that if something like this destroys your brand, then your brand is not very strong. You want to be able to go off and have some control over your presentation, your site, the default settings, but ultimately a brand is about establishing trust with the customer. And there’s no better way to establish trust with your customer than to demonstrate that their individual needs are important to you, and that you can serve their individual needs.

So, what could possibly be better than having a widget on their website that says, You can buy your Nike runners even if you're blind or if you’re dyslexic. We’re going to make it easy for you to buy our products and give you the support you need, however it is that serves you best. And you’re not going to get stuck on stupid proprietary fonts or highly custom color combinations that were approved by some highly paid branding office. Ultimately the brand has to be strong enough that the trust and that care for the user shines through, and I think that this preferences framework is a part of that.

IVAN: I’ve been tracking the variable fonts recently. I know they’ve been around for quite a while now, but they’ve only recently been getting more traction, I would guess. Do you know anything about variable fonts and how they affect or not affect accessibility? I would imagine there’d be a relationship there.

You have more control over the kerning and the size. They’re somewhat different than regular fonts, which have specific sizes, and when you say bold, for example, they go to a specific bold typeface. You can actually change the size and characteristics of these variable fonts with CSS. And I would imagine that would be highly useful from an accessibility perspective.

MIKE: Fonts are definitely an interesting area and it’s amazing to see the changes in the web that make it look more attractive, more compelling. But there isn’t a lot in WCAG to address fonts. Sometimes it just comes down to a matter of judgment. A lot of times fonts are too narrow to be easily read, and there’s no standard way to evaluate how thick or how thin a font can be without affecting the accessibility of it. And, as our monitors get more and more refined, you can create thinner and thinner fonts.

So, I think we are going to get to the point where fonts are going to be more easily evaluated, but some of it comes down to even just base readability. Like, all the debates have happened between, you know, is Helvetica better than Arial? Better than Times? There’s a lot of studies on this, but again there’s nothing in the standards that we’re looking at that say, This is the best font, or This is the way that you’re going to address fonts in meeting accessibility. Because it’s hard to pin down, hard to go off and quantify, and so much about WCAG is about making it quantifiable.

It’s not just about opinions. It’s about a quantifiable, demonstrable barrier that you’re able to address. I can see that with variable fonts, one of the neat opportunities there is to be able to say, How would you use something like the preferences framework to allow a user to go off and customize the fonts?

IVAN: Exactly.

MIKE: Let’s say you want to have a fat font for this. The font doesn’t need to be larger, it just needs to be bolder, just make everything bolder. And you could with this, very easily go off and have a setting that allows users to have that ability to build that in, and to think about ways—or just switching fonts entirely. You can either stick with just making the font that was chosen customizable, to go off to meet your specific user’s needs. Or you can go off and say, Let’s give them the option to pick from five or six other fonts that might meet their needs and allows them to more easily absorb the information. Because ultimately what we want is the ability for an author to communicate to another person.

So, how do you communicate that information so that the author or presenter is able to go off and convey information as richly as possible for the person receiving it to absorb? If you were doing it in a face-to-face conversation, we know how to go off and slow the pace of our speech, or to speak more loudly if somebody has hearing impairments. We know how to do that because of personal cues that allow people to change their presentation for a particular individual.

But it’s harder on the web when you’ve got technology mediating that communication, and we don’t have those personal cues to guide us. So we need to actually give that opportunity for feedback to the user and encourage them to select preferences that allow them to more easily use your website.
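
To make the “just make everything bolder” idea above concrete: with a variable font, a user preference can simply feed a CSS custom property. This is a hypothetical sketch, not part of the IDRC preferences framework; the property and storage key names are invented for illustration.

    // Hypothetical sketch: persist a preferred weight and hand it to CSS,
    // where the stylesheet would use something like
    //   body { font-weight: var(--user-font-weight, 400); }
    function setPreferredFontWeight(weight) {
      document.documentElement.style.setProperty('--user-font-weight', String(weight));
      localStorage.setItem('preferredFontWeight', String(weight));
    }

    // Restore a saved preference on page load.
    var saved = localStorage.getItem('preferredFontWeight');
    if (saved) {
      setPreferredFontWeight(saved);
    }

    // A "bolder text" control in a preferences widget might call:
    // setPreferredFontWeight(700);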

IVAN: The standard that we’re looking at right now is WCAG 2.1, and people usually say that [level] A is the bare minimum. AAA is, Are you insane, do you have a lot of money? What is the goal of doing AAA? And AA is usually the one that organizations land on, if I’m not mistaken, to describe it loosely. Where are we at with Drupal for accessibility? For Drupal Core? And what’s next?

MIKE: So, just a bit of a correction. In the United States, the standard is still the revised Section 508 standard, which is pegged to WCAG 2.0 AA, and more or less that is the standard. Internationally, we’ve moved on from WCAG 2.0 AA, because that was the standard written in 2008.

IVAN: I didn’t realize it was that old.

MIKE: Yeah, so the original Section 508 that was in place up until January of 2017 was written in 1997, so that was how old the standard was for the Section 508. Which is still better than the—actually no, it wasn’t better than the WCAG 1.0—these are old, old standards. But WCAG 2.1 was released in 2018, so it’s a much more current guideline, and WCAG 2.2 should be released later this year. And the plan is to go off and make these releases much more regular, in order to keep up with the pace of technology. Waiting a decade or two between updates of accessibility standards truly does leave out a lot of people. So, the standards are evolving.

So, as far as where Drupal is, we’ve done a good job at Drupal Core of meeting WCAG 2.0 AA for both the frontend web pages and for the administrative pages. It’s not perfect, and there’s a lot of known issues in the accessibility queue, but we’ve addressed a lot of the base issues that people run into.

Because Drupal 8 has a lot of interactive elements, there’s content that changes on the fly with JavaScript, and we need to be able to add more support for this, so that we can alert screen reader users that the screen has changed. So, ARIA live is something that was introduced in Drupal 8. There’s a Drupal JavaScript function called Drupal.announce() that allows developers to go off and codify how screen readers are alerted to changes to dynamic content on the page. We need a lot more work done to implement that in Drupal 8. Drupal 8 has also done a lot of work on ATAG 2.0. ATAG 2.0 is the Authoring Tool Accessibility Guidelines, and this essentially says, we’re going to look at the authoring experience, and Drupal is an authoring tool. Part A of ATAG is, Let’s make sure that authors with disabilities can actually be publishers of content and not just consumers of content. Right?
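
For readers who haven’t used it, Drupal.announce() takes a message and an optional priority ('polite' or 'assertive') and routes it to an ARIA live region. Below is a minimal sketch of how a module’s JavaScript behavior might call it after updating content; the behavior name and selector are invented for illustration.

    // Minimal sketch of Drupal.announce() inside a Drupal 8+ behavior.
    // "exampleResults" and ".js-results" are hypothetical names.
    (function (Drupal) {
      Drupal.behaviors.exampleResults = {
        attach: function (context) {
          var container = context.querySelector('.js-results');
          if (!container) {
            return;
          }
          // After the container's content has been swapped in dynamically,
          // tell screen reader users what changed.
          Drupal.announce(Drupal.t('Search results have been updated.'), 'polite');
        }
      };
    })(Drupal);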

And this is something that Everett Zufelt really drove home to us: we couldn’t just go off and rely on making the frontend pages accessible, we also needed to make the backend pages accessible, or people like Everett were not going to be able to publish content using Drupal. So, in Drupal 7, we went and made some big advances in addressing the backend accessibility. That’s been carried over in Drupal 8, and that’s ATAG part A, making sure that the authoring interface is as accessible as possible.

Part B is actually, I think, more interesting and more useful, particularly for institutions, and I’m sad to see that there isn’t more attention paid to Part B of ATAG, because that’s about how we use these systems to make it easier for authors to create accessible content. We can’t expect authors to be accessibility experts; we need to think through the tools and the technology that they use to support and guide them in doing the right thing.

We need to set good defaults for accessibility in the authoring tool, because millions of people are adding new content to Drupal websites, the 3% of the web that runs on Drupal; that’s a huge number of people using Drupal on a regular basis who need to be involved. And if you don’t have the system helping authors make the right decisions, then it should be no surprise that you have accessibility issues being added by authors who are not familiar with best practices.

But if you get the tools involved in setting up proper constructs, then you can limit the damage that users can do. You can guide them to make the right decisions. And there’s a lot more that can be done in that space. We’ve done more than anyone else, but we have not done enough in that space.

IVAN: I like to hear that we’ve done more than anyone else, but I’d be even happier to hear that “everyone else” is close on our heels, and equally accessible. So, before we close, I wanted to hear your assessment of other competitors, other CMSes and frameworks out there, and what their accessibility is looking like. How do they compare? Let’s talk about maybe a couple of the open source ones we all know and love, like WordPress, and maybe talk about React or Gatsby, any of the things that come to mind for you. How do they compare?

MIKE: So, I was really quite hopeful with the WordPress community up until the Gutenberg debacle that came out. When I was in Holland in the fall, I had a great meeting with one of the WordPress accessibility leads that had stepped down because of how that was handled. It was a really interesting presentation that she and I had with others in Holland around accessibility. So, I’m less optimistic about the future of WordPress accessibility than I was, because of leadership issues within the WordPress community.

But there’s lots of good people involved in creating accessible themes in WordPress and that’s great. But it does require it to be a priority for the leadership in order for it to be really ingrained in the community, and that’s one of the things that Drupal has really stood out in.

I’m really impressed by Gatsby and with Marcy Sutton and others who have a really deep, ingrained passion for accessibility that they’re building into the process. So, if you’re building a Gatsby site, or a Gatsby page, accessibility checks are now just part of the process of doing a Gatsby build. And having that as a framework is just built into the process of how you build a good Gatsby website. That’s so wonderful. Marcy was involved in the axe-core team, which is an automated accessibility engine that the Deque folks built a while back, and it’s really been taking off. The Microsoft community is jumping on board with that, and Microsoft has built a tool called Accessibility Insights that uses it.

There’s also the Google Lighthouse tool that uses axe-core as well. So, it’s nice to go off and see that that’s built into Gatsby, and that there’s a commitment to that from senior levels in the Gatsby community.

I haven’t really seen a lot of other examples where content management systems are taking this seriously. I do think that Microsoft is an organization that we really need to be aware of, both because of their interest in open source and their passion for accessibility, and that has been a real transition in the last two or three years.

IVAN: Yeah, who saw that coming?

MIKE: Yeah, like what the heck? So, they’re incredible leaders in the space and making a lot more money because of it, and that’s both wonderful and fascinating. I certainly did not see that coming [laughing]. I was definitely not one of the people that expected this. But it’s quite wonderful, and I think it will be neat to see what Microsoft comes up with, and I’m not sure there’s enough money in the world to go off and make SharePoint accessible [laughing].

IVAN: [laughing] Yeah, SharePoint doesn’t have great, stellar accessibility. I mean, even for the rest of us, the user experience could be better.

MIKE: That’s right, but it is interesting that Microsoft has made a cultural shift in how they think about both open source and accessibility, and sustainability, for that matter. They’re committed to being carbon negative by 2030, so they are making some big, bold leadership commitments in the tech space, and I think that they will pay off for Microsoft, and I think that is something that others will follow. But I haven’t seen a lot from most others. Like React itself, I haven’t seen a lot of pickup and movement around this. I haven’t seen it from Angular or Core.

IVAN: What about Sitecore or any of the proprietary CMSes?

MIKE: I don’t think it’s part of the process. They’re not looking at building it to be accessible by default, and partly that’s because clients are not demanding it. There are not enough organizations who are demanding accessibility as part of the default system. I think this is changing. The governments in Europe are starting to be aware of this and looking at Drupal as a model, but it’s not being incorporated into the procurement process. So sales folks are not hearing this; it’s not something they’re losing contracts over.

IVAN: Yeah, and you would expect there to be a more vocal demand. As you know there was this report from the World Health Organization in 2011 that said that about 15% of the world’s population lives with some form of disability. And it’s probably higher than that, that’s almost 10 years ago that that report was done. So, you would think that there was a demand, that corporations would see this, and would move in that direction, but I guess not.

MIKE: That’s only the people with permanent disabilities that they were addressing. They weren’t counting either temporary or situational disabilities. If you’re looking at temporary or situational, it’s a higher percentage.

IVAN: Right. We all experience high sunlight when we go out and use our phones, don’t we?

MIKE: That’s right. It’s a universal thing. Unless you don’t go outside [laughing].

IVAN: Right. [laughing] And the same thing with getting older, we all lose some of our eyesight as we age. Our vision becomes impaired, and so there’s that to think about as well.

MIKE: That’s right, and you’ve got the aging baby boomer population. You think about the gray tsunami, all of those ideas require us to think differently about the web, because it’s not just the way that we see color change as we age, the way that we navigate websites changes. Disability is just part of life. We do not have the abilities we did when we were 20. This should not be a shocker to anyone right?

IVAN: I wish I had those abilities still, Mike. [laughing] It’s been so great talking to you. I feel like we didn’t even cover some of the things I wanted to get to, talking about the government and your work with the Canadian government and how you’ve been keeping track of the Accessible Canada Act. Would you come back sooner than the next three years, and we could have another recording and another episode, and we can get into those ideas as well?

MIKE: I would absolutely be keen on doing that, and then hopefully it’ll be something that’ll be done in person as well, which will be way more fun than doing it remotely.

IVAN: You know what, that would be great. Next time you’re in Minneapolis, I know you won’t be here for DrupalCon this year, but next time you are, let’s do that.

MIKE: That’d be great. And who knows, maybe I’ll find a way to get to DrupalCon. Maybe I’ll make that possible but it’s not in the cards right now, but, yeah, it’d sure be a lot of fun.

IVAN: That would be so much fun. Thank you so much for spending your time with me today. It’s been a great pleasure talking to you.

MIKE: No problem.

IVAN: Mike Gifford is Founder and President of OpenConcept Consulting Inc. in Ottawa, Canada. You can find them at openconcept.ca. They’re a web development agency that specializes in Drupal and is pivoting to be more of a strategic consulting firm. They’re also a B Corp. Mike is one of Drupal’s few core accessibility maintainers, and you can find him online @mgifford on Twitter. You’ve been listening to the TEN7 podcast. Find us online at ten7.com/podcast. And if you have a second, do send us a message. We love hearing from you. Our email address is [email protected]. Until next time, this is Ivan Stegic. Thank you for listening.

Feb 19 2020

Promet's acquisition last year of a team focused on user experience and strategy has opened an exciting new sphere for the types of conversations that we are having with clients.

The former DAHU Agency’s Human-Centered Design expertise has sparked many questions and, within a relatively short span of time, has driven the delivery of expectation-exceeding results for clients from a range of sectors.

As the name suggests, Human-Centered Design occurs within a framework for creating solutions that incorporate the perspectives, desires, context, and behaviors of the people whom our clients want to reach. It factors into every aspect of development, messaging, and delivery, and calls for:

  1. A deep and continuous questioning of all assumptions,
  2. A willingness to look beyond the “best practices” that others have established,
  3. An eagerness to find inspiration from anywhere and everywhere,
  4. The involvement and ideas of multiple stakeholders, from different disciplines, along with a process for ongoing testing, iterating and integration of feedback, and
  5. Constant emphasis on the concerns, goals and relevant behaviors of targeted cohort groups.


Within less than a year, this specialized approach has become fundamentally integrated into the ways that Promet thinks, works, and engages with clients. We intentionally practice design techniques that combine inputs from our UXperts, the client, and the end user, bringing empathy and human experience to the forefront of our process.

How Does this Approach Differ?

In contrast to traditional product-centered design, where the appeal, color, size, weight, features, and functionality of the product itself serve as the primary focus, Human-Centered Design creates solutions that understand audiences from a deeper perspective. We try to meet more than the basic need for a captivating design; the designs we create must also express a greater, more engaging purpose and meaning.


Among the approaches that we’ve found particularly useful is that of Abstraction Laddering, in which we guide interdisciplinary teams through the process of stating a challenge or a goal in many different ways, and continuing to answer “how” and “why” for purposes of advancing toward greater clarity and specificity. 


Human-Centered Design fuels simplicity, collaborative energies, and a far greater likelihood that launched products will be adopted and embraced. When practiced in its entirety it helps to ensure success. As such, it benefits everyone and is perfectly aligned with Promet's User Experience (UX) Design practice.

Design that Delivers

As we engage with clients in the process of deepening our understanding of their customers, we draw upon the expertise of our highly skilled and creative team members, and leverage expertise at the leading edge of the digital landscape.


The addition of this new Human-Centered Design team to the Promet Source core of web developers has helped us proactively approach new websites with a holistic mindset, combining our technology expertise with great design and function, along with an essential empathy for how humans interact with technology.

Contact us today to schedule a workshop or to start a conversation concerning Human-Centered Design as a strategy to accelerate your business goals. 

Feb 19 2020

Our normally scheduled call to chat about all things Drupal and nonprofits will happen TOMORROW, Thursday, February 20, at 1pm ET / 10am PT. (Convert to your local time zone.)

This month, in addition to our usual free-for-all, we'll be talking about hosting on Pantheon. There has been a lot of discussion in the community and on the Drupal Slack #nonprofits channel about some of the pricing changes they have implemented. If you would like to discuss and contribute to the conversation, please join us.

We will also have an update on our community's plans for the upcoming Nonprofit Technology Conference (20NTC).

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc: https://nten.org/drupal/notes

This free call is sponsored by NTEN.org but open to everyone.

REMINDER: New call-in information -- we're on Zoom now!

  • Join the call: https://zoom.us/j/308614035
    • Meeting ID: 308 614 035
    • One tap mobile
      • +16699006833,,308614035# US (San Jose)
      • +16465588656,,308614035# US (New York)
    • Dial by your location
      • +1 669 900 6833 US (San Jose)
      • +1 646 558 8656 US (New York)
  • Follow along on Google Docs: https://nten.org/drupal/notes
  • Follow along on Twitter: #npdrupal

View notes of previous months' calls.

Feb 19 2020

One of the biggest benefits of an open-source community like Drupal is the ability to collaborate with fantastic people that you wouldn’t otherwise have the opportunity to work with. However, when you have an idea that you think would be a good initiative for a Drupal core release (such as Drupal 9) you might find yourself thinking: "How do I even begin? How can I advocate for my idea?” We all find ourselves asking these questions as we navigate the complex journey of turning an idea into a core initiative.

During DrupalCon Seattle, a handful of people had a casual conversation in a hotel lobby. This conversation turned into an official Drupal core strategic initiative to create a new default front-end theme for Drupal 9. Here's the story of how that happened, the steps we took, and the work we did before opening the project to the community.  

The Beginning: Is "Your Idea" Already in the Works?

On the last day of DrupalCon Seattle, Mike Herchel (Senior Developer at Lullabot), Lauri Eskola (Drupal core committer and front-end framework manager), Angie Byron (Drupal core committer and product manager), and I had a conversation about what exactly distinguishes a good CMS theme. Naturally, that led to the discussion of the current status of Drupal 9.

Mike and I were surprised to find out that there was no initiative in place for a Drupal 9 default theme. Having been in the community for quite a while, we knew that Bartik was created for Drupal 7 and has long served as the default theme, but it’s nearly ten years old. By 2019, the design had begun to look dated and no longer spoke to Drupal's strengths.

We began envisioning what kind of first impression a clean and modern default theme would have on users when Lauri mentioned something along the lines of, “Why don't you get involved since Drupal 9 is just around the corner and is expected to be released around mid-2020?” We were excited by the idea and that we already had buy-in from a key figure within the community. On our way to the airport the following morning, Mike and I continued chatting about ways this project could start.

Setting Goals: Identify Why This Initiative Matters

Before announcing to the world that you have an idea that can be shipped into Drupal core, stop and ask yourself what your goals are for the project. Mike and I started by writing down some of the pain points and challenges of the current status quo. As Dries pointed out in his keynote, experts love Drupal. However, Drupal as a CMS still has a negative connotation among beginners for its outdated interfaces and user experiences. Therefore, prioritizing the beginner experience through potential initiatives like the new default theme, guided tours (aka Out of the Box initiative), and WYSIWYG page building would give Drupal a much-needed new look and feel that users expect from a modern CMS.

Here are the three goals that we identified:

  • Update to modern design: Design a theme that feels modern and ages well for the next 5 to 10 years.
  • Add functionality that supports new features: Include support for commonly used Drupal functionality, such as second-level navigation, embedded media, layout builder, and more.
  • Create a WCAG AA compliant theme: Work closely with accessibility experts and the Drupal Accessibility team to ensure that the theme passes Drupal’s stringent accessibility standards with flying colors.

Drupal Core Ideas Queue

Setting these goals helped us stay focused on what we needed to do and got us prepared to open an “idea issue” for the redesign and development of a theme that could ship with the release of Drupal 9. The ideas queue section of Drupal.org lets us propose ideas for Drupal core and take them through validation and planning phases.

Here’s a link to the issue that we submitted to the Drupal ideas project: https://www.drupal.org/project/ideas/issues/3064880

Forming the Band: Putting Together Your Initial Team

With any big or small initiative, you can't do the work all by yourself. You need a team that can help bring in new perspectives and fill in the areas that are outside of your discipline. Once we knew our idea was valid and sought-after by the community, Mike engaged Lullabot designers Jen Witkowski and Jared Ponchot to lead the design effort for the new Drupal 9 default theme. Kat Shaw and Matthew Tift also joined for assistance with accessibility and project management.

Identifying Stakeholders

Part of this team's responsibility was to identify design stakeholders who could help us refine the design. We iterated on the design multiple times internally before presenting it to the community to avoid bikeshedding. Doing this helped speed up the proposal process, which was one of the key contributing points to us getting traction and building excitement for this core initiative.

The following people were chosen as stakeholders:

Document and Design

As the discovery process started to take shape, we continuously documented all of the discussions we had regarding the project. Documentation isn’t as fun or exciting as writing code, but it's one of the contributing factors to keeping us on track and getting to our goal of releasing a proposal to the community. 

Meanwhile, we worked with our stakeholders to identify adjectives that would help guide the visual design. We created a sliding-scale exercise where stakeholders could add points across several tone spectrums, a common practice that the design team at Lullabot likes to conduct on client projects. Some of these were one adjective versus another (“formal” not “casual”), while others highlight the importance of a balance (“approachable” and “official”).

Voice and Tone of the Theme

Below are keywords that were identified to serve as the voice and tone of the new theme:

  • Formal
  • Light and bright (vs. dark & impactful)
  • Contemporary
  • Approachable and official
  • Novel (with some constraint)
  • Cool
  • High contrast with some restraint
  • Light (not heavy)

Design Principles

The following principles were established through research and collaboration, and are useful for guiding future additions and feedback for changes.

  • Accessible: WCAG AA conformance is a top priority for the design. The layout, colors, and functionality should provide an accessible theme that can be enjoyed by everyone.
  • Simple: Avoid unnecessary visual elements, colors, effects, and complexity.
  • Modern: Take advantage of the capabilities and strengths of modern browsers and interaction patterns.
  • Focused: Embrace high contrast, saturated color, and negative space to draw the eye to what’s most important.
  • Flexible: Provide options and overrides to account for varied needs and preferences.

The Meeting / Feedback Loop

Although this initiative is not a client project nor one that we work on daily, we established a routine of meeting every week to discuss what needed to be done to present a design to the stakeholders. Once we established the design principles and the voice and tone, we used zoom mocks to explore style using the adjectives and design principles as our guide. We presented these to the stakeholders, who chose a design with which to move forward.

We continued to iterate on the chosen design direction based on the feedback from the stakeholders. The design process continued with the addition of internal accessibility testing, which highlighted several contrast deficiencies that we subsequently fixed.

Proof of Concept

Throughout the process, we built a prototype in static HTML, CSS, and JavaScript. The intention was to validate the new features and help answer potential UI/UX issues that might arise during the design process. We also used it as an opportunity to vet the use of the CSS grid and ensure the front-end architecture could be accessible, as well as work with Internet Explorer 11 (and other core supported browsers). This proof of concept is not yet fully accessible, although it will be eventually. The next step is to get full sign-off from the Drupal accessibility team, which will hopefully alleviate last-minute time crunches when submitting the patch to Drupal core.
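
As one generic example of that kind of vetting (this is not code from the actual Olivero prototype, and the class names are placeholders), a few lines of JavaScript can detect CSS grid support and add a hook for fallback styles in browsers such as Internet Explorer 11:

    // Hypothetical progressive-enhancement check; class names are made up.
    // IE11 lacks CSS.supports() as well as modern grid support, so it ends
    // up in the "no-cssgrid" branch and can receive float/flexbox fallbacks.
    var supportsGrid =
      typeof CSS !== 'undefined' &&
      typeof CSS.supports === 'function' &&
      CSS.supports('display', 'grid');

    document.documentElement.classList.add(supportsGrid ? 'cssgrid' : 'no-cssgrid');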

The following are key activities we’re focusing on:

  • Investigating the use of the header on scroll interaction on mobile and tablet devices.
  • Validating the use of the CSS grid in legacy browsers such as Internet Explorer 11 and identifying whether or not we’ll need to account for progressive enhancement features.
  • Verifying that the markup is semantic and meets the accessibility requirements.

Community Announcement: The Formal Processes on Submitting Your Idea to the Community

Once the design was in a good place, we drafted a proposal to the community and sought feedback for the work that had been done (see link - Designs for new front-end theme for Drupal 9). The announcement issue included several processes that we took to get to where we are today. The response from the community was overwhelmingly positive, and we were thrilled to see the excitement.

What's next?

The Drupal 9 theme initiative is currently underway. If you're interested in contributing to the new Olivero frontend theme effort, please check out "How to contribute to Olivero" and get involved with the team.

Building Olivero was the first time some of us have contributed to a Drupal core initiative, and admittedly, it can be scary and a little overwhelming. Sometimes you don't feel like you have enough years of experience or enough in-depth specific knowledge. But no matter what your background or experience level is, chances are there’s something you can do to help within the open-source community. In our case, we happened to be in the right place and know the right people. However, having a well-thought-out proposal, identifying key stakeholders, and having a phenomenal team involved can give legitimacy to your idea. I hope hearing the journey of how we got here provides some helpful takeaways and inspires you to jump-start your idea and advocate for getting your initiatives into Drupal core.

A huge special thank you to everyone who has contributed to the Olivero project so far! We wouldn’t be where we are without your support. 
