Oct 27 2020

Drupal 9.1 is set for release in December 2020; it will enter the alpha phase this month, October 2020, and have a beta release in November 2020. This minor version update brings more deprecations and updates to third-party packages. In addition, there are a few notable changes arriving in the Drupal 9.1 release.


A New Default Drupal Theme

Olivero is the new Drupal theme, currently in an experimental state. The default theme, Bartik, has been around since 2011 and has served well over the years. After the Drupal 8 release, a new version of Bartik was released that was responsive out of the box. However, with changing web design trends, Bartik looks dated next to the capabilities of Drupal 9, which called for a new, modern, clean theme. This also matters for evaluators of Drupal, who should get a good first impression when they assess Drupal for their next big project.


The Olivero theme has been added to Drupal core and will ship with Drupal 9.1.0 as an experimental theme. The maintainers plan to reach a stable release and make it the default theme, swapping out Bartik, in the next minor release, Drupal 9.2.

Native Image Lazy Load

One of the biggest factors for a successful website is how performant the site is. Lazy-loading of page elements, especially images, is a powerful way to increase perceived performance, reduce time-to-first-render, and reduce user friction on the web.

Images are the most requested assets during a page load and take up more bandwidth than other resources. Lazy loading images can be done using JavaScript plugins or Intersection Observers, and in Drupal there are contributed modules available that do the job. However, these solutions are not native and often require complex setup and configuration to achieve proper lazy loading of images.

The HTML 'loading' attribute, pioneered by the Chrome platform, lets you simply add loading="lazy" to an image to defer loading of the resource until it reaches a calculated distance from the viewport.

This is now added to all images out of the box and will be available in the Drupal 9.1.0 release. It is supported by Chromium-based browsers such as Chrome, Edge, and Opera, as well as Firefox (support in Safari is still in progress). This will noticeably improve the perceived performance of Drupal sites, and combined with other mechanisms like BigPipe, it will greatly improve performance overall.
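For illustration only (the file path and alt text here are made up), the markup Drupal renders for an image field with native lazy loading looks roughly like this:

```html
<!-- The browser defers fetching this image until it scrolls near the viewport. -->
<img src="/sites/default/files/photo.jpg" alt="A photo" width="640" height="480" loading="lazy">
```

Note that explicit width and height attributes matter here: they let the browser reserve layout space before the deferred image arrives, avoiding content jumps.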

Updated to PHPUnit 9

Another update arriving in Drupal 9.1.0 is for developers. The third-party dependency PHPUnit, which powers the testing framework used in Drupal, will be updated to the latest version while keeping support for PHPUnit 8.4 for backward compatibility. This update was made to prepare Drupal 9 for PHP 8, which will be released on November 26th, 2020.

Other changes which are included are mostly deprecation of methods and introducing overridable services. A full list of change records can be found here.

Oct 27 2020

A lot of the time, a custom content entity type only needs a single bundle: all the entities of this type have the same structure. But there are times where they need to vary, typically to have different fields, while still being the same type. This is where entity bundles come in.

Bundles are to a custom entity type what node types are to the node entity type. They could be called sub-types instead (and there is a long-running core issue to rename them to this; I'd say go help but I'm not sure what can be done to move it forward), but the name 'bundle' has stuck because initially the concept was just about different 'bundles of fields', and then the name ended up being applied across the whole entity system.

History lesson over; how do you define bundles on your entity type? Well there are several ways, and of course they each have their use cases.

And of course, Module Builder can help generate your code, whichever of these methods you use.

Quick and simple: hardcode

The simplest way to do anything is to hardcode it. If you have a fixed number of bundles that aren't going to change over time, or at least so rarely that requiring a code deployment to change them is acceptable, then you can simply define the bundles in code. The way to do this is with a hook (though there's a core issue -- which I filed -- to allow these to be defined in a method on the entity class).

Here's how you'd define your hardcoded bundles:

/**
 * Implements hook_entity_bundle_info().
 */
function mymodule_entity_bundle_info() {
  $bundles['my_entity_type'] = [
    'bundle_alpha_' => [
      'label' => t('Alpha'),
      'description' => t('Represents an alpha entity.')
    ],
    'bundle_beta_' => [
      'label' => t('Beta'),
      'description' => t('Represents a beta entity.')
    ],
  ];

  return $bundles;
}

The machine names of the bundles (which are used as the bundle field values, and in field names, admin paths, and so on) are the keys of the array. The labels are used in UI messages and page titles. The descriptions are used on the 'Add entity' page's list of bundles.
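For illustration (the entity class name and field values here are made up), the bundle machine name is what you set when creating an entity of that bundle in code:

```php
<?php

use Drupal\mymodule\Entity\MyEntityType;

// Create and save an entity of the 'bundle_alpha_' bundle. The key
// 'bundle' is whatever your entity type declares as its bundle entity key.
$entity = MyEntityType::create([
  'bundle' => 'bundle_alpha_',
  'label' => 'My first alpha entity',
]);
$entity->save();
```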

Classic: bundle entity type

The way Drupal core does bundles is with a bundle entity type. In addition to the content entity type you want to have bundles, there is also a config entity type, called the 'bundle entity type'. Each single entity of the bundle entity type defines a bundle of the content entity type. So for example, a single node type entity called 'page' defines the 'page' node type; a single taxonomy vocabulary entity called 'tags' defines the 'tags' taxonomy term type.

This is great if you want extensibility, and you want the bundles to be configurable by site admins in the UI rather than developers. The downside is that it's a lot of extra code, as there's a whole second entity type to define.

Very little glue is required between the two entity types, though. They basically each need to reference the other in their entity type annotations:

The content entity type needs:

 *   bundle_entity_type = "my_entity_type_bundle",

and the bundle entity needs:

 *   bundle_of = "my_entity_type",

and the bundle entity class should inherit from \Drupal\Core\Config\Entity\ConfigEntityBundleBase.
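To make that concrete, here is a minimal sketch of what the bundle config entity class could look like (all names are hypothetical, and a real implementation would also declare handlers, admin links, and so on):

```php
<?php

namespace Drupal\mymodule\Entity;

use Drupal\Core\Config\Entity\ConfigEntityBundleBase;

/**
 * Defines the bundle config entity for the my_entity_type entity type.
 *
 * @ConfigEntityType(
 *   id = "my_entity_type_bundle",
 *   label = @Translation("My entity type bundle"),
 *   bundle_of = "my_entity_type",
 *   config_prefix = "my_entity_type_bundle",
 *   entity_keys = {
 *     "id" = "id",
 *     "label" = "label",
 *   },
 *   config_export = {
 *     "id",
 *     "label",
 *   },
 * )
 */
class MyEntityTypeBundle extends ConfigEntityBundleBase {

  /**
   * The machine name of this bundle.
   *
   * @var string
   */
  protected $id;

  /**
   * The human-readable label of this bundle.
   *
   * @var string
   */
  protected $label;

}
```

Each saved entity of this config entity type then defines one bundle of the content entity type, just as each node type entity defines one node bundle.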

Per-bundle functionality: plugins as bundles

This third method needs more than just Drupal core: it's a technique provided by Entity module.

Here, you define a plugin type (an annotation plugin type, rather than YAML), and each plugin of that type corresponds to a bundle. This means you need a whole class for each bundle, which seems like a lot of code compared to the hook technique, but there are cases where that's what you want.

First, because Entity module's framework for this allows each plugin class to define different fields for each bundle. These so-called bundle fields are installed in the same way as entity base fields, but are only on one bundle. This gives you the diversification of per-bundle fields that you get with config fields, but with the definition of the fields in your code where it's easier to maintain.

Second, because in your plugin class you can code different behaviours for different bundles of your entity type. Suppose you want the entity label to be slightly different for each bundle. No problem: in your entity class, simply hand over to the bundle plugin:

class MyEntityType extends ContentEntityBase {

  public function label() {
    $bundle_plugin = \Drupal::service('plugin.manager.my_plugin_type')
      ->createInstance($this->bundle());
    return $bundle_plugin->label($this);
  }

}

Add a label() method to the plugin classes, and you can specialise the behaviour for each bundle. If you want to have behaviour that's grouped across more than one plugin, one way to do it is to add properties to your plugin type's annotation, and then implement the functionality in the plugin base class with a conditional on the value in the plugin's definition.

/**
 * @MyPluginType(
 *   id = "plugin_alpha",
 *   label = @Translation("Alpha"),
 *   label_handling = "combolulate",
 * )
 */
class PluginAlpha extends MyPluginBase {

}

class MyPluginBase {

  public function label() {
    switch ($this->getPluginDefinition()['label_handling']) {
      case 'combolulate':
        // Return the combolulated label.
    }
  }

}

For a plugin type to be used as entity bundles, the plugins need to implement \Drupal\entity\BundlePlugin\BundlePluginInterface, and your entity type needs to declare the plugin:

 *   bundle_plugin_type = "my_plugin_type",

Here, the string 'my_plugin_type' is the part of the plugin manager's service name that comes after the 'plugin.manager' prefix. (The plugin system unfortunately doesn't have a standard concept of a plugin type's name, but this is informally what is used in various places such as Entity module and Commerce.)
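For example (the service and class names here are hypothetical), a plugin manager registered in mymodule.services.yml like this would be referenced on the entity type as bundle_plugin_type = "my_plugin_type":

```yaml
# mymodule.services.yml
services:
  plugin.manager.my_plugin_type:
    class: Drupal\mymodule\MyPluginTypeManager
    parent: default_plugin_manager
```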

The bundle plugin technique is well-suited to entities that are used as 'machinery' in some way, rather than content. For example, in Commerce License, each bundle of the license entity is a plugin which provides the behaviour that's specific to that type of license.

One thing to note on this technique: if you're writing kernel tests, you'll need to manually install the bundle plugins for them to exist in the test. See the tests in Entity module for an example.

These three techniques for bundles each have their advantages, so it's not unusual to find all three in use in a single project codebase. It's a question of which is best for a particular entity type.

Oct 27 2020

How did the COVID-19 crisis affect client relationships, and what can we take away from it? Take our global survey to help collect insights about how client relationships developed over the last year, what our highlights and lowlights were, and what we can learn from them.

The survey is open until November 8, 2020.

An anonymised summary of the survey results will be published in a blog post on Drupal Planet in November 2020.

You can then also participate in our online retrospective session at DrupalCon Europe on Wednesday 9th December, 15:15 to 17:15 UTC.

Thank you for sharing your insights to help everyone grow. The data collected in this survey will be processed respecting everyone’s privacy. Link to the survey.

Photo by Julia M Cameron from Pexels

Oct 26 2020

Our nominations

This year 1xINTERNET got nominated in three categories: Non-Profit, Government & Enterprise.

IFOAM Organics got nominated in the Non-Profit category. The organization is a membership-based body working to bring true sustainability to agriculture across the globe. The challenge during this project was to create consistent logic in displaying content as well as a great editorial experience for a large number of editors. To make this happen, we created a highly customizable backend. Each content page can use a different colour theme, and paragraphs within the page can also have their own colour settings. The structure of the pages is up to the editors and depends on what the content is about. Basic pages act as overview pages or landing pages (with their own navigation) for the other content types. We implemented various views for specific content types (like news or events), as well as reusable elements, which allow the editors to modify a piece of content in one place and have it change everywhere the element is used.

Bodensee Schiffsbetriebe - BSB got nominated in the Government category. BSB runs a fleet of passenger ships and ferries and is a unique provider of local public transportation in Germany. BSB was looking for a professional partner who could provide the technical expertise for the implementation of a modern, mobile-optimised website and who also had extensive experience in online marketing. To achieve this, our team at 1xINTERNET developed a solution to personalize the content for certain user groups of the BSB website. Visitors are segmented via targeted marketing campaigns or their behaviour on the website. To enhance the user experience throughout their stay on the website, preferred content is displayed matching the interests of the visitors.

Transgourmet got nominated in the Enterprise category. 1xINTERNET was asked to relaunch all websites of Transgourmet and their brand Selgros Cash and Carry. The task was to create a consistent and flexible solution that would enable the relaunch of all of the company's websites. This required a robust, secure, and efficient CMS whose development and maintenance could be decoupled over time. For a company like Transgourmet, a project like this is an essential part of completing its digital strategy.

Oct 26 2020

This year DrupalGov is virtual. The PreviousNext team is sponsoring and helping to run the DrupalGov 2020 Sprint Day on Wednesday 4 November, and there are a few things you can do now to hit the ground running on the day.

This year the DrupalGov sprint will be virtual

We’ll start the day with a brief Zoom meeting to introduce the organisers, and outline how the day will run.

We’ll use #australia-nz Drupal Slack as the main communication channel, with ad hoc Zoom or Meet video calls for those who want to dive deeper into a topic.

For the majority of the day, we’ll be using Slack threads to keep track of sprint topics and reduce the noise in the main channel.

Join us on Slack

If you haven’t already done so, now is a great time to sign up and join the Australian / New Zealand Drupal community in Slack. Instructions for how to join are here: https://www.drupal.org/slack

Let us know about your experience

Please fill in the following survey to let us know about your experience with Drupal, and the areas you’re interested in Sprinting on. This will help us better prepare for the day.

https://www.surveymonkey.com/r/2DPWDPL

How to contribute

Sprint day is not just for developers! Contribution comes in many forms. If you’re interested in the different ways you can contribute to this amazing project, see the list of contributor tasks: https://www.drupal.org/contributor-tasks

Tagging issues to work on

If you want to see what might be an interesting issue to work on, head over to the Drupal.org Issue Queue and look for issues tagged with 'DrupalGov 2020'. These are issues that others have tagged.

You can also tag an issue yourself to be added to the list.

Set Up a Development Environment

There is more than one way to shear a sheep, and there is also more than one way to set up a local development environment for working on Drupal.

If you don't already have a local development environment set up, we recommend using Docker Compose for local development. Follow the instructions for installing Docker Compose on OSX, Windows, and Linux.

Once you've set up Docker Compose, you need to set up a folder containing your docker-compose.yml and a clone of Drupal core. The instructions vary depending on your operating system; we have instructions below for OSX, Windows, and Linux, although please note the Windows version is untested.

Mac OSX

mkdir -p ~/dev/drupal
cd ~/dev/drupal
wget https://gist.githubusercontent.com/larowlan/9ba2c569fd52e8ac12aee962cc9319c9/raw/e69795e7219c9c73eb8d8d171c31277eeb5bcbaa/docker-compose.yml
git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose run -w /data/app app composer install

Windows

git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose run -w /data/app app composer install

Linux

mkdir -p ~/dev/drupal # or wherever you want to put the folder
cd ~/dev/drupal
wget https://gist.githubusercontent.com/larowlan/63a0f6efacee71b483af3a2184178dd0/raw/248dff13557efa533c0ca297d39c87cd3eb348fe/docker-compose.yml
git clone --branch 8.9.x https://git.drupalcode.org/project/drupal.git app
docker-compose up -d
docker-compose exec app /bin/bash -c "cd /data/app && composer install"

If you have any issues, join us in the #australia-nz channel on Drupal Slack beforehand and we'll be happy to answer any questions you might have.

Install dreditor browser extension

Dreditor is a browser extension that makes it easier to review patches on Drupal.org. It's a must for anyone contributing to Drupal.

There are versions for Firefox and Chrome.


Being face-to-face with fellow contributors is a great opportunity to have discussions and put forward ideas. Don't feel like you need to come away from the day having completed lines and lines of code.

Code of conduct

To provide a safe and inclusive environment, the sprint day will abide by the DrupalSouth Code of Conduct: https://drupalsouth.org/code-of-conduct

We look forward to seeing you all there!


Posted by kim.pepper
Technical Director

Dated 26 October 2020

Oct 24 2020

This month’s SC DUG featured Mauricio Orozco posing questions about getting started as a consultant to long-time members who have all done Drupal consulting work.

https://www.youtube.com/watch?v=4NG5lgDOFzs

If you would like to join us please check out our upcoming events on Meetup for meeting times, locations, and remote connection information.

We frequently use these sessions to practice new presentations, try out heavily revised versions, and test new ideas with a friendly audience. So if some of the content of these videos seems a bit rough, please understand we are all learning all the time and are open to constructive feedback. If you want to see a polished version, check out our group members’ talks at camps and cons.

If you are interested in giving a practice talk, leave me a comment here, contact me through Drupal.org, or find me on Drupal Slack. We’re excited to hear new voices and ideas. We want to support the community, and that means you.

Oct 23 2020

Drush is the ultimate tool and companion for Drupal, and it does a great job whenever we want to import or export our Drupal database.

Export/Backup Drupal database

The best way to export the Drupal database is to rebuild/clear the cache first and then export the DB with the sql-dump command. Clearing the cache first empties the cache tables in the database and significantly decreases the database size.


  1. The first step is to rebuild your cache before database export
    drush cr
    
  2. Now run the sql-dump command with the --result-file option to export the Drupal 8 database to the desired location. This will dump the database into a gzip file without cache table content. Run it without a value to let Drush pick a default backup location, or pass a path explicitly.
    drush sql-dump --gzip --result-file
    
    drush sql-dump --gzip --result-file=/path/to/file.sql
    

Export database in Drupal 6 or 7

  1. For Drupal 6 and 7, you also need to clear the cache first to empty the cache tables in the database.
    drush cc all
    
    
  2. Now run the sql-dump command to export the Drupal 6 and 7 databases. The export will be a gzip file without cache table content.
    drush sql-dump --gzip --result-file
    
    drush sql-dump --gzip --result-file=/path/to/file.sql
    

Import Drupal database

The commands below can be used to import a SQL dump file into the Drupal database. Drop the existing database tables first, then run the import command, which will import the data into Drupal's current database.

drush sql-drop -y
drush sql-cli < ~/path/to/file.sql
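Since the exports above are gzipped, a handy variation (a sketch; adjust the path to wherever your dump lives) is to stream the decompressed dump straight into drush sql-cli without writing an intermediate .sql file:

```shell
# Drop the existing tables, then pipe the decompressed dump into the database.
drush sql-drop -y
gunzip -c ~/path/to/file.sql.gz | drush sql-cli
```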
 
Oct 22 2020

Through a mix of deep technical experience, a solutions-oriented approach, a commitment to maintenance and reliability, and our technology stack, Amazee Labs is well-positioned to be the solution provider for your next enterprise digital project.

Let’s take a look at the reasons why enterprises are putting their trust in Amazee Labs and our Open Source Tech Stack over Adobe Experience Manager.


What is the Amazee Labs Open Source Tech Stack?


As you might have guessed, the golden thread that ties our technology stack together is that all the tools in our stack are completely open source.

Our approach speaks to the fact that information and content have become some of the most valuable resources for many organisations. Consequently, we believe that the technology supporting your business’ digital presence is not just providing the virtual business card or magazine it once was - it is now both a vault and scaled distribution system for some of your most valuable communication tools and assets.

By dissolving traditional dependencies between content management and content presentation, you gain the freedom to build user interfaces that speak to your specific audience, meeting them where they are at, and providing them with what they need.


Graphic representing the Amazee Labs Open Source Tech Stack

At the core of our Open Source Stack is the Drupal CMS, which is integrated with an open source search engine to support dynamic and fast content search systems. 

Next in our stack, we use Gatsby to power your website front-end. Gatsby is a Progressive Web App generator. The Gatsby front-end builder creates a front-end application that is lightning fast, and hugely scalable. 

Finally, being deeply embedded in the Drupal ecosystem for over 10 years, we have seen multiple hosting providers come and go. The hosting layer in our stack is run by our sister company amazee.io. Their open-source, cloud-native application delivery platform - Lagoon - powers thousands of websites in the UK, Europe, the USA, Australia, New Zealand, and Japan. The list of locations is growing all the time. 

And yes, Lagoon is entirely open source.

Before we get to answering the question of why enterprises are trusting Amazee Labs and our Open Source Stack over Adobe Experience Manager (AEM), let's take a quick look at the AEM software stack.


What is Adobe Experience Manager?


Adobe Experience Manager (AEM) is Adobe’s enterprise-grade content management solution for managing and deploying digital content. 

As you would expect from any enterprise offering, AEM can deploy content directly to HTML in traditional web architecture, and can also be deployed as a headless CMS to drive decoupled websites, mobile applications, or really any application that you’d like to consume your content. What you may be surprised to learn is that Adobe Experience Manager also rests heavily on an Open Source base.

Argil DX provides a good overview of the AEM architecture in their post about The Open Source Core of AEM, but here are the highlights to support your comparison. The AEM stack comprises three key components which are all Open Source:

  • AEM Core rests on a Java Content Repository (which is based on the open-source Apache Jackrabbit project).
  • The modular architecture of AEM is provided by OSGi (Open Services Gateway Initiative), an open-source framework which is implemented as Apache Felix, which is supported by Adobe developers as well.
  • Apache Sling provides the RESTful architecture and an open-source web framework, which communicates with the Content Repository. 

In addition to these core components, AEM also works with several other open source projects such as MongoDB, Apache Solr (built on Apache Lucene), and Elasticsearch. Finally, the AEM project is packaged in a multi-module structure, with dependencies managed by Apache Maven.

Points of comparison


Price

If you’re hunting around for your next Enterprise Digital Platform, there is a good chance that you landed here with the preconception that Open Source is your cheaper option. 

On the surface, you’d be right. 

Adobe Experience Manager uses an enterprise licencing model, much the same as the other software and tools likely to be found in your enterprise. You can expect to pay at least $40,000 in annual licence fees for AEM, and this can scale quickly if you have complex or large-scale digital needs.

By comparison, our Open Source Stack will cost you an annual licence fee of precisely zero dollars, zero francs, zero euros, and zero pounds. Amazee Labs was founded directly on open source principles. This is baked into our DNA.

However, it would be misleading of us to say that Drupal is inherently cheaper. When looking for your next digital platform, you should take into account the full and total cost of ownership, which includes at least: licences, setup, development and customization, reliability & security (including upgrades), training, and often hosting. 

Factoring all this in, the difference between AEM solutions and our open source solution shrinks. But, we don’t design our solutions to be cheap - we designed our stack to be cost-effective. 

The real difference from a pricing perspective - what you gain from our open-source solution (and any open-source solution, really) - is the freedom to choose what matters most to you. 

If you have complex needs, our solution scales to match. If you have simpler needs, we have that covered as well. You pay for the commensurate value the solution adds to your business right from the start. No bloat, and no unnecessary overhead licensing costs.

Development, customization, and extensions

Adobe has a network of deeply skilled agencies and partners that can implement Adobe Experience Manager for you. One of the challenges, however, is that to truly benefit at an Enterprise level, you often need to (or will be encouraged to) combine more of Adobe’s software and tools together, driving more and more vendor lock-in.

We have open source running through our veins, and vendor lock-in runs deeply against our company culture and values. Our approach to development is to use tried-and-tested plugins and modules where possible but to make sure they come together in a coherent and holistic way. And if you have a specific requirement, we can build that for you. 

With Drupal at our core, you’re able to configure complex editorial experiences and workflows. You’re able to support separate authoring and publishing environments and to preview what content changes might look like before pushing them to production. You can even create centrally developed content that can be pushed into separate markets or regions to be customized. Want to see what your regions have changed? No problem, we have that covered as well. Oh, and we routinely support all of this in multilingual environments!

Gatsby lets us consume your raw content, and to transform it for your audience through beautifully crafted website and web application experiences which are lightning-fast, accessible, and future proof.

Reliability and Security

When building custom features for you, we always think about how the software we’re writing will be maintained. One of the criticisms of open-source platforms is that it's easy for modules and plugins to get out of control. This is true, but it’s hardly just an open-source problem. “Engineering for maintenance” is required regardless of the platform you’re building on.

AEM vendors deal with this by curtailing your choices.  
 
We deal with this by proudly being one of the agencies with a maintenance team that stands independently from our development teams. Through our Managed Web Maintenance service, your digital platform’s reliability and security is not an afterthought but the very reason several people get up in the morning!

Cybersecurity is a key concern for enterprises. Adobe takes the lead in the security provisions of AEM sites. Our software stack is backed first by the Amazee Security Team, who are in turn in direct contact with the open-source projects' security teams tasked with keeping the components of our Open Source Stack secure. Through these relationships, your security team is ultimately made up of thousands of passionate open-source engineers and experts.

Enterprise Innovation


The competitive landscape that enterprises find themselves in is increasingly fast-paced. “Idea today, test tomorrow” is how enterprise digital and communication teams are expected to act. 

While AEM is built on an open-source foundation, it is still very much characterized by a slower and less agile approach to experimentation and changes.

Amazee Labs has a dedicated extension team that will help you plan, design, execute, and measure new features and changes to your digital platform. With our Open Source stack, you can move quickly. 

Scaling


Decades into its existence, open-source solutions are still fighting against the cliche that open-source digital platforms have problems scaling, despite the fact that the vast majority of the internet is in some shape or form resting on vast oceans of open-source software.

Much like Adobe Experience Manager, we achieve scale for your platform through supporting you in planning, configuring, and running your infrastructure. While we support on-premise enterprise hosting, an increasingly popular choice for enterprise clients is to host their infrastructure in the cloud. With Amazee Labs, you can use our cloud - or in the spirit of freedom and choice - you can bring your own!

“Easy to say”, you say? The same systems that run our enterprise platforms, run Australia’s GovCMS. Check out this case study to learn how during the initial months of COVID-19, GovCMS scaled to support: 

  • Traffic up to 168TB/month, highest single day was 8.4TB
  • Up to 1.96 billion hits/month, highest single day was 103 million
  • Peak concurrent users on just one single site hit 157,000
  • Peak page views in a single hour was 478,000
  • Peak page views in a single minute was 89,000
  • All while maintaining a 99.9% SLA for the high-profile sites being managed

Why are enterprises choosing our Open Source Tech Stack?


We hope by now, the answer is obvious! 

Enterprises are choosing the Amazee Labs Open Source Tech Stack because, while retaining a focus on security, reliability, and agility, our Open Source Stack unlocks your freedom of choice and helps you move quickly. It empowers you to craft beautifully designed bespoke sites and apps.

Our Open Source Stack lets Enterprises have not only the digital platform they need but the digital platform they want, all at a price that their procurement teams love!

But don’t take our word for it
 

Would you like to see our Open Source Tech Stack in action? 

Reach out to us today to arrange a demo and take our stack for a spin!
 

Oct 22 2020

Third and Grove was recognized in the 2020 Acquia Engage Award program, winning an award for Leader of the Pack - Technology, for work with CloudHealth by VMware.

After the acquisition by VMware, the CloudHealth team needed to redesign their digital experience to align with VMware’s broader product portfolio and optimize for B2B lead generation efforts. In the first phase of the effort, we focused on redesigning the site’s blog. The blog had substantial content, but the goal was to leverage that content better to increase engagement. After the blog redesign went live, the team turned to redesigning the main marketing site.

There were four challenges. The first was that the existing site was on an old installation of Drupal 7 and did not take advantage of the tremendous editorial experience functionality available in Drupal 8 and 9. The site also lacked product extensibility. Specifically, the navigation and information architecture needed to be updated to support the growing business and product lines and future product announcements. The final challenge was to update the UX to be more elegant, properly reflect CloudHealth’s cutting-edge technology, and help improve lead generation.

We formed a single team composed of the CloudHealth marketing team and our implementation team to design and build, from the ground up, a brand new, engaging, and elegant visitor experience. Together we implemented an agile approach, working in sprints, from ideation to execution to launch.

See it in action at https://www.cloudhealthtech.com/.

More About Acquia Engage 2020 Awards

More than 100 submissions were received from Acquia customers and partners. Nominations noted for functionality, integration, performance, and UX advanced to the final rounds, where an outside panel of experts selected the winning projects.

Acquia Engage Award winners were announced at the Acquia Engage Conference, virtually, on October 21st, 2020.
 

Oct 22 2020

Third and Grove was recognized in the 2020 Acquia Engage Award program, receiving an award for Leader of the Pack - Financial Services, for work with The Carlyle Group.

The Carlyle website was overdue for a visual and functional refresh. It needed to reflect the business’s position as a leader in the investment space and showcase its new, forward-thinking digital brand positioning. Goals for the new website included a flexible content system that would allow Carlyle to control its content and put content first, as well as expand its original content strategy to show current and potential investors what makes Carlyle unique.

The Carlyle Group had a very ambitious timeline for this renovation. And as one of the top-five private equity firms globally, there was no margin for error and no pixels could be misplaced. The bar was as high as bars can be raised.

Working closely with the Carlyle team and Sub Rosa, the creative agency behind the design, the TAG project team led an aggressive schedule, keeping all three teams closely aligned throughout the ideation, implementation, testing, refinement, and seamless launch.

See it in action at https://carlyle.com.

More About Acquia Engage 2020 Awards

More than 100 submissions were received from Acquia customers and partners. Nominations noted for functionality, integration, performance, and UX advanced to the final rounds, where an outside panel of experts selected the winning projects.

Acquia Engage Award winners were announced at the Acquia Engage Conference, virtually, on October 21st, 2020.  

Oct 22 2020

The impact of this year’s shelter-in-place orders and social distancing guidelines, along with every aspect of how we have coped -- or not coped -- with Covid-19, will be analyzed for years to come. Big changes have emerged in the ways that we interact with the world, and the next normal will likely bring with it many differences in the ways that we live, work, and play.  

From a web accessibility perspective, the pandemic is likely to usher in widespread changes. Now that online interactions are the primary means by which we engage with the world, expectations have risen, along with a general understanding that the internet needs to be accessible to everyone, as well as compliant with ADA accessibility guidelines.
 

Web Accessibility Advances

People of every age and with a wide range of disabilities are now working online, learning online, shopping online, connecting to medical care online, socializing online, and a lot more. We have replaced in-person events with virtual conferences, Zoom meetings and remote learning. Many medical facilities are asking patients to go online for information or even complete online visits. So how are we doing in ensuring that online activities are thoroughly accessible to people with disabilities, at a time when fighting isolation and stress can be a huge uphill battle? 

According to the US Department of Labor, in 2019, 19.3 percent of individuals with a disability in the United States were employed. Many of these same individuals now need to work remotely, which is serving to shine a spotlight on the kinds of tools and resources that they need. Here are some of the challenges they are facing:

  • Documents and correspondence are often emailed without consideration given to color contrast ratios, fonts, and text size that would make them accessible to people with low vision. 

  • PDFs are being sent that are not created in an accessible manner, causing challenges for screen reader users. 

  • Video recordings of recent meetings or conferences are often sent without captions or a transcript added. For someone who is deaf or hearing impaired, these videos can be hard to understand, especially in circumstances where they cannot lip-read.

Free webinar next week: Web Design for Accessibility, October 27, 2020, 11 a.m. Central

What's Needed Now

In addition to new ways of working, so many basic services have shifted online. Everything from ordering groceries to seeking medical care has gone virtual. How much attention is being paid to the accessibility of these services? 

  • Complex layouts need to be simplified to make navigation accessible for individuals with cognitive disabilities.
  • Animations and videos on sites need to have a pause option built in to reduce seizure risks, and in some cases, avoid interaction issues with screen readers. 
  • Forms need to be fully accessible by including such items as matching visual and programmatic labels, screen reader accessible requirement instructions, and accessible error notifications. 

The Covid-19 pandemic is shining a bright light on the need for full web accessibility. 

In an environment where all aspects of our lives have moved online, we can no longer consider web accessibility to be a “nice to have” or a feature that we can put off until a later point. Creating fully accessible websites, media, and correspondence has always been both good business and the right thing to do. Covid-19 is helping us to understand why. 

Looking for information and insights on creating online experiences that are accessible, inclusive and ADA compliant? Contact us today.

Oct 20 2020

The AddEvent module for Drupal 8 lets you place easy-to-use Add to Calendar buttons on your website that integrate with Apple, Google, Yahoo, Outlook, and Office 365 calendar apps among others. The module also lets your users subscribe to entire calendars managed by AddEvent, adding all of your scheduled events to their calendars and keeping their calendars updated if your event details change.

The module is powered by AddEvent and you’ll need an AddEvent account for it to work properly. Free accounts include one calendar, restricted event-adds, and limited functionality; paid accounts include multiple calendars, higher subscriber limits, and a wide variety of other features. You’ll be able to build Add to Calendar proof-of-concept functionality with the AddEvent free version in just 15-20 minutes — but a professional tier account is necessary for any commercial level use.

First things first: Create an AddEvent account

Head over to AddEvent.com and register for an account. AddEvent is a paid service that offers a variety of features and high volume usage for licensed customers — but they also provide a free Hobby account that will get you started. After creating and verifying your account with the free Hobby plan, navigate to the Account screen by clicking on the user menu in the upper right corner of the screen, then clicking Account. From there, click Domains in the left-hand navigation, then follow instructions to add your domain.

AddEvent Module: domain whitelisting
The domain where you will be using an Add to Calendar button needs to be whitelisted with AddEvent.

The Hobby plan only allows the use of one domain — but you’re able (at the time of writing this, anyway) to add a local development environment so that you can test without deploying to a publicly accessible website. When you’re ready to make the switch, you can simply remove your development domain and add in your live domain.

Install the Drupal 8 AddEvent module

Once you’ve registered for the service you’ll need to install the AddEvent module in your Drupal 8 website. You can install a Drupal 8 module in a variety of ways, but the easiest is probably using your terminal, starting from your project’s root directory. First require the module using composer:

composer require drupal/addevent

Then enable the module using drush:

drush en addevent

Now you’re all set to start using the AddEvent module. You can immediately create single-use Add to Calendar buttons using custom blocks, or combine custom blocks with a custom Event node type and the Token module to automatically produce Add to Calendar buttons for all of your events.

Add to Calendar: Static event buttons

Creating a static Add to Calendar button for a single event is super simple. Before we configure a block with our Add to Calendar link, though, we’ll need to fetch our Client ID from AddEvent. To do that, simply navigate to the Account page at AddEvent.com using the menu in the upper right corner of the screen. You should see a Client ID and API Token — just keep this window open or paste the Client ID in a safe place.

AddEvent Module: Client ID
You'll need the Client ID to render your Add to Calendar buttons without the AddEvent branding.

Now on your Drupal site navigate to Structure > Block layout (/admin/structure/block) and click on Place block in the region you would like to place your Add to Calendar button. In my case, I’ll demonstrate adding the button to the Content region of a basic page that lives at the URL /my-basic-page.

From the Place a block dialogue that follows, click Place block next to the Add to Calendar item. Fill out the following fields as indicated to get your first example working:

  • Title: Add to Calendar
  • Display title: not checked
  • Button text: Add to Calendar
  • Title: My example event!
  • Description: My example event description!
  • Start: 10/31/2020 8:00 AM (the format is important, the date isn’t)
  • End: 10/31/2020 9:15 AM
  • Client: [Client ID from above]

Under the Customization portion of the form, paste your Client ID in the License field. This will remove the AddEvent branding from your button dialogue.

Finally, in the Visibility section of your block setup, choose Pages and write in the URL of the page you’d like the block on — in my case I’ll use /my-basic-page. Click Save block and navigate to the page you configured your block to appear on.

AddEvent module: an Add to Calendar button
A basic Add to Calendar button will present options for adding to Google, Yahoo, Apple, Office 365, Outlook calendars and more.

That’s it! Click the button and choose a calendar type to add the event to your calendar — it’s that easy. Next we’ll look at dynamically adding buttons to a custom Event node page.

Add to Calendar: Dynamic event buttons

You can create dynamic event buttons by using the Token module with a custom Event content type. Any node type with a properly configured date field will work. Make sure the Token module is installed before proceeding — use whatever Drupal module installation method works best for you. For this example, we’ll create a super simple Event node type. In my example below, I’ve added two custom fields of the Date type: field_event_date_start and field_event_date_end.

AddEvent module: Event content type
A custom Event node type with date fields for start date and end date, for use with dynamically created Add to Calendar buttons.

Once your Event content type is set up, you just need to add a new Date format that will work well with the AddEvent module. To do that, navigate to Configuration > Date and time formats (/admin/config/regional/date-time) and click the Add format button. You can see below that I’m calling my format AddEvent and using the format string m/d/y g:i A which will translate to something like 10/30/20 4:45 PM — exactly what we need for our Add to Calendar button.

AddEvent module: date format
Define a custom AddEvent-compatible date format to get your Event content type working with Add to Calendar buttons.

Now we’ll place a new Add to Calendar block just like in the previous section, only we’ll use tokens for the Event title, description, and the start and end dates. When you open the Configure block interface for your new Add to Calendar block, click the Browse available tokens link in the description area of the Event Information section. The tokens you’re looking for will be in the Node group. Notice that the token options for field_event_date_start and field_event_date_end indicate our custom AddEvent format — tokens will be available for all of your custom date formats. Check out my example below.

AddEvent module: tokens
The Token module lets block configuration be dynamic, allowing your Add to Calendar buttons to pull data from Event pages.

Everything else will be the same as it was in the first example in the section above, with the exception of the Visibility settings. For this block, we’ll restrict visibility by Content Type, ensuring that the block only appears on our custom Event node type pages.

AddEvent module: Block visibility
Set block visibility to the Event content type to automatically create an Add to Calendar button on Event pages.

Once you’ve clicked Save block you’re all set. Now your Add to Calendar button will appear on every Event page in the region you specified, and it will inherit all of the event details from the specific event page.

Subscribe to an AddEvent calendar

Paid AddEvent accounts can manage multiple calendars of events on the AddEvent dashboard, and they have access to robust features like RSVP, custom fielded calendar events, subscriber reports, analytics, and more. While the Hobby account can demonstrate how Subscribe to Calendar works, the feature isn’t much use with the free-version restrictions in place.

Setting up a Subscribe to Calendar button is super simple. Before setting up the block on your Drupal site, you’ll need the Unique Key for your AddEvent calendar. From your Dashboard on AddEvent.com, click the menu options icon next to your calendar of choice (Hobby accounts only have one calendar) and click Calendar Page. Now copy the Unique Key for use in the block configuration below.

AddEvent: Calendar Unique Key
The Unique Key of your calendar, used with the Subscribe to Calendar block type, subscribes your users to entire AddEvent calendars.

On your Drupal site, navigate to Structure > Block layout (/admin/structure/block) and click Place block in the region of your choice. Choose the Subscribe to Calendar block type, then paste your Unique Key from above in the Data-ID field. You may want to place your Client ID in the License field like in previous configurations, to remove the AddEvent branding from the interface.

You can set up the Visibility settings however you wish; for my example I’ll stick with the /my-basic-page URL like before.

AddEvent module: Subscribe to calendar
Subscribe to Calendar buttons add all of the events in an AddEvent calendar to your users' calendars, and update them regularly.

You’re all done! Now your users can subscribe to the calendar(s) you maintain on your AddEvent account. As you add and edit events on your AddEvent calendars, your subscribers’ calendars will be automatically updated. Very cool!

NOTE: Different calendar services synchronize at different speeds and some are very slow. Google Calendars, for example, can take up to 24 hours to synchronize changes from calendar subscriptions.

If you have suggestions, comments, or issues related to the AddEvent module, please let me know on the AddEvent module page! And feel free to leave questions or general comments below.

Oct 20 2020

I've recently been working on some quality of life updates for the Drupalize.Me codebase and various DevOps things. My goal is to pay down some of our technical debt, and generally make doing development on the site a more pleasant experience. In doing so I noticed that I was spending a lot of time waiting for MySQL imports -- that means others probably were too. I decided to see what I could do about it, and here's what I learned.

That MySQL table is HUGE!

The database for the site has a table named vidhist. That's short for video history. It has a record in it for every instance someone interacts with one of the videos on our site. We use the data to do things like show you which videos you've watched, show you a history of your usage over time, and to resume where you left off last time you played a specific video. Over time, this table has grown... and grown... and grown. As of this morning, there are a little over 5.7 million records in the table. And it's 2.8GiB in size!

Screenshot of Sequel Pro application showing statistics for vidhist table including 5.7 million rows and 2.8GiB size

That is all just fine -- except when you need to downsync a copy of the database to your localhost for development. Downloading the backup can take a while. The file is about 650Mb when compressed. Just importing all those records takes forever. I use DDEV-local for development. Pulling a copy of the database from Pantheon and importing it takes about 30 minutes! It's not as bad when exporting/importing locally. But you can imagine how working on testing an update hook could get quite tedious.

Output from ddev pull command with 35 minute timer.

I usually truncate the vidhist table on my localhost just so I don't have to deal with it. The data is important in production and for historical records, but is rarely necessary in development environments.

To get an idea of the impact this has, here are some of the other places this table can cause issues:

  • Downsync to local environment takes about 30 minutes
  • Cloning a database from one environment to another in Pantheon is slow; I don't have great metrics on this other than just watching the spinner for what seems like forever.
  • We use Tugboat.qa for automatic preview environments for every pull request. It's already quite fast to build a new preview despite the large DB size (about 2 minutes), but each preview instance is using about 8Gb of space, so we're often running into our limit. When the script runs to update our base preview image every night it takes about 18-20 minutes.
  • We occasionally use Pantheon's multidev feature, and building those environments takes just as long as the ddev pull command due in part to the large table.
  • We have an integration test suite that takes about 11 minutes to run. Combined with the need to build a testing environment, it can take over 40 minutes for our tests to run.

Most of this is just robots doing work in the background -- so it's pretty easy to look the other way. But, there's a few places where this is really affecting productivity. So, I made a few changes that'll hopefully help in the future.

Clean up the database on Pantheon

Using Pantheon's Quicksilver hooks we execute the following script anytime the database is cloned from the live environment to any other environment:

<?php
/**
 * @file
 * Remove most of the records from the vidhist table to keep the size down.
 */

if (defined('PANTHEON_ENVIRONMENT') && (PANTHEON_ENVIRONMENT !== 'live')) {
  echo "Downsize the vidhist database...\n";

  define('DRUPAL_ROOT', $_SERVER['DOCUMENT_ROOT']);
  require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
  drupal_bootstrap(DRUPAL_BOOTSTRAP_DATABASE);

  db_query('CREATE TABLE vidhist_backup AS SELECT * FROM vidhist WHERE updated > UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 1 MONTH))');
  db_query('TRUNCATE vidhist');
  db_query('LOCK TABLE vidhist WRITE, vidhist_backup WRITE');
  db_query('INSERT INTO vidhist SELECT * FROM vidhist_backup');
  db_query('UNLOCK TABLES');
  db_query('DROP TABLE vidhist_backup');

  echo "Database downsize complete.\n";
}

This code is executed by Pantheon (see configuration file below) whenever the database is cloned and goes through these steps:

  • Verify that this is NOT the production environment
  • Bootstrap Drupal enough so that we can query the database
  • Run a series of queries that removes all but the last month's worth of data from the vidhist table. If you're curious, we run multiple queries like this instead of a single DELETE WHERE ... query because TRUNCATE is significantly faster. So we first create a temp table, then copy some data into it, then truncate the vidhist table, copy the temp data back into it, and delete the temp table.

We have a pantheon.yml similar to this which tells Pantheon where to find the script and when to run it.

# Pantheon config file.
api_version: 1

workflows:
  clone_database:
    after:
      - type: webphp
        description: Reduce size of the vidhist table in the DB.
        script: private/scripts/quicksilver/truncate_vidhist.php
  create_cloud_development_environment:
    after:
      - type: webphp
        description: Reduce size of the vidhist table in the DB.
        script: private/scripts/quicksilver/truncate_vidhist.php

As a result, the vidhist table on all non-production environments is a fraction of the original size. It's a relatively small change to make, but the impacts are huge.

Clone operations from non-production environments are significantly faster. And, since we configure DDEV to pull from the Pantheon dev environment by default, running ddev pull on my local machine is also much faster now. It's closer to 2 minutes instead of 30!

Output from ddev pull showing 2 minute timer.

This also helps reduce our disk usage on Tugboat.qa. Because we have Tugboat configured to pull the database and files from the Pantheon test environment, it too gets a smaller vidhist table. Our build time for previews is almost a full minute faster with previews now building in an average of 1 minute 11 seconds!

Tip: You can use this same technique to do things like sanitize sensitive data in your database so that it doesn't get copied to development environments.
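For instance, a sanitization step can follow the same Quicksilver pattern as the vidhist script above. This is only a hypothetical sketch (the users table's uid and mail columns are standard Drupal 7 schema; adapt the query to whatever data you need to scrub):

```php
<?php
/**
 * @file
 * Hypothetical sketch: sanitize user email addresses on non-live environments.
 */

if (defined('PANTHEON_ENVIRONMENT') && (PANTHEON_ENVIRONMENT !== 'live')) {
  define('DRUPAL_ROOT', $_SERVER['DOCUMENT_ROOT']);
  require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
  drupal_bootstrap(DRUPAL_BOOTSTRAP_DATABASE);

  // Replace every real address with a per-user dummy (uid 0 is anonymous).
  db_query("UPDATE users SET mail = CONCAT('user+', uid, '@example.com') WHERE uid > 0");
}
```

Registered in pantheon.yml under the same clone_database workflow, a script like this guarantees real addresses never reach a development copy.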

An aside about Tugboat

I originally updated our Tugboat config.yml file to perform this cleanup of the database after pulling it down from Pantheon in an attempt to use less resources there. I later added the script for cleaning up the Pantheon DB above. It looked like this:

update:
    # Use the tugboat-specific Drupal settings
    - cp "${TUGBOAT_ROOT}/.tugboat/settings.local.php" "${DOCROOT}/sites/default/"
    - cp "${TUGBOAT_ROOT}/docroot/sites/default/default.settings_overrides.inc" "${DOCROOT}/sites/default/settings_overrides.inc"

    # Generate a unique hash_salt to secure the site
    - echo "\$settings['hash_salt'] = '$(openssl rand -hex 32)';" >> "${DOCROOT}/sites/default/settings.local.php"

    # Import and sanitize a database backup from Pantheon
    - terminus backup:get ${PANTHEON_SOURCE_SITE}.${PANTHEON_SOURCE_ENVIRONMENT} --to=/tmp/database.sql.gz --element=db
    - drush -r "${DOCROOT}" sql-drop -y
    - zcat /tmp/database.sql.gz | drush -r "${DOCROOT}" sql-cli
    - rm /tmp/database.sql.gz

    # Remove most of the records from the vidhist table.
    - drush -r "${DOCROOT}" sql-query "CREATE TABLE vidhist_backup AS SELECT * FROM vidhist WHERE updated > UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 1 MONTH));"
    - drush -r "${DOCROOT}" sql-query "TRUNCATE vidhist;"
    - drush -r "${DOCROOT}" sql-query "LOCK TABLE vidhist WRITE, vidhist_backup WRITE;"
    - drush -r "${DOCROOT}" sql-query "INSERT INTO vidhist SELECT * FROM vidhist_backup;"
    - drush -r "${DOCROOT}" sql-query "UNLOCK TABLES;"
    - drush -r "${DOCROOT}" sql-query "DROP TABLE vidhist_backup;"

But, while writing this blog post I realized that's probably not necessary. Since Tugboat is pulling the database from the Pantheon test environment, not the live one, the table will have already been cleaned up. Which, also means updating the base preview in Tugboat is going to be significantly faster than I had originally thought. Just gotta go open a new PR...

Recap

I'm kind of embarrassed that it took me this long to address this issue. It's easy to say, "Meh, it's just a few minutes." But over time those minutes can really add up. Not to mention how frustrating it must be for someone trying to get started working on the site who isn't accustomed to going to make a cup of coffee while they wait for the DB to import.

I encourage you to occasionally step back and consider the everyday motions you go through without really thinking about them. There may be room for improvement.

Oct 20 2020

The internet was first built in the 1960s with the purpose of better communication for the military and scientists. The Web then transformed into a revolutionary phenomenon around the 1990s and has been unstoppable ever since. Although the purpose of the web kept changing, one thing that remained constant was convenience. 

The need for convenience and effectiveness brought about many innovative ways to access the internet. Native mobile apps and web apps are two such technologies that have made internet browsing easy and convenient. In this article, we will talk about Progressive Web Apps and how you can implement them with Drupal using the Drupal Progressive Web App module. But before we dive into all of that, let’s look at the features of native and web apps and how progressive web apps fill the gap between the two. 


Features of Native Mobile Apps

  • They are platform-specific apps, which means they need to be rebuilt for every platform (iOS, Android). 
  • Need to be downloaded.
  • Usually super-fast.
  • Rich in features and functionalities. 
  • Can blend seamlessly into any device and feel like a part of it.
  • Can work offline.
  • They can access device data, device hardware and local file system easily.
  • More expensive to develop, maintain and upgrade.
  • Are pre-approved for security and can be downloaded at App stores. 
  • Hard for search-engines to crawl.

Features of Web Apps

  • They work independently of platform; all you need is a browser to access them. Most modern browsers are supported, and there is nothing to download.
  • There is no particular SDK for developing them. The frontend is built with HTML, CSS, and JavaScript, with a LAMP or MEAN stack for the backend.
  • No need to upgrade. Lesser development and maintenance costs.
  • Although they require authentication, security is a concern because they can be vulnerable to unauthorized access.
  • They don’t work offline and can be slower than native mobile apps.
  • They aren’t listed on App stores so discovering them may be harder.

What are Progressive Web Apps and how do they fill the gap?

So, in short, native mobile apps are highly capable but limited in their reach, whereas web apps have a wider reach but lack capabilities. That is where Progressive Web Apps come in to bridge the gap.

Progressive Web Apps offer an ideal combination of the benefits of native apps and web apps. Using modern web capabilities, Progressive Web Apps (PWAs) can deliver app-like experiences to users, combining features offered by most modern browsers with the benefits of the mobile experience. You can build native-app-like, extremely complex, installable apps. With WebAssembly now supported by most browsers, PWAs can be built in languages of the developer’s choice, widely increasing the scope and flexibility of the functionality they can offer.

Features of Progressive Web Apps

  • They are platform and device independent. Works beautifully on any browser.
  • They load fast and are extremely reliable (even with a low internet speed). Scrolling is very smooth and fluid.
  • Can work offline too.
  • Native app-like push notifications can be enabled.
  • Can access device hardware and data like native apps.
  • Shortcuts can be added on the user’s home screen (instead of downloading them)
  • No need for complex installations. Can share the URLs easily.
  • Responsive across all devices.
  • They are easier and faster to develop. Maintenance is easy as well.

Before talking about the PWA module in Drupal, let’s look at the minimum requirements to build a PWA -

  • Should be run over HTTPS. 
  • Should include a Service Worker – a service worker is a JavaScript file that the browser runs in the background over HTTPS, separate from the web page, acting as a programmable proxy between the app and the network. It provides native app-like features such as offline content delivery, push notifications, etc.
  • Should have a Web App Manifest – which is a JSON file containing metadata with information about the web app like the name, description, author and more. This is also useful for search engine optimization.

The PWA Drupal Module – How to make Progressive Web Apps with Drupal 9 (and 8)

The Drupal PWA module is easy to install and comes with a Service Worker (for caching and other offline app-like capabilities) and a manifest.json that you can configure. You will, however, need to make sure you have SSL installed before you begin the PWA installation process. If your requirements are extremely specific, with tons of customizations, you can develop the PWA using front-end frameworks like Angular or React and customize your own Service Worker. 

Installing the PWA Drupal 9 Module

With Drupal 7, installing the Progressive Web App Drupal module was as easy as downloading and enabling the module. You could generate the manifest.json file via a config form and validate it. However, in Drupal 9, we cannot integrate this functionality directly just by enabling the PWA module, because it does not provide an option to configure the manifest.json file. 

  1. Install the module by downloading and enabling the PWA Drupal module.
  2. For Drupal 9, apply this patch

  3. Once done, navigate to Configuration -> PROGRESSIVE WEB APP -> PWA settings and add the required information.


Service Worker

  • URLs to cache – This is where you can specify the pages that need to be available even when offline. The URLs mentioned here will be cached; make sure you flush the cache whenever you make any updates here.
  • URLs to exclude – If you have pages that absolutely must work only with an internet connection, mention them here.
  • Offline page – Display a personalized page to your users when they are offline and the page isn’t cached. 

Manifest.json

The Drupal manifest.json file is what allows users to add the PWA to their home screen. It contains configurations that you can modify to change how your PWA will behave – like the name, display name, background color, orientation and more. 
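As an illustration, a minimal manifest.json might look like the following. All values here are placeholders; you would set your own through the module's settings form:

```json
{
  "name": "My Drupal Site",
  "short_name": "MySite",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0678be",
  "icons": [
    {
      "src": "/sites/default/files/pwa-icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}
```

The name, colors, and icons control how the installed shortcut looks and how the app launches once added to a home screen.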


The file will be added to the head tag of your index page:
<link rel="manifest" href="/manifest.json">
 

Manifest.json

The image below shows the “Add to the home screen” option; once used, an icon is created on the home screen.

Progressive Web Apps
Oct 19 2020

“Development workflow aligns team members, keeps them engaged, and drives a balance between speed and quality of code changes.”

In today’s competitive environment, organizations are constantly generating new ideas that can act as the front doors to building businesses. Starting a new web development project means a blank slate, along with the opportunity to try out new technologies. Over the past few years, developing a strong development workflow has stopped being a matter of process for its own sake. In other words, a robust development workflow is an opportunity that can make a huge difference to the efficiency of your team as well as the quality of your product. 

However, when it comes to the Drupal development workflow, things are not as easy as they may look. That is to say, many Drupal-based organizations lack a standard process that needs to be adopted and implemented. Consequently, this leads to real problems, from a chaotic approach to code submission and review to an inability to determine where and why things inevitably break.

Therefore, this article is written to demystify the basics of the Drupal development workflow for a wide audience, including teams working from home as well as from the office. Moreover, after reading this article, you will be familiar with the development techniques we employ to achieve the best possible results and with how we adapt to alterations during development.

Forming a standard development workflow vis-à-vis specific project’s requirements

While perfect planning may seem like the key to a worthy goal, even the most perfect plan may require a change in the development workflow. A change in workflow strategy is not breaking news but part of the job that every organization goes through. Managing the development workflow helps organizations stay flexible and responsive to change without slowing down the work at hand.
Since different projects have different workflows, it is important for businesses to implement, plan, and fine-tune their architecture and development practices according to project-specific needs. Let's take a look at a few scenarios:

Microservices

Monolithic architecture is the traditional way of building applications. However, it makes changes harder to implement when the application is large and complex, because of tight coupling: any slight change in code affects the entire system, which often makes the overall development process much longer. In situations like this, a microservices architecture, which has a different workflow than a monolith, is required. Organizations that need a collection of smaller independent units rather than a single unified unit must shift their development workflow to microservices, where the entire application is divided into independent services owned by small separate teams.

Pattern Lab

Design-focused organizations may require a front-end framework, or a completely different workflow, that offers a convenient way to enforce component-driven design. Or imagine a project where design is the foremost concern and you are looking for a fresh way to create sophisticated designs. In both situations the standard or existing workflow might not hold up, and the development team has to look for a workflow that can serve as a hub for the design system. Pattern Lab can be used in these situations to create and maintain thoughtful UI designs. Teams that are well-equipped with Pattern Lab create reusable components, speeding up the workflow and saving considerable time and money in the process.

Decoupled Approach

In traditional Drupal websites, Drupal handles the front end and the back end single-handedly. In addition to being a robust content store, Drupal’s powerful front end takes care of your site design, behavior, usability, and management. It’s well suited to impressive visual designs and cutting-edge interactivity. However, with consumer touchpoints like connected devices and wearables taking center stage, organizations are looking to front-end technologies like React or Gatsby to deliver the best digital experiences to potential clients. To fulfill this need, organizations have to change their development workflow, because the workflow used for traditional Drupal websites may not suit a decoupled Drupal architecture. With separate teams working on the front end and the back end, the standard development workflow has to be tweaked so that both teams can work in parallel.

Having the right project management tools

Project management is an umbrella term for the application of knowledge, skills, tools, and techniques required to meet project requirements. Its main intent is to produce an end product that effects a beneficial change for the organization that instigated the project. It is, in essence, the initiation, planning, and control of the range of tasks required to deliver the end product, using various tools.

Project management tools are vast and can serve many different functions. For your consideration, we have listed down some of the most common project management tools which play a pivotal role in the Drupal development workflow. So, let’s give each one of them a quick look.

Confluence

Screenshot of confluence tool


Confluence, one of the most popular collaboration tools, helps you create, collaborate on, and organize all your work in one place. For teams of any size and type, from those running mission-critical projects to those looking for a space to build team culture, Confluence serves as a team workspace where knowledge and collaboration meet in an open and authentic way. Organizations that use Confluence are able to make quick decisions, gain alignment, and accomplish more together. Some of its key features include:

Page: This is the place where your content lives. This feature allows you to create pages for almost anything, starting from project plans to meeting notes, troubleshooting guides, policies, and much more. 

Space: Pages you create are stored in spaces: workspaces where you can collaborate on work and keep all your content organized. You can create as many or as few spaces as you need, but it is usually best to group related content together in the same space.

Page Tree: This feature organizes a space's content in a hierarchical tree so work can be found quickly and easily. Pages can be nested under related pages to organize content in just about any way.

Bitbucket

Screenshot of bitbucket tool


Bitbucket is a worthy competitor to GitHub that supports professional teams across operating systems. Part of the Atlassian family alongside tools like Confluence and Jira, Bitbucket is built to give technical teams the support to realize their full potential. It offers organizations a central place to manage git repositories, collaborate on source code, and guide the development flow. A great way to extract maximum advantage from everything Bitbucket offers is to integrate it with your task management software. Bitbucket is available in three deployment options: Bitbucket Cloud, Bitbucket Data Center, and Bitbucket Server. Beyond these advantages, it also offers some notable features:

  • Access control: This feature allows you to restrict access to your source code.
  • Workflow control: Using this feature, you can enforce a project or team workflow.
  • Pull requests: Can be used with in-line commenting for collaboration on code review.
  • Jira integration: This feature gives access to full development traceability.
  • Full REST API: Provides easy access to building features custom to your workflow if they are not already available from the Atlassian Marketplace.

Jira

Screenshot of Jira tool


Originally built to track and manage bugs in software development, Jira has become a popular agile project management tool that helps teams manage their work under a single roof. Products and applications built on the Jira platform help teams plan, assign, track, report, and manage work. If you wish to bring your team together for everything, from agile software development and customer support to managing shopping lists and family chores, Jira is your tool. It is a family of products and deployment options purpose-built for software, IT, business, and ops teams, and more.

Three products have been built on the Jira platform: Jira Software, Jira Service Desk, and Jira Core. Each comes with built-in templates for different use cases and integrates seamlessly, so teams across organizations can work better together.

Jira Software: Used to plan, track, and release world-class software.

Jira Service Desk: Used to give customers an easy way to ask for help and your agents a faster way to deliver it.

Jira Core: Used to manage the business projects including marketing campaigns, HR onboarding, approvals, and legal document reviews.

Implementing a CI/CD Pipeline

A CI/CD (Continuous Integration/Continuous Deployment) pipeline is often called the backbone of the modern DevOps environment. The pipeline bridges the gap between development and operations teams by automating the building, testing, and deployment of applications.
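A minimal pipeline definition makes this concrete. For a Drupal project hosted on Bitbucket (discussed above), the pipeline could be described in a bitbucket-pipelines.yml file like the sketch below; the PHP image, paths, and tool choices are illustrative assumptions, not taken from a specific project:

```yaml
image: php:7.4-cli

pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          # Install system dependencies and Composer
          - apt-get update && apt-get install -y git unzip
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          # Build the project and run the automated checks
          - composer install --no-interaction --prefer-dist
          - vendor/bin/phpunit
          - vendor/bin/drupal-check web/modules/custom
```

Every push then triggers the same build and checks, so problems surface before code is merged rather than after deployment.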

In DevOps, a continuous and automated delivery cycle acts as a catalyst that makes fast and reliable delivery possible. As a result, there is a need for proper continuous integration and continuous delivery (CI/CD) tools. The marketplace is flooded with a wide variety of CI/CD tools that can be used to speed up delivery and ensure product quality. Some of the tools are given below:

OpenDevShop

With a front end built in Drupal (Devmaster) and a back end built with Drush, Symfony, and Ansible, DevShop is a "cloud hosting" system intended for Drupal users that makes it easy to host, develop, test, and update Drupal sites.

DevShop uses git to deploy sites, allowing you to create unlimited environments for each site, and makes it easy to deploy any branch or tag to each environment. Data, including databases and files, can be copied between environments. Further, you can run the built-in hooks whenever code or data is deployed, or write your own.

Jenkins

Built with Java, Jenkins is an open-source automation server in which the central build and continuous integration process takes place, regardless of the platform you are working with. With Jenkins, organizations can accelerate the software development process by automating it. This powerful open-source server can manage and control software delivery processes throughout the entire lifecycle, including build, documentation, test, packaging, staging, deployment, static code analysis, and much more.

Checks during Pre-Code merge

Certain CI/CD tools are used by organizations to help development teams integrate code more easily and find bugs before they are released into production.

-   Code Review 

It refers to a systematic examination of code to find and remove vulnerabilities that are often not readily apparent when compiled, such as memory leaks and buffer overflows.

Example: SonarQube, an automatic code review tool that is helpful for detecting bugs, vulnerabilities, and code smells in your code.

- Drupal 9 Deprecation

Instead of being developed from scratch on its own git branch, Drupal 9 was built in Drupal 8. The work involved removing all deprecated APIs and marking the remaining small number of deprecations for removal in Drupal 10 instead.

Example: Drupal Check, a standalone PHP executable you run from the command line to get a report of any deprecated code in use.

- Security Checks 

It refers to reviewing a Drupal-based site to get a high-level overview of the site's security posture and avoid future vulnerabilities or threats.

Example: Snyk, a security tool that developers use and love. It helps software-driven businesses to develop fast and stay secure. 

- Prod code performance reviews

It refers to the process wherein errors are detected that might have been produced during the development process and can potentially hinder workflow productivity. 

Example: Sentry, an application monitoring tool that helps developers diagnose, fix, and optimize the performance of their code.

Checks during Post-code merge

Just like the pre-merge checks, certain tools help development teams examine and gain insight into the overall state of the application after code is merged.

-  Performance 

It refers to the process of determining the speed, response time, stability, reliability, scalability, and resource usage under a workload.

Example: Sitespeed.io, a set of open-source tools that allows you to monitor and measure the performance of your website with ease.

-  Accessibility 

It refers to the process of ensuring that the site is usable by people with disabilities such as hearing impairments and color blindness, by older users, and by other disadvantaged groups.

Example: Pa11y, a command-line interface that loads web pages and highlights any accessibility issues. It is quite useful for running a one-off test against a web page.

- Warnings/Errors/Nginx

The process of identifying and configuring the logging of warnings and errors that can then be used to debug your application or website.

Example: Jenkins Next Generation Warnings, a plugin that collects compiler warnings and issues reported by static analysis tools and visualizes the results.

- Periodic load tests on API level

It refers to checking whether your application is robust enough to handle the load you expect, before your users find that out for you.

Example: k6, an open-source load testing tool for catching performance regressions and problems at an early stage, allowing you to build resilient systems and robust applications.

Conclusion

To conclude, we hope this article has given you a good idea of what a development workflow is and how you can use it to run your business. To be upfront, planning a development workflow for Drupal 8 projects takes some effort and time, but it pays considerable dividends. Organizations with a good workflow see returns in increased productivity, reduced stress, and a better quality of working life in general. All you need to do is give yourself some space to get started and make gradual improvements over time. Doing so will help you reap the benefits sooner rather than later.

Want to know how to automate your own development workflow? Feel free to contact us at [email protected] and our industry experts will help you optimize your development workflow. 

Oct 17 2020
Drupal 8 will be released on November 19 | Wunderkraut

Coincidence?

We're ready to celebrate and build (even more) amazing Drupal 8 websites. 
On November 19 we'll put our Drupal 8 websites in the spotlight...be sure to come back and check out our website.

By

Michèle Weisz


Want to know more?

Contact us today

or call us +32 (0)3 298 69 98

© 2015 Wunderkraut Benelux

Oct 17 2020
77 of us are going | Wunderkraut

Drupalcon 2015

People from across the globe who use, develop, design and support the Drupal platform will be brought together during a full week dedicated to networking, Drupal 8 and sharing and growing Drupal skills.

As we have active hiring plans we’ve decided that this year’s approach should have a focus on meeting people who might want to work for Wunderkraut and getting Drupal 8 out into the world.
As Signature Supporting Partner we wanted as many people as possible to attend the event. We managed to get 77 Wunderkrauts on the plane to Barcelona! From Belgium alone we have an attendance of 17 people.
The majority of our developers will be participating in sprints (a get-together for focused development work on a Drupal project) giving all they got together with all other contributors at DrupalCon.

We look forward to an active DrupalCon week.  
If you're at DrupalCon and feel like talking to us, just look for the folks in Wunderkraut carrot t-shirts or give Jo a call on his cell phone +32 476 945 176.


Oct 17 2020
Watch our epic Drupal 8 promo video | Wunderkraut

How Wunderkraut feels about Drupal 8

Drupal 8 is coming and everyone is sprinting hard to get it over the finish line. To boost contributor morale we’ve made a motivational Drupal 8 video that will get them into the zone and tackling those last critical issues in no time.

[embedded content]


Oct 17 2020

Once again Heritage Day was a huge success.

About 400 000 visitors visited Flanders' monuments and heritage sites last Sunday. The Open Monumentendag website received more than double the number of last year's visitors.

Visitors to the website organised their day out by using the powerful search tool we built, which allowed them to search for activities and sights at their desired location. Not only could they search by location (province, zip code, city name, km range) but also by activity type, keywords, category and accessibility. Each search request was added as a (removable) filter for finding the perfect activity.

By clicking on the heart icon next to each activity, visitors drew up a favorites list, ready for printing and taking along as a route map.

Our support team monitored the website making sure visitors had a great digital experience for a good start to the day's activities.

Did you experience the ease of use of the Open Monumentendag website?  Are you curious about the know-how we applied for this project?  Read our Open Monumentendag case.

Oct 17 2020
Very proud to be a part of it | Wunderkraut

Breaking ground as Drupal's first Signature Supporting Partner

Drupal Association Executive Director Holly Ross is thrilled that Wunderkraut is joining as first and says: "Their support for the Association and the project is, and has always been, top-notch. This is another great expression of how much Wunderkraut believes in the incredible work our community does."

As Drupal Signature Supporting Partner we commit ourselves to advancing the Drupal project and empowering the Drupal community. We're very proud to be a part of it, as we enjoy contributing to the Drupal ecosystem (especially when we can be quirky and fun, as CEO Vesa Palmu states).

Our contribution allowed the Drupal Association to:

  • Complete Drupal.org's D7 upgrade - now they can enhance new features
  • Hire a full engineering team committed to improving Drupal.org infrastructure
  • Set the roadmap for Drupal.org success.

First signature partner announcement in the Drupal Newsletter

By

Michèle Weisz


Oct 17 2020

But in this post I'd like to talk about one of the disadvantages that here at Wunderkraut we pay close attention to.

A consequence of the ability to build features in more than one way is that it's difficult to predict how different people interact (or want to interact) with them. As a result, companies end up delivering solutions to their clients that, although they seem perfect, turn out in time to be less than ideal and sometimes outright counterproductive.

Great communication with the client and interest in their problems goes a long way towards minimising this effect. But sometimes clients realise that certain implementations are not perfect and could be made better. And when that happens, we are there to listen, adapt and reshape future solutions by taking into account these experiences. 

One such recent example involved the use of a certain WYSIWYG library from our toolkit on a client website. Content editors were initially happy with the implementation, before they actually started using it to the full extent. Problems began to emerge, leading to editors spending far more time than they should on editing tasks. The client signalled this problem to us, which we then proceeded to correct by replacing said library. As a result our client became happier with the solution, much more productive, and less frustrated with the experience on their site.

We learned an important lesson in this process and we started using that new library on other sites as well. Polling our other clients on the performance of the new library revealed that indeed it was a good change to make. 

Oct 17 2020

A few years ago most of the requests started with : "Dear Wunderkraut, we want to build a new website and ... "  - nowadays we are addressed as "Dear Wunderkraut, we have x websites in Drupal and are very happy with that, but we are now looking for a reliable partner to support & host ... ".

By the year 2011 Drupal had been around for just about 10 years. It was growing and changing at a fast pace. More and more websites were being built with it. Increasing numbers of people were requesting help and support with their website. And though there were a number of companies flourishing in Drupal business, few considered specific Drupal support an interesting market segment. Throughout 2011 Wunderkraut Benelux (formerly known as Krimson) was tinkering with the idea of offering support, but it was only when Drupal newbie Jurgen Verhasselt arrived at the company in 2012 that the idea really took shape.

Before his arrival, six different people, all with different profiles, were handling customer support in a weekly rotation system. This worked poorly: a developer trying to get his own job done while dealing with a customer issue at the same time got neither job done properly. Tickets got lost or forgotten, customers felt frustrated and problems were not always fixed. We knew we could do better. The job required uninterrupted dedication and constant follow-up.

That’s where Jurgen came in the picture. After years of day job experience in the graphic sector and nights spent on Drupal he came to work at Wunderkraut and seized the opportunity to dedicate himself entirely to Drupal support. Within a couple of weeks his coworkers had handed over all their cases. They were relieved, he was excited! And most importantly, our customers were being assisted on a constant and reliable basis.

By the end of 2012 the first important change was brought about, i.e. to have Jurgen work closely with colleague Stijn Vanden Brande, our Sys Admin. This team of two ensured that many of the problems that arose could be solved extremely efficiently. Wunderkraut being the hosting party as well as the Drupal party means that no needless discussions with the hosting took place and moreover, the hosting environment was well-known. This meant we could find solutions with little loss of time, as we know that time is an important factor when a customer is under pressure to deliver.

In the course of 2013 our support system went from a well-meaning but improvised attempt to help customers in need to a fully qualified division within our company. What changed? We decided to classify customer support issues into: questions, incidents/problems and change requests and incorporated ITIL based best practices. In this way we created a dedicated Service Desk which acts as a Single Point of Contact after Warranty. This enabled us to offer clearly differing support models based on the diverse needs of our customers (more details about this here). In addition, we adopted customer support software and industry standard monitoring tools. We’ve been improving ever since, thanks to the large amount of input we receive from our trusted customers. Since 2013, Danny and Tim have joined our superb support squad and we’re looking to grow more in the months to come.

When customers call us for support we do quite a bit more than just fix the problem at hand. First and foremost, we listen carefully and double-check everything to ensure that we understand the customer correctly. This helps take the edge off the huge pressure they may be experiencing. Beyond that, we follow a list of dos and don'ts for valuable support.

  • Do a quick scan of possible causes by getting a clear understanding of the symptoms
  • Do look for the cause of course, but also assess possible quick-fixes and workarounds to give yourself time to solve the underlying issue
  • Do check if it’s a PEBKAC (problem exists between keyboard and chair)
  • and finally, do test everything within the realm of reason.

The most basic don'ts that we swear by are:

  • never, ever apply changes to the foundation of a project.
  • Support never covers a problem that takes more than two days to fix. At that point we escalate to development.

We are so dedicated to offering superior support that, on explicit request, we also cater to our customers' customers. Needless to say, our commitment to support has yielded remarkable results and plenty of customer satisfaction (which makes us happy, too).

Oct 17 2020

If your website is running Drupal 6, chances are it's between 3 and 6 years old now, and once Drupal 8 comes out, support for Drupal 6 will drop. Luckily the support window has recently been prolonged for another 3 months after Drupal 8's release. Still, that leaves you only a small window of time to migrate to the latest and greatest. But why would you?

There are many great things about Drupal 8 with something for everyone to love, but that should not be the only reason to upgrade. The tool itself will not magically improve traffic to your site, nor convert its users into buying more; it's how you use the tool that matters.

So if your site is running Drupal 6 and hasn't had large improvements in recent years, it might be time to investigate whether it needs a major overhaul to be up to par with the competition. If that's the case, think about brand, concept, design, UX and all of that first to understand how your site should work and what it should look like; only then can we decide whether to go for Drupal 7 or Drupal 8.

If your site is still running well you might not even need to upgrade! Although community support for Drupal 6 will end a few months after Drupal 8 release, we will continue to support Drupal 6 sites and work with you to fix any security issues we encounter and collaborate with the Drupal Security Team to provide patches.

My rule of thumb is that if your site uses only core Drupal and a small set of contributed modules, it's fine to build a new website on Drupal 8 once it comes out. But if you have a complex website running on many contributed and custom modules, it might be better to wait a few months, maybe a year, until everything becomes stable.

Oct 17 2020

So how does customer journey mapping work?

In this somewhat simplified example, we map the customer journey of somebody signing up for an online course. If you want to follow along with your own use case, pick an important target audience and a customer journey that you know is problematic for the customer.

1. Plot the customer steps in the journey

customer journey map 1

Write down the series of steps a client takes to complete this journey. For example “requests brochure”, “receives brochure”, “visits the website for more information”, etc. Put each step on a coloured sticky note.

2. Define the interactions with your organisation

customer journey map 2

Next, for each step, determine which people and groups the customer interacts with, like the marketing department, copywriter and designer, customer service agent, etc. Do the same for all objects and systems that the client encounters, like the brochure, website and email messages. You’ve now mapped out all people, groups, systems and objects that the customer interacts with during this particular journey.

3. Draw the line

customer journey map 3

Draw a line under the sticky notes. Everything above the line is “on stage”, visible to your customers.

4. Map what happens behind the curtains

customer journey map 4

Now we’ll plot the backstage parts. Use sticky notes of a different color and collect the people, groups, actions, objects and systems that support the on-stage part of the journey. In this example these would be the marketing team that produces the brochure, the printer, the mail delivery partner, the website content team, IT departments, etc. This backstage part is usually more complex than the on-stage part.

5. How do people feel about this?

Customer journey map 5

Now we get to the crucial part. Mark the parts that work well from the perspective of the person interacting with it with green dots. Mark the parts where people start to feel unhappy with yellow dots. Mark the parts where people get really frustrated with red. What you’ll probably see now is that your client starts to feel unhappy much sooner than employees or partners. It could well be that on the inside people are perfectly happy with how things work while the customer gets frustrated.

What does this give you?

Through this process you can immediately start discovering and solving customer experience issues because you now have:

  • A user centred perspective on your entire service/product offering
  • A good view on opportunities for innovation and improvement
  • Clarity about which parts of the organisation can be made responsible to produce those improvements
  • In a shareable format that is easy to understand

Mapping your customer journey is an important first step towards customer-centred thinking and acting. The challenge is learning to see things from your customers' perspective, and that's exactly what a customer journey map enables you to do. Based on the opportunities you identified from the customer journey map, you'll want to start integrating the multitude of digital channels, tools and technology already in use into a cohesive platform. In short: a platform for digital experience management! That's the topic for our next post.

Oct 17 2020

In combination with the FacetAPI module, which allows you to easily configure a block or a pane with facet links, we created a page displaying search results containing contact type content and a facets block on the left hand side to narrow down those results.

One of the struggles with FacetAPI is the URLs of the individual facets. While Drupal turns the ugly GET 'q' parameter into clean URLs, FacetAPI just concatenates any extra query parameters, which leads to Real Ugly Paths. The FacetAPI Pretty Paths module tries to change that by rewriting them into human-friendly URLs.

Our challenge involved altering the paths generated by the facets, but with a slight twist.

Due to the project's architecture, we were forced to replace the full view mode of a node of the bundle type "contact" with a single search result based on the nid of the visited node. This was a cheap way to avoid duplicating functionality and wasting precious time. We used the CTools custom page manager to take over the node/% page and added a variant triggered by a selection rule based on the bundle type. The variant itself doesn't use the panels renderer but redirects the visitor to the Solr page, passing the nid as an extra argument in the URL. This resulted in a path like this: /contacts?contact=1234.

With this snippet, the contact query parameter is passed to Solr which yields the exact result we need.

/**
 * Implements hook_apachesolr_query_alter().
 */
function myproject_apachesolr_query_alter($query) {
  if (!empty($_GET['contact'])) {
    // Cast to an integer so user input cannot inject arbitrary filter values.
    $query->addFilter('entity_id', (int) $_GET['contact']);
  }
}

The result page with our single search result still contains facets in a sidebar. Moreover, the URLs of those facets looked like this: /contacts?contact=1234&f[0]=im_field_myfield..... Now we faced a new problem. The ?contact=1234 part was conflicting with the rest of the search query. This resulted in an empty result page, whenever our single search result, node 1234, didn't match with the rest of the search query! So, we had to alter the paths of the individual facets, to make them look like this: /contacts?f[0]=im_field_myfield.

This is how I approached the problem.

If you look carefully in the API documentation, you won't find any hooks that allow you to directly alter the URLs of the facets. Gutting the FacetAPI module is quite daunting. I started looking for undocumented hooks, but quickly abandoned that approach. Then, I realised that FacetAPI Pretty Paths actually does what we wanted: alter the paths of the facets to make them look, well, pretty! I just had to figure out how it worked and emulate its behaviour in our own module.

Turns out that most of the facet-generating functionality is contained in a set of adaptable, loosely coupled, extensible classes registered as CTools plugin handlers. Great! This means that I just had to find the relevant class, extend it, and override the right methods with our custom logic.

Facet URLs are generated by classes extending the abstract FacetapiUrlProcessor class. The FacetapiUrlProcessorStandard class extends the base class, implements its abstract methods, and already does all of the heavy lifting, so I decided to take it from there. I just had to create a new class, implement the right methods and register it as a plugin. In the folder of my custom module, I created a new folder plugins/facetapi containing a new file called url_processor_myproject.inc. This is my class:

/**
 * @file
 * A custom URL processor for MyProject.
 */

/**
 * Extension of FacetapiUrlProcessor.
 */
class FacetapiUrlProcessorMyProject extends FacetapiUrlProcessorStandard {

  /**
   * Overrides FacetapiUrlProcessorStandard::normalizeParams().
   *
   * Strips the "q" and "page" variables from the params array.
   * Custom: Strips the 'contact' variable from the params array too
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}

I registered my new URL Processor by implementing hook_facetapi_url_processors in the myproject.module file.

/**
 * Implements hook_facetapi_url_processors().
 */
function myproject_facetapi_url_processors() {
  return array(
    'myproject' => array(
      'handler' => array(
        'label' => t('MyProject'),
        'class' => 'FacetapiUrlProcessorMyProject',
      ),
    ),
  );
}

I also included the .inc file in the myproject.info file:

files[] = plugins/facetapi/url_processor_myproject.inc

Now I had a new registered URL Processor handler. But I still needed to hook it up with the correct Solr searcher on which the FacetAPI relies to generate facets. hook_facetapi_searcher_info_alter allows you to override the searcher definition and tell the searcher to use your new custom URL processor rather than the standard URL processor. This is the implementation in myproject.module:

/**
 * Implements hook_facetapi_searcher_info_alter().
 */
function myproject_facetapi_searcher_info_alter(array &$searcher_info) {
  foreach ($searcher_info as &$info) {
    $info['url processor'] = 'myproject';
  }
}

After clearing the cache, the correct path was generated per facet. Great! Of course, the paths still don't look pretty and contain those way too visible and way too ugly query parameters. We could enable the FacetAPI Pretty Paths module, but since the searcher uses either one URL processor class or the other, not both, it would conflict with our own. One way to solve this problem would be to extend the FacetapiUrlProcessorPrettyPaths class instead, since it is derived from the same FacetapiUrlProcessorStandard base class, and override its normalizeParams() method.
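As a hedged sketch of that alternative (class name invented here, and untested against Pretty Paths itself), the same override could be applied on top of the Pretty Paths processor:

```php
/**
 * Sketch: extension of FacetapiUrlProcessorPrettyPaths that also strips
 * the 'contact' parameter, keeping pretty paths and our custom behaviour.
 */
class FacetapiUrlProcessorMyProjectPretty extends FacetapiUrlProcessorPrettyPaths {

  /**
   * Overrides normalizeParams() to strip 'q', 'page' and 'contact'.
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}
```

The class would still need to be registered via hook_facetapi_url_processors() and listed in the .info file, just like the standard-based processor above.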

But that's another story.

Oct 16 2020

Our approach to fixing issues caused by the cookie changes in Chrome 80 and above

There’s plenty of articles out there explaining what the changes are (https://blog.chromium.org/2020/02/samesite-cookie-changes-in-february.html), why they’ve been done (https://www.troyhunt.com/promiscuous-cookies-and-their-impending-death-via-the-samesite-policy) and how to ‘theoretically’ fix them with simple code examples, but we haven’t stumbled upon many articles explaining ‘practical’ solutions to apply to a Drupal site to actually fix the issues that arise due to the stricter cookie policies implemented since the Chrome 80 release.

We’ve faced two major issues so far with the changes in Chrome:


1. Payment gateways failing to return to the site correctly, with people being ‘logged out’ or losing their checkout session when redirected back from the gateway
 
This happens because the redirect back from the gateway is a cross-site request. If the session cookie’s SameSite attribute is not set to ‘None’ (and Secure), Chrome withholds that cookie upon returning to the site, so you are no longer ‘logged in’ or no longer on the ‘checkout’ path you were on beforehand. 
This means that any logic that would normally run upon returning from the payment gateway (e.g. changing order status, sending emails, sending data to external CRMs) no longer gets triggered.


See Drupal Commerce issue for more information on the problem:
https://www.drupal.org/project/commerce/issues/3051241
 
It’s worth noting, though, that not all payment gateways faced this issue.
 
2. Some SSO logins no longer working correctly, due to the same issue of cookies not being sent in a third-party context.

Depending on what PHP version you’re running, there are two possible solutions we’ve found. However, with PHP 7.2 reaching end of life and hopefully many people updating to 7.3 (blog post coming soon on how we tackled the upgrade of more than 50 sites to 7.3), the easy first solution will be available to most of you. 

PHP >= 7.3

We’ve found that the simplest solution to overcome this issue without having to install any contrib modules or do any custom code is to add the following lines to the site’s settings.php file:
 

ini_set('session.cookie_secure', 1);
ini_set('session.cookie_samesite', 'None');

 
However, the `session.cookie_samesite` directive is only available in PHP 7.3 or above (https://php.watch/articles/PHP-Samesite-cookies).
 
This means for older versions of PHP you’ll either have to install a contrib module or set the cookies yourself.
 
This shouldn’t be a problem for much longer as PHP 7.2 is end of life and people should be updating to 7.3 or above anyway.
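For sites that run on a mix of PHP versions, the directive can be guarded so it is only applied where it is supported. A minimal settings.php sketch:

```php
// settings.php: only set the SameSite directive where PHP supports it (>= 7.3).
if (PHP_VERSION_ID >= 70300) {
  ini_set('session.cookie_secure', 1);
  ini_set('session.cookie_samesite', 'None');
}
```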

PHP < 7.3

If you’re still running an older version of PHP then you could use this Drupal contributed module, which updates the site’s cookies to have the SameSite attribute set to None:
https://www.drupal.org/project/cookie_samesite_support
 
The alternative is to create a custom solution similar to that of the module above.
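If you do roll your own, a commonly used workaround on PHP < 7.3 (and roughly what such modules do) is to smuggle the attribute in through the cookie path, since session_set_cookie_params() only gained a 'samesite' option in 7.3. A hedged sketch, to run before the session starts:

```php
// PHP < 7.3 sketch: append SameSite=None to the session cookie via the
// path parameter, which PHP writes verbatim into the Set-Cookie header.
// The 'secure' flag (4th argument) must be TRUE for SameSite=None to work.
if (PHP_VERSION_ID < 70300) {
  session_set_cookie_params(0, '/; SameSite=None', '', TRUE, TRUE);
}
```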
 

Oct 16 2020

The concept of atomic design was introduced by Brad Frost, and it helped accelerate the process of creating modular designs. The universe is made up of a set of elements that are the building blocks of everything around us, known to us through the periodic table of elements. All these elements have fixed properties that define them. Just as these elements combine to form the universe, a design system is created by combining the different elements of atomic design.

Let’s dive into these elements and understand the process!

The elements of Atomic design written on a blue background

Atoms

We have all read about atoms in chemistry. They are the building blocks of matter. Every chemical element has different properties, and atoms cannot be broken down without losing their meaning. Now, if we relate the same thing to our design system, atoms are the foundational building blocks of the user interface. These atoms are basic HTML elements, for example buttons, labels, spacings, etc., that cannot be broken down further without losing their function.

Different elements of a search option

Molecules

In chemistry, molecules are a group of atoms bonded together that can have different properties. Two molecules made of the same atoms can behave differently and show distinct properties. In the same way, molecules in the user interface are a group of elements working together. For example, if we put a label, a button, and a search input together, we get the search form molecule. Once combined, they have a purpose and give meaning to the atoms that were put together. To construct a user interface, we assemble elements to form molecules like the search molecule.
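As a hypothetical illustration (component paths and variable names are made up), a search form molecule in a component-based Twig setup might simply compose its atom templates:

```twig
{# molecules/search-form/search-form.twig: a molecule composed of atoms. #}
<form class="search-form" action="{{ action }}" method="get">
  {% include '@atoms/label/label.twig' with { text: 'Search' } %}
  {% include '@atoms/input/input.twig' with { type: 'search', name: 'q' } %}
  {% include '@atoms/button/button.twig' with { text: 'Go' } %}
</form>
```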

Search bar

Organisms

Organisms are comparatively complex UI components. They form a distinct section of an interface. The organisms in the design system are a group of molecules. The search molecule that is created by a group of atoms can be combined with another molecule that creates complete page navigation to make an organism. 

The organisms can have similar or different molecule types. They can be a search form or a logo image, etc.

Search bar with logo framework

Templates 

The chemistry analogy ends here. You just read about the basic structure of a design system i.e. the atoms, molecules, and organisms. Now let's see how they can be used to create a consistent product.

At this point we see the design coming together to form pages. Templates articulate the design's content structure and place components into a layout. While crafting a design system, it is important to see how the components look and function when put together. You basically create a skeleton of a page. An important characteristic of templates is that they focus on the page’s underlying content structure rather than its final content.

Wireframe of a webpage

Pages

Pages show what the UI looks like with everything in place.
This is the most concrete stage of all: we can see how the page looks when real content is applied, and check that everything looks and functions as planned.

Home page of a website

Why go for atomic design?

Atomic design involves breaking a website down into the elements we just talked about and then building it back up. Now that we have understood them, let’s understand why atomic design should be used and how it makes our life easier.

Mixing and matching components

When components are broken down, it becomes easy for you to understand what part of the website can be reused and mixed to form molecules and organisms.

Creating a style guide

If you have created your site according to the atomic design guidelines, then the atoms and molecules created before the site is built can serve as a basic style guide.

Consistent code

The chance of writing duplicate code is reduced with atomic design. It also becomes easy to understand which components are used for different parts of the website.

Understandable layout

The documentation of atoms, molecules, and organisms and where they are being used makes it easy to understand what each part of the code is representing. The best part is that it is much easier to explain the codebase to a new developer.

Quick prototyping

Making a list of elements before the website creation starts helps you mockup pages instantly by combining the elements of the page.

Fewer components

If the website creator has a list of all the atoms and molecules, they are more likely to use existing elements rather than creating new ones.

Easy update and removal

With one atom, molecule, or organism changing at a time, it is easier for any update to be done across all other instances of the site. In the same way, the removal of unwanted components becomes easy.

Implementation of atomic design

Getting started with atomic design requires you to understand every part of it and divide the interface into its basic elements: atoms, molecules, and organisms. You can always start by conducting an audit. This will help you spot discrepancies and areas that are lacking, and will make sure that you and your team are familiar with the structure of the website. 

After you have conducted the audit, look for a platform where you can build your design system. Take help from the developers to know which tools will be most efficient, and make sure that the tool you choose is easily accessible by everyone in the team. 

Now that you have a good understanding of the design system and its principles, you can start building your website. Start by the construction of components piece by piece and don’t forget to document what page every component is for. Read about web components and component-based development to know more.

Conclusion

Atomic design provides a clear methodology for creating a design system. It makes sure that we are explicit with our creations, that our designs are more manageable and consistent, and that the user interface comes together faster than ever before. For a faster and more efficient user interface for your website, contact us at [email protected]  

Oct 15 2020

Until recently, content management systems essentially fell into one of two camps:

  • Intuitive, easy-to-create-and-manage SaaS solutions, such as Wix and Squarespace, or
  • Flexible and scalable solutions, such as Drupal, for websites with complex data models and a depth of content.  

Quantum leaps in functionality that leverage component-based web design systems have broken down the wall between these two camps. This gap can now be bridged within Drupal, allowing for drag and drop content management capabilities. 

Promet’s strategy for achieving this new paradigm incorporates two open-source solutions: Emulsify, a component-driven Drupal theme, and Provus, our Drupal kickstart  that packages commonly used features into reusable components for a drag-and-drop editing experience. 

During Drupal GovCon last month, Aaron Couch and I delivered a presentation on Drag and Drop Content Management Systems for Government Websites. The types of requirements that we addressed during this presentation are, of course, not unique to government agencies. 

Key among them: 

  • Improve digital services across the board, 
  • Push more transactions and interactions online for purposes of reducing costs and ensuring agility,
  • Meet increasingly high expectations among citizens and customers,
  • Communicate changes quickly, and 
  • Enhance efficiencies by leveraging the potential of “Create Once Publish Everywhere” (COPE). 

 

Foundations of Component-Based Theming

This new world of drag-and-drop possibilities is built upon the realization that despite differences in content modeling, the pages of most websites consist of various combinations of the same “things”: media galleries, lists of content, cards, search, maps, and social media. We call these things “components.”

With Promet's new approach to component-driven theming, new potential has emerged for customizable content models that are built with easily “themable” and reusable components, that are then stored in a library that serves as an efficient starting point for projects. 
 

Fueled by Emulsify, Layout Builder, and Provus

While there are a number of component-based themes for Drupal, we’ve singled out the Emulsify® design system as a Drupal starter theme that gives us a huge lift in building reusable components. Emulsify functions as an example of Atomic Design, which refers to a methodology for envisioning the site as a collection of components, based on graduated building blocks from Atoms, to Molecules, to Organisms, to Templates, to Pages. 

This capability combines both a starter component library and Storybook, which is a tool for building user interface components and supports both Twig and React. Storybook can be turned on from within the Emulsify theme, resulting in a highly efficient new workflow for projects. For more on Storybook, check out a recent Drupal GovCon presentation entitled, Component Driven Theming with Storybook, that Promet’s Aaron Couch participated in, along with Emulsify contributors Brian Lewis and Evan Willhite.

Drupal Layout Builder is a drag-and-drop page-building tool in Drupal Core that allows content editors to build out sections of the site. Layout Builder can be implemented as a no-code, site-building tool that retains the features of Drupal, such as flexible content model, revisioning, and custom user permissions and workflows. 

Developed by Promet, Provus provides a kickstart to Drupal site building, packaging commonly used features such as calendaring, FAQs, people, news, and blogs with a drag-and-drop content editing experience. Provus also offers a large array of site-building components, like galleries, maps, and content lists, that can be added and arranged by non-technical users, enabling them to build and edit a site within an inherently flexible content model. Provus is open source and we welcome the Drupal community’s contributions to the project.

Beyond Traditional Drupal Theming

Having identified that component-based theming tools are key to next-level efficiencies in website building, our next step was to single out an optimal approach for delivering reusable components. 

Traditional Drupal theming includes CSS and JavaScript selectors that are intertwined with their context, connecting them to the back-end implementation. The result of this “theme for the page” approach is assets that can’t be reused across projects. 

With a component-based approach, however, selectors are isolated and can be reused. 

Let’s take an example of creating front-end assets for a typical carousel component. In the example below, the assets are tied to the Drupal implementation and combined with other elements. They are not only tied to a paragraph, but to the specific name of that paragraph. Reusing this is difficult if there are any changes to the data model, and a flexible data model is exactly what we want to build.

Traditional Drupal Theming

 

Component-Based Solution

With a component-based approach, on the other hand, all of the assets including the js, css, and template are grouped together. As indicated below, the selectors only reference the elements in the template, so they are isolated within a coherent whole.

component based solution

Making component-based theming work in Drupal can simply be a matter of embedding. We are able to leverage Twig’s ability to embed or include templates to render the component. If the back-end implementation changes, or we want to move it to another project, the component itself is not changed. 

Below is an example of embedding the previous component. In this case, changing the block type or backend implementation simply requires embedding the same component.

Component based solution in Drupal
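In Twig, that embedding can look something like the following sketch (the component path and field name are hypothetical, not taken from the examples above):

```twig
{# paragraph--carousel.html.twig: map the Drupal data onto the reusable
   component; if the back end changes, only this mapping changes. #}
{% include '@components/carousel/carousel.twig' with {
  items: content.field_slides
} only %}
```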

Drag-and-Drop Content Management Systems represent a new level of advantages and possibilities for development teams, and even more so for clients. At Promet Source, we’re excited about the approach to component-based theming that we’ve developed, and looking forward to the conversations and amazing new solutions that lie ahead for clients. 

Interested in learning more about the potential for drag-and-drop content management systems and what this can mean for you? Contact us today.

Oct 15 2020

Websites have become a necessity in the 21st century. From gathering traffic to creating buzz around a brand and consequently driving higher sales figures, websites can do it all. However, not all websites manage that. A successful website is often considered one with the best user experience and performance; falling short on either can doom the site. And when you combine impressive UX and performance with overall aesthetic and product quality, success is almost guaranteed.

So, the process of building a website has to be done with the utmost care and expertise, because doing a half-hearted job at first and regretting it later would not be prudent for anyone. The build and functionality of a website depend entirely on its developers and the CMS at work. Choosing the right architectural approach should be the first priority for all business owners and site builders. 

Getting into the architectural aspect brings me to the two variants that are most popular amongst Drupal websites: the monolithic, or traditional, architecture and the decoupled architecture. This article will accentuate all the areas of these two and discuss their aptness for different business needs. 

Introducing the Contenders: Monolithic and Decoupled Drupal Architectures

Drupal is a pretty versatile content management system; it can build your website for you, taking care of each and every aspect of it. That should sum up Drupal’s capabilities, yet it doesn’t. With the different architectural options available, Drupal has become the embodiment of versatility, and the choice between monolithic and decoupled forms of Drupal architecture is proof of just that.

Starting with monolithic architecture: in software engineering terms, it means a single piece of software that handles every aspect of the work done on it or with it. A monolithic architecture is single-tiered, wherein the UI and the data access code are merged into one platform. And that is what monolithic, or traditional, Drupal architecture means. Coupled Drupal is well-equipped to get a website up and running, both in terms of the front end and the back end. You would not need to rely on anything else when taking the monolithic approach; you would have all the Drupal resources in your corner, as your project would be nestled entirely inside it.

Drupal is highly regarded in the community for its back-end capabilities, but its front end deserves equal regard. Using Drupal’s many behaviours, themes and templates, you can easily create impressive visual designs that also provide state-of-the-art interactivity in your presentation layer, customisable on the go. And there is more: any problem with your site’s design, behaviour, usability and management can easily be tackled by Drupal’s front-end aspects. So it won’t be wrong to presume that traditional Drupal is a versatile piece of software easing the process of building web applications. 

However, if you want to tap into the other frontend technologies, then there is a possibility for that as well, with the use of Decoupled Drupal architecture.

In simple terms, decoupled Drupal architecture separates the front end from the back end. Although the content management would still be provided by Drupal, you would have free rein over the presentation layer; you would not be bound to use any of Drupal’s themes and templates. This approach mainly needs Drupal to act as a content repository. Although it is a relatively new trend in the CMS world, many in the Drupal community have taken it up. It has been gaining more and more traction because of the freedom the decoupled architecture provides, offering the advantage of capitalising on the many front-end technologies available outside of Drupal. Learn everything about decoupled Drupal here.

Two rectangular representations are shown to highlight the difference between traditional Drupal architecture and decoupled Drupal architecture.


In essence, both the monolithic and decoupled approaches can be deemed similar in the sense that both rely on Drupal’s back-end technologies. However, when the front end or presentation part comes into play, they start to look vastly different. 

Different Ways of Leveraging Decoupled Drupal Architecture

Now that we know how the traditional and Decoupled Drupal architecture are different on the foundational level, let us delve deeper into the different ways of decoupling Drupal architecture.

Drupal was founded as monolithic software performing every task needed for website development. This architecture has total control over a project, in terms of its presentation (all the visual elements, along with in-place editing and layout management) as well as its data. At the risk of sounding like a broken record, I would add that Drupal in its traditional form is still the most used version of the software, primarily because of the control it gives to editors.

Coming to the decoupled architecture, the story changes. As I have already said, the decoupled approach mainly relies on Drupal for its back-end capabilities; the front end may or may not become entirely independent of Drupal. There are two broad categories of this architectural approach.

The Drupal logo is in the center with four dialogue boxes that are describing the four approaches of Drupal architecture.


Progressively Decoupled Drupal Architecture

A major motivation behind taking up the decoupled approach is the developer’s wish to use more JavaScript. However, using JavaScript does not necessarily mean giving up Drupal’s front end. With the progressive approach you can do both: you keep Drupal’s front end and add the JavaScript framework of your choice to it. The JavaScript framework can power a block, a component or an entire page body. 

The layout of a progressively decoupled Drupal architecture is shown to depict its many heads.


However, there is one thing to consider in this approach: the more JavaScript a page uses, the less control Drupal’s administrative features retain over it. Read more about progressively decoupled Drupal here.

Fully Decoupled Drupal Architecture

In a fully decoupled Drupal application, you see a full separation of the presentation layer from the rest of Drupal. The client-side framework renders the presentation layer (and may pre-render it server-side), while the CMS acts purely as a data provider. An API layer connects the two, retrieving data from the back end and providing it to the front end. It has to be understood that many Drupal features, like in-place editing, lose their functionality or become unavailable in this approach. However, you will be able to exert a great deal of control over the presentation layer with other technologies; React, Angular and Gatsby are a few of them.

You can also go for fully decoupled static sites. Static sites were introduced to tame the increased complexity of JavaScript without hurting a site’s performance. For this approach, a static site generator like Gatsby is a great tool: it takes data out of Drupal and generates a site for you, so Drupal basically becomes a data source throughout the process. 

Being API-first CMS: Drupal Web Services

As the name suggests, being API-first means building an API before building the web application, which is then created around it. For decoupled applications to work effectively, Drupal has to be equipped with a top-notch API that is essentially the thread holding the front and back end together. The Drupal community has done a great job in this matter. 

The provision of pretty impressive web services that have been years in the making is proof of that. REST can be considered the paradigm here; however, for applications that do not communicate using it, Drupal has other web services modules as well. Since Drupal was able to provide these and go beyond the degree of functionality offered by a plain REST API, it is all set to become the best open-source API-first CMS.

Primarily, decoupled Drupal uses one of the three APIs mentioned below. I would add that since the monolithic approach does not need an API to provide a connection, these are not relevant when using it.

REST API 

REST, or Representational State Transfer, is one of the most popular ways to let an HTTP client get the responses it wants from the content repository, which is Drupal. Drupal provides this web service out of the box, including the RESTful Web Services, Serialization, HAL, and HTTP Basic Authentication modules. With its proficiency at executing HTTP requests and responses, it easily aids in operating on the data managed by Drupal, be it comments, nodes or even users. 

A diagram represents the way an HTTP request and response works through an REST API in the Decoupled Drupal application.


In a simpler sense, the REST API acts as a mediator between the HTTP client and Drupal, helping the front and back end work harmoniously on whatever framework they like. 
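For instance, with core’s RESTful Web Services and Serialization modules enabled (and GET allowed on the node resource), a client can fetch a node as JSON; the hostname and nid below are placeholders:

```
GET /node/1234?_format=json HTTP/1.1
Host: example.com
Accept: application/json
```

The `_format` query parameter tells Drupal which serialization to negotiate for the response.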

JSON API 

JSON:API is gaining momentum as the go-to mediator in the decoupling process. One reason is that it builds on REST principles while being considerably more efficient: it offers a high degree of productivity, with fewer requests to the server, and it provides related resources without being asked. It is phenomenal at filtering, pagination and sorting, and its use of caching is almost impeccable. You tell me, how can it not be the go-to option here?
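A single hypothetical JSON:API request illustrates that filtering, sorting and pagination all happen in the query string (the endpoint assumes a stock article content type; the hostname is a placeholder):

```
GET /jsonapi/node/article?filter[status]=1&sort=-created&page[limit]=10&include=uid HTTP/1.1
Host: example.com
Accept: application/vnd.api+json
```

One round trip returns the ten newest published articles with their author entities included.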

GraphQL

GraphQL is a query language that was developed by Facebook. It is used for custom queries that retrieve data from the back end with a single request. The GraphQL Drupal module works on the same principle and even lets you update and/or delete content or an entire configuration. Moreover, GraphQL can act as a baseline for generating a custom-coded schema or extending it. I find this one detail to be really helpful when plugins are to be used to form a sub-module. 
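As a sketch (the exact field names depend on the GraphQL module version and the configured schema), a single request can name precisely the data it wants:

```graphql
# Hypothetical query: fetch one node's label and body in a single round trip.
query {
  nodeById(id: "1234") {
    entityLabel
    body {
      value
    }
  }
}
```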

Check out more alternatives among the modules available in the decoupled Drupal ecosystem here.

Front-end Technologies

Apart from the different APIs used in the decoupled Drupal architecture, the front-end languages used are also quite distinct from the monolithic version of the software, because traditional Drupal does not have them at its core. Although Drupal has its own JavaScript library, JS frameworks are outside its jurisdiction, and this is one of the major reasons for seeking the decoupled approach.
 
Let us have a look at some of these.

JavaScript 

A lion’s share of front ends are designed with a JavaScript framework at their core. Since JS offers a higher degree of interactivity, it often acts as the wiser choice. There are three popular frameworks in use:

React 

React is known to deliver amazing digital experiences when combined with Drupal. With big names like Facebook, Reddit and Airbnb as its users, that does not come as a surprise. React splits code into components, which is immensely helpful for debugging and enhances the reusability of the code. If you want to produce results that are both SEO-friendly and very responsive, React is the perfect option for you.  

Angular 

Angular was initially introduced by Google in 2009, but the 2016 re-release was entirely rewritten to make the development of mobile and web applications very easy. This is especially true because it uses HTML in its simplest form to clearly define user interfaces, eventually making web applications a trifecta: interactive, functional and extremely hard to break. Perhaps this is why The Guardian and Gmail proudly use Angular for their front ends.

Vue 

Vue is a JavaScript framework primarily used for building UIs and single-page applications; its best implementation is seen in creating the view layer of a web application. Since Vue is quite lightweight, it is highly adaptable, and it can integrate with other libraries and tools to achieve whatever is desired in an application. The combination of Drupal and Vue is indeed a powerful one, as the former’s control over the back end and the latter’s client-side handling make creating large-scale applications a walk in the park. 

Static Site Generator 

A Static Site Generator, or SSG, is a tool used to build static websites from a pre-existing set of input files. It is often regarded as a middle ground between a hand-coded static site and a CMS. A few SSGs deserve a mention here.

Gatsby 

An open source framework based on React, with every advantage of the JAMstack (JavaScript, APIs and Markup), be it performance, scalability or security. That is the simple definition of Gatsby; impressive, isn’t it? Suffice it to say that Gatsby is jam-packed with features, the best of which is its ability to pre-generate pages, making it much faster than dynamic sites. You will be able to build a complete website faster than many can create just a prototype.

Metalsmith

Metalsmith is one of the simplest SSGs in use, and it is pluggable. It produces static build files by extracting information from source files, manipulating it and writing the result to files in a destination directory. From grouping files to translating templates, it can perform numerous manipulations, all through its plugins, applied consecutively. Not only does the use of plugins simplify things, it also gives users the liberty of choosing and implementing only the ones they need.

Tome 

Tome stores your content in static JSON files and generates static HTML from your very own Drupal site. With Tome, you can choose from the large number of themes and modules found in Drupal, create content, and simply push the static HTML to production; your work is done. The fact that it lets you use Drupal without any security or performance worries is another bonus.

All of these are regarded as the top-notch front end technologies for web building and you may very well know that nothing can beat Drupal at its back end game, so with Decoupled Drupal you get to have the best of both worlds, the worlds inside and outside of Drupal.

Conclusion 

The fundamental difference between the Monolithic and Decoupled architectures lies in how much of Drupal's entire software stack is used. Neither approach is right or wrong when choosing Drupal for your project. Your choice comes down to the needs of your project and your intentions for it. For instance, you might not want to create JavaScript interactions for your project, so you could use Drupal as is, which is the Monolithic version.

On the contrary, if your project mandates the use of JS or static site generators, the traditional approach will not cut it for you. Decoupled Drupal has to be taken up, and there is no harm in that either. Many big names, like Weather.com, NBC.com and Warner Music, use the Decoupled architecture and have seen great results.

Whatever approach you choose has to align with the needs of your project. When it does, trust me, it will be a winner for you.

Oct 15 2020

Goose, the load testing software created by Tag1 CEO Jeremy Andrews has had a number of improvements since its creation. One of the most significant improvements is the addition of Gaggles.

A Gaggle is a distributed load test, made up of 1 Manager process and 1 or more Worker processes. The comparable concept in Locust is a Swarm, and it's critical for Locust as Python can only make use of a single core: you have to spin up a Swarm to utilize multiple cores with Locust. With Goose, a single process can use all available cores, so the use case is slightly different.

As we discussed in a previous post, Goose Attack: A Locust-inspired Load Testing Tool In Rust, building Goose in Rust increases the scalability of your testing structure. Building Goose in this language enabled the quick creation of safe and performant Gaggles. Gaggles allow you to horizontally scale your load tests, preparing your web site to really take flight.

Distributing workers

Goose is very powerful and fast. It is now so CPU-efficient that a single process can easily saturate the bandwidth of a network interface; you'll likely need multiple Workers to scale beyond a 1G network interface.

In our Tag1 Team Talk, Introducing Goose, a highly scalable load testing framework written in Rust, we introduced the concept of Workers.

In a Gaggle, each Worker does the same thing as a standalone process with one difference: after the Worker's parent thread aggregates all metrics, it then sends these metrics to the Manager process. The Manager process aggregates these metrics together.

By default, Goose runs as a single process that consists of:

  • a parent thread
  • a thread per simulated "user" (GooseUser), each executing one GooseTaskSet (which is made up of one or more GooseTasks)
  • a Logger thread if metrics logging is enabled with --metrics-file
  • a Throttle thread if limiting the maximum requests per second with --throttle-requests

Distributed in area, not just in number

Sometimes it's useful to generate loads from different places, clouds, and systems. With a Gaggle, you can spin up VMs in multiple clouds, running one or more Workers in each, coordinated by a Manager running anywhere. Typically, the Manager runs on the same VM as one of the Workers -- but it can run anywhere as long as the Workers and Manager have network access to each other. For additional information, see the Goose documentation on creating a distributed load test.
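As a rough sketch of that setup (the flag names below follow the Goose documentation but may differ between Goose versions; `load_test`, the IP address, and the counts are placeholders):

```shell
# On the Manager VM: coordinate 1,000 simulated users across 2 Workers.
./load_test --manager --expect-workers 2 \
            --manager-bind-host 0.0.0.0 --manager-bind-port 5115 \
            --host https://example.com --users 1000

# On each Worker VM: connect to the Manager and wait for the start signal.
./load_test --worker --manager-host 192.0.2.10 --manager-port 5115
```

Once the expected number of Workers have connected and reported ready, the Manager broadcasts the start signal and each Worker runs its share of the load independently.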

Summarized metrics

While metrics are usually collected on the Workers, a Gaggle summarizes all metrics from its tests by sending them to the Manager. A Goose load test is a custom application written in Rust, and instead of (or in addition to) printing the summary metrics table, it can return the entire structure so your load test application can do anything it wants with them. You can use these load tests and metrics to track the performance of your web application over time.

GooseUser threads track details about every request made and every task run. The actual load test metrics are defined in the Goose metrics struct. Workers send metrics to the Manager, which aggregates them all together and summarizes them at the end of the load test.

By default Goose displays the metrics collected during the ramp-up period of starting a load test (i.e., first there's 1 user, then 2, then 3, and so on, until all N users in the test have started). It then flushes those metrics and starts a timer, so that going forward it only collects metrics while all simulated users are running at the same time. It then displays "running metrics" every 15 seconds for the duration of the load test. Finally, when the test finishes (by running a configurable amount of time, or when ctrl-c is hit on the Manager or any Worker) it displays the final metrics.

The parent process merges like information together; instead of tracking each "GET /" request individually, it tracks how many requests of that kind were made, how fast the quickest was, how slow the slowest was, total time spent, total requests made, times succeeded, times failed, and so on. It stores them in the GooseTaskMetric and GooseRequest structures. Individual GooseUser threads collect metrics instantly and push them up to the parent thread, which displays real-time metrics for the running test every 15 seconds.
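As an illustrative sketch of that merge logic (plain Python, purely for exposition; Goose's real aggregation lives in its Rust GooseRequest structures, and the field names below are invented):

```python
from dataclasses import dataclass

@dataclass
class RequestMetric:
    """Aggregated metrics for one request type, e.g. "GET /"."""
    count: int = 0
    fastest_ms: float = float("inf")
    slowest_ms: float = 0.0
    total_ms: float = 0.0
    succeeded: int = 0
    failed: int = 0

    def record(self, elapsed_ms: float, success: bool) -> None:
        # Merge one sample into the running aggregate instead of storing it.
        self.count += 1
        self.fastest_ms = min(self.fastest_ms, elapsed_ms)
        self.slowest_ms = max(self.slowest_ms, elapsed_ms)
        self.total_ms += elapsed_ms
        if success:
            self.succeeded += 1
        else:
            self.failed += 1

# A GooseUser thread would push samples like these up to the parent.
metric = RequestMetric()
for elapsed, ok in [(12.0, True), (45.0, True), (30.0, False)]:
    metric.record(elapsed, ok)

print(metric.count, metric.fastest_ms, metric.slowest_ms)  # → 3 12.0 45.0
```

The same merge applies one level up: the Manager combines the per-Worker aggregates the same way it would combine individual samples.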

If the Metrics Log is enabled (by adding --metrics-file=), Goose can record statistics about all requests made by the load test to a metrics file. Currently, each Goose Worker needs to retain its own Metrics log, and these can then be merged manually; the Manager cannot store a Metrics log (if you are interested in the Manager storing the log, follow this issue). The metrics collected and displayed are the same whether standalone or in a Gaggle. The documentation on Running the Goose load test explains (and shows) most of the metrics that are collected.

Automated tests confirm that Goose is perfectly tracking actual metrics: the test counts how many requests the web server sees, and confirms it precisely matches the number of requests Goose believes it made, even when split across many Gaggle Workers.

Gaggles in use

During development, Jeremy spun up a Gaggle with 100 Workers. The coordination of the Gaggle itself has no measurable performance impact on the load test because it happens in an additional non-blocking thread. For example, if we start a 5-Worker Gaggle and simulate 1,000 users, the Manager process splits those users among the Workers: each receives 200 users. From that point on, each Worker does its job independently of the others: they're essentially just 5 different standalone Goose instances applying a load test and collecting metrics.
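The user split itself is simple arithmetic; a hypothetical sketch (not Goose's actual code) of how a Manager might divide users among Workers:

```python
def split_users(total_users: int, workers: int) -> list[int]:
    """Divide simulated users evenly; early Workers absorb any remainder."""
    base, remainder = divmod(total_users, workers)
    return [base + (1 if i < remainder else 0) for i in range(workers)]

print(split_users(1000, 5))  # → [200, 200, 200, 200, 200]
```

However the split is computed, the key property is that every simulated user is assigned to exactly one Worker before the start signal is broadcast.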

Communication between the Workers and the Manager is handled in a separate, non-blocking thread. The Manager waits for each Worker to send metrics to it, and aggregates them, showing running metrics and final metrics. For specifics on how the Workers and Manager communicate, read the technical details.

Once a Worker gets its list of users to simulate, it tells the Manager "I'm ready". When all Workers have told the Manager they're ready, the Manager sends a broadcast message, "Start!", and from that point on Workers are never blocked by the Manager.

Verified consistency of Workers

When running in a Gaggle, each copy of Goose, meaning the Manager and all the Workers, must run from the same compiled code base. We briefly explored trying to push the load test from the Manager to simpler Workers, but Workers run compiled code, making this incredibly non-trivial.

Running different versions of the load test on different Workers can have unexpected side-effects, from panics to inaccurate results. To ensure consistency, Goose performs a checksum to confirm that all Workers are running the same load test. Goose includes a --no-hash-check option to disable this feature, but leaving it enabled is strongly recommended unless you truly know what you're doing.
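Conceptually, the consistency check works like any build-fingerprint comparison. A hypothetical Python sketch (Goose's real check is implemented in Rust and hashes the load test itself; the inputs below are placeholders):

```python
import hashlib

def load_test_checksum(build: bytes) -> str:
    """Fingerprint a load test build so Manager and Workers can compare."""
    return hashlib.sha256(build).hexdigest()

manager = load_test_checksum(b"loadtest-v1")
worker  = load_test_checksum(b"loadtest-v1")
stale   = load_test_checksum(b"loadtest-v2")

assert manager == worker  # same build: the Worker is accepted
assert manager != stale   # mismatched build: the Worker is rejected
```

A Worker whose fingerprint does not match the Manager's is refused before the load test starts, which is exactly the failure mode --no-hash-check would mask.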

Taking flight

Using Gaggles with Goose can significantly increase the effectiveness of your load tests.

While standalone Goose can make full use of all cores on a single server, Gaggles can be distributed across a group of servers in a single data center, or across a wide geographic region. Configured properly, they can easily and effectively use whatever resources you have available. Goose tracks the metrics you need in response to those Gaggles, helping you now, and in the future.

Photo by Ian Cumming on Unsplash

Oct 15 2020
Serpentine road in the forest

An agency is selected, the goals are agreed upon and the work on the website is commissioned; then it's time to tackle the many to-dos. But how do we, as a project team, keep an overview and plan the next steps together? This is what task management tools are for. As issue tracking systems, they manage requirements and upcoming tasks for collaborative work.

At undpaul we like to use Jira Software from Atlassian for requirements management. Jira is very comprehensive and thus offers many possibilities for customization, yet it remains intuitive to use. Jira can map complex processes, but even a simple setup yields a very helpful tool for joint project work.

Jira configuration in our web projects

Every project and every customer is different. This means that the Jira configuration is also individual and differs in each of our projects. However, a certain basic configuration has proven itself for our Drupal projects.

Manage different requirements with Issues

All requirements are collected in Jira as so-called Issues. We use different issue types to structure the different requirements.

  • A story describes a new feature or function for the website. Since we work according to Scrum, the story (in Scrum "User Story") focuses on the end-user perspective and mainly includes the functional acceptance criteria.
  • A task describes a rather technical task that is necessary for the project but has no direct visibility to the end users.
  • A bug describes an unexpected, incorrect behavior on the website, especially for functions that have already been implemented correctly in previous requirements.
  • An epic basically represents a large user story and collects stories and tasks that belong to one topic.

Each issue passes through different statuses during editing. In our Jira projects, we use the statuses mainly to differentiate the degree of completion and the different areas of responsibility. An issue usually passes through the following statuses: In planning - To Do - Needs Info - In progress - Testing - Customer acceptance - Done. The workflow in Jira allows us to control the order in which the issues pass through these statuses.

Views for different purposes

The Jira backlog contains all issues of the project that still need to be worked on and do not have the status "done". It gives us an overview of all tasks still to be done. In the backlog our customers sort the requirements by priority and together as a project team we plan upcoming sprints.

Furthermore, we use two different boards, i.e. different views on the issues. 

The Sprint Board is our working tool in the active development phase, the sprint. During the sprint period, the development team works its way through all issues on the Sprint Board, pushing them from left to right through the statuses until they are completed. With the Sprint Board, our customers are directly involved in the development work: all communication runs through Jira comments, and issues in the Customer Acceptance status form a work package for the customers.

We use the SLA Board for customers with an active maintenance contract. It is an ongoing collection of issues covering errors on the website or anything that hinders its use. Our customers create these issues themselves in their Jira project; we receive a push notification and can start working on them.

Setting up a free Jira instance

Some of our customers already use Jira within their own company and can add us to their instance for the web project. For other customers, we set up a new Jira instance for the web project. This is easier and (up to a certain point) cheaper than expected: the cloud version is free of charge for up to 10 users. In most of our projects these 10 users are sufficient.

Setting up a free Jira project is quite simple. Follow the steps below and in 5 minutes the new project will be ready.

  1. Log in to the Atlassian website with a user account.
  2. Enter a name for the instance.
  3. Invite other users. This is optional and can be done later.
  4. The next step is to create the actual Jira project. Several different Jira projects can be created in one Jira instance.
  5. As a basis for a project we choose the Scrum template. Within the project different boards can be created later.
  6. Enter the name and key for the project and then it is already created.

You can read more about this in Atlassian's more detailed instructions.

Multiple Jira instances at a glance

As an agency, we manage various customer projects in parallel - usually each with its own Jira instance. Not losing the overview can be quite a challenge. 

We found the best solution for us in the Jira Watchtower App. It visualizes issues from different Jira instances in one view. Thus, we have all our project tasks clearly arranged in one undpaul-internal board.

Oct 14 2020

Our normally scheduled call to chat about all things Drupal and nonprofits will happen TOMORROW, Thursday, October 15, at 1pm ET / 10am PT. (Convert to your local time zone.)

No set agenda this month, so we'll have plenty of time to discuss whatever Drupal-related thoughts are on your mind. 

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc: https://nten.org/drupal/notes

This free call is sponsored by NTEN.org and open to everyone.

View notes of previous months' calls.

Oct 13 2020

The virtual sector has become increasingly prominent today. From trading shares to buying groceries, every single aspect of our life can be fulfilled in the virtual world. And do you know what lies at the heart of this world? Since it is online, you must be thinking that a website has to be at the core. And you are right to think so. A website is the foundation of every online business, from blogs to e-commerce, every virtual service provider needs a website.

Now, just building a fully functional website is not enough. You have to keep updating it every now and then, simply because what was trending two years ago is considered antiquated today, and we do not want your website to be regarded as out of fashion. 

When deciding to revamp your website, you have to consider something very carefully: the difference between a website refresh and a website redesign. If you want minor changes, like changes to its appearance, you would call it refreshing the website. On the other hand, if you want to change your website completely, you would need to redesign it. This transforms the makeup of your website, from the content to the appearance and sometimes even the code. It is like converting a hatchback into a limousine; even though it would be the same car, it would be tremendously different.

So, if you are planning to redesign your website, you have landed in the right place. This article covers everything related to redesigning a website; let us begin with the reasons for it.

The Need for Website Redesign

Website redesigning is a giant undertaking that can be both costly and time consuming, so you can't just do it on a whim; there have to be reasons. Understanding those reasons is as important as the actual undertaking.

I have compiled a list of common issues faced by website developers and operators that have compelled them to overhaul their websites, so that you can answer the important question: when should a website be redesigned? If you are experiencing more than a couple of them, consider it a sign to say goodbye to your current website.

Expansion of the company

Every business is in the market for growth. This growth means enhancing the products and services provided, or even starting a whole new department. A website is the gateway through which customers come to know your services, and if it isn't doing that, you will potentially lose your customers to your competitors. So remember: growth is usually followed by a website redesign.

Changing market trends

If there is one thing that is fickle in this world, it is market trends. They change faster than light, making your head spin. One day gaudy and eccentric websites are all the rage; the next, minimalists reign online. The point is that your website has to align with market trends, especially in terms of design.

Unresponsive to some users 

By responsive, it is meant that your website can easily be accessed from any device, be it a smartphone, a tablet or a desktop. Sadly, there are a number of websites that are inaccessible from a smartphone, and that just about destroys your leads. Adaptability, or responsiveness, can be achieved through redesigning.

Poor conversion rates 

The primary purpose of a website is to generate sales; if it is not doing that, then what is the point? Your customer base is basically decided by your website and how compelling it is. The more interest it generates, the higher the chance that visitors will make a purchase. Text that is too complicated and packed with corporate jargon, too few CTAs, a lack of aesthetics: all of these contribute to fewer conversions from your visitors. And the only way to overcome this is redesigning.

Complicated user interface 

A good website allows the user to easily manoeuvre around it and get the full experience of what the developers wanted to provide. If yours does not, then your website lacks functionality and everything on it is as good as useless. The solution is to simplify the user experience, and redesigning is the route to take.

Revitalise your brand

Brand image is very crucial because it is how customers perceive you and your brand. It is up to your brand image to make customers think happy thoughts or get all woeful. Coca-Cola is always associated with happiness and joy, because that is the image it has built, and its website helps a great deal in that. Now, if you think that your brand's perception is not aligning with its core, you will need a redesign to make the alignment happen.

Now, that you know why websites need to be redesigned, let us get into the process of doing it.

The Website Redesign Strategy 

Redesigning a website is not something that happens in a day. It needs time and resources put to good use. The process is extensive: you need to be aware of a lot of things and ensure everything works for you and aligns with your vision for the future of the website. Even the slightest mistake can become tragic for your website. That is why a holistic approach is required, and here are the website redesign tips you will need for it.

Analyse the expense and its bearing on you 

The first step of a website redesign commences once its budget has been decided. The financial implications of the redesign have to be considered; you cannot go wayward with the money, as that would not be a wise choice.

Your website will be your first impression on a consumer, and you want it to be impressive. For that to happen, you need substantial financial resources. Don't take up redesigning until you have them, because doing a half-hearted job is just a waste, and you won't be winning any website awards for it. So, analyse the expenditure, set aside the needed amount and delve right in.

Inspect the existing website

Once the money is ready, the process officially commences, and it does so with an inspection of the past. By past, I mean your current website, which will become your past after the redesign. Your website, such as it is, has done something for you; that is why you have had it for as long as you have.

Micro-analyse every CTA, every image, every click you received and how often you received it. From bounce rates to time spent on a page, from the number of visitors to the number of sales, inspect everything thoroughly; do not leave anything uninspected. I am emphasising everything again because it is crucial. This is the beginning, and a successful beginning leads to a successful end result.

If you want help doing all of that, Google Analytics can sort you out. Once you have done it, take note of the things that work and the things that don't. The advantages will carry over into the redesign, while the disadvantages will be worked on until they are no longer disadvantages.

You must be wondering why all of this is necessary. Why not just start afresh? The problem with starting from scratch is that you might end up making the same mistakes again, which would only waste time, effort and money. Inspecting the current website not only gives you a direction and a blueprint for the redesign, it also saves the extra effort. So, why not?

Establish priorities to make goals

After the inspection is done, you will have a fair idea of where you stand with your website. Knowing the key areas that are hampering your conversion rates, or even your visibility on the web, helps you understand and establish priorities for the redesign.

For instance, your content could be amazingly good, engaging and informative, yet the way it is presented makes it seem boring at best. In that case, changing the content should not be a priority; its presentation should be.

So, prioritising the areas that need focus from the get-go is an important step in website redesigning, not least because prioritising helps in setting the design goals.

Once you know what to focus on, you can start working on it first and get better results. It is like knowing your weaknesses and working on improving them. I was weak in science at school, pathetic really, so I put in extra effort by prioritising and started seeing results. In website redesigning there won't be any mysterious scientific concepts, but you could endeavour to elevate your visitor count, enhance domain authority or improve your SEO rankings, and eventually increase your sales. All of this is achieved by prioritising your goals. You can't simply take on everything at once; you need preparation, and this is it.

Create a unique brand image 

If you have a website, you are a brand. The question is whether your brand is appealing enough for customers to pull money out of their pockets and buy from you. The answer lies in the image you create for your brand. A good way to create an impactful persona for your brand is your website, and redesigning it can easily do that.

The first thing a customer sees on your website should effectively depict what you are about. The user should not have to decode it; trust me, he would go to your competitor's website instead. And do I have to say that we do not want that?

Amazon's home page, showing a host of attractive offers (Source: Amazon)

This is Amazon's home page, and it should be considered the benchmark in branding. I would not leave the site until I had scanned through the "Top picks for your home." With so few words, Amazon has portrayed all that it is about and engaged the user.

I know every website cannot be like this, but the conciseness can be achieved; it has to be. That is the core of branding. Your redesign strategy has to ensure that your brand image works in your favour rather than against you: make it unique, make it enticing and make it worthwhile for the customer.

Assess the competitors 

You may be at war with your competitors, but that does not mean you cannot learn from them. Assessing how your competitors' websites look, the kind of services they provide and how functional they are can help you a great deal. Why? Because you and your competitors are in the same market, catering to the same customers, whose needs, wants, likes and dislikes are the same. So, if customers like something your competitors are providing, you can make the same change. What could be the harm?

Cater to your audience more than yourself 

A website's success depends entirely on its users; they are the ones who can make or break it. So, building a website with the end-user's thought process and needs in mind can make the difference between the success and failure of a website.

Obviously, your website is not going to be for everyone; it has to have a target audience or a buyer persona. Is it meant for retirees, middle-aged men or maybe adolescents? You have to decide.

For instance, an anti-ageing cream's audience is going to be over 25. It could be working women who stress so much at work that they have developed wrinkles at 30, or stay-at-home mothers who have very little time for skin care, or both. If your website and its design effectively address their needs, a conversion is almost guaranteed, and that is all we want.

Make the website responsive to everyone 

Today, websites are not only accessed through laptops and desktops; people browse from any device with a screen and an internet connection, especially smartphones. So, seeing something like "this website is incompatible with your device" is not going to generate many leads for you.

Mobile friendliness is crucial: websites need to be made responsive so that they reconfigure themselves to the screen size of whichever device is browsing them.

Focus on the software

Up until now, we have covered everything that could be redesigned on your site; now let us talk a little about the makeup of your website as well. A Content Management System, or CMS, is the way to go here. From developing the site to publishing regular content on it, the CMS will do just about anything you could want. Drupal, one of the leading CMSs by market share, can be a magnificent choice. Be it government websites, private companies or even sports stars wanting to create their online space, Drupal provides for all.

Journalism websites need a lot more flexibility in content than any other category. Since Drupal is the embodiment of flexibility and provides just the right set of core features and contributed modules, choosing it becomes a no-brainer. The Earth Journalism Network's website is a prime example of the remarkable improvements possible with a website redesign. The OpenSense Labs team migrated the old site, which was running on a Python-based CMS, to Drupal 8. This proved to be a remarkable move, as creating and managing content became far more efficient. Access the full case study of the Earth Journalism Network to know more.

The Website Redesigning Approach: ESR

The web will keep evolving at a pace faster than your redesigning abilities. You might think that is a bad omen, but it isn't, not completely. Yes, the web keeps evolving, but that does not mean your website becomes completely obsolete. A lot of things will still be working for you since the last redesign, and the things that are not, you can simply update.

All of this becomes a walk in the park with Evolutionary Site Redesign, or ESR. This approach continually tests problematic areas of your website, giving you feedback about their performance.

A line graph showing the strategy to implement during a website redesign (Source: Wider Funnel)

It is not only a faster approach to redesigning; it also greatly reduces risk, as you make incremental improvements consistently instead of a complete overhaul. The result is a better user experience for your visitors, and everybody stays happy. That is why ESR is often regarded as the right website redesign approach.

The Mistakes to Avoid 

Now that you know all the things you need to address, it is time to learn what to avoid in a website redesign strategy. These mistakes are more common than you would think, and they can become quite costly for your website.

Nondescript scope 

A scope of work that is both doable and well-defined sets the pace for any project. It is like a roadmap that tells you what the next destination is and whether to take a right or a left turn to reach it without much hassle. Taking on a website redesign project is a huge step, so you have to be clear, explicitly clear, about what you want to do and how it will be done.

The scope also lays emphasis on accountability, as it clearly allocates duties to members of the team. So, when something doesn't happen, you know where to point the finger. Without a well-defined scope, the smooth sailing of the redesign project is not possible.

Not emphasising on the content 

The content on your website is probably one of its most important aspects. Think of it as the voice of your website, since it clearly speaks for you and your brand. So, ensuring that the content is well-written, engaging, simple and SEO friendly at the same time should be a priority. You can hire professionals to write it for you. The thing is, high-quality content needs time from your end, so failing to get a head start on it is not the way to go.

Biting off more than you can chew 

Another mistake that often happens in a redesign project is that the owners overreach. They try to do everything possible to make the website better than the existing one. Ambition itself is not a mistake, but the timeline of the project may disagree. Completing a project in its designated time is what makes it worthwhile. So, when you keep adding tasks to an ongoing scope, you will just be stretching yourself for time and the project will only get delayed.

This does not mean that you cannot add many features to your website; you certainly can, but you should add them in phases. Doing everything at once and doing one thing at a time are different things, and let me tell you, the latter almost always succeeds.

Compromising on one aspect for the other 

Another pitfall experienced in redesign projects is compromise. In the race to finish a project, developers start compromising on one aspect of the website for another, and this disrupts the overall redesign. For instance, you cannot just focus on the aesthetics of your website; its functionality and technical aspects need equal consideration. The user might come to your website and even stay awhile because of the aesthetics, but if your website doesn’t deliver in terms of functionality, no one will stay longer than a while, and conversions will seem far-fetched.

The Right Partner for Website Redesign

As I have already mentioned a few times, website redesigning is a colossal undertaking, so it needs the support of a team to make it successful. Many websites have taken up the redesign endeavour on their own, but only because they have a comprehensive team that can perform all the redesign duties, including a developer, a designer, a marketing strategist, an SEO expert, and copywriters, to name a few. If you have such a team, take up the redesign yourself. If not, you should consider a website redesign agency.

Here are a few pointers to help you:

What is its niche?

Before checking anything else, you have to find out which area the organisation excels in. Is it an ad agency or a marketing agency, or does it specialise in technical aspects? Knowing all of this will help narrow down your search for the right partner, since the speciality of the agency has to be in accordance with your business. A high-profile public enterprise can go for an ad agency that deals with celebrities.

What approach does the agency use?

The agency’s approach towards tackling the redesign is crucial, because it will set the pace for the website’s future growth. So, if an agency focuses on enhancing the user experience as its core, you should hire it without a second thought.

How good is it performance-wise?

Ensuring that the redesign agency has had a good track record in its past projects is also necessary. To do so, you can simply check the testimonials on its website. This way you can predict how well it would tackle your redesign.

Can it perform an audit?

An audit is very valuable when taking up a website redesign. So, you must go for an agency that will perform a thorough audit, of both the content and the architecture of the website, before working on the changes. 

Do they know your target audience?

A website redesign won’t do you much good if it is not targeted at your potential customers and does not compel them to buy from you. So, an agency that has, or will acquire, knowledge about your target customers is going to be the right fit for you.

How extensive are its services?

Lastly, the range of services a redesign agency provides will be the last stepping stone in your decision. It cannot just provide a marketing strategy or just implement SEO techniques; the services have to be all-inclusive. 

All of these questions need to be addressed before you take up a partnership with a redesigning agency. 

Website Redesigning: Not a One-Time Gig 

Now, you are all set to make the redesign decision. However, I would like you to know one more thing. Website redesign never ends; it is a perpetual project that will keep returning every few years, or as often as your resources allow. Trends keep changing in the virtual world, and your website must change with them, regardless of whether it is responsive. If you have a bigger establishment with resources in abundance, you can redesign at least once a year. On the other hand, if resources are scarce, you can wait as long as half a decade, provided that your website is responsive.

Redesigning is similar to changing your car; you have to do it every few years. Driving the same car for a decade would not seem appealing or a sign of growth, what do you think?

Oct 13 2020

Today, business process automation is not just considered by bigger organizations but by SMBs too. And why not? When you can have a process that does your work while reducing your overheads in the long run, why wouldn’t you choose one? Business process automation (BPA) uses a software program or platform instead of human intervention to improve your cost and workflow efficiency. It can be implemented in various stages and processes of your organization, like marketing, sales, project management, or any other process that needs automation. Drupal 8 and Drupal 9 offer many modules and distributions that can help you automate your business processes easily. Let’s look at some of them in this article.


What can a Business Process Automation do for your Organization?

•    Can streamline your processes. Put an end to that disorder. Manage and monitor your processes from a centralized business automation solution.
•    Know exactly who has done what. When the entire process is streamlined and transparent you are enforcing accountability.
•    Minimize errors. To err is human indeed. When you eliminate more work from humans, you can also expect a lot less to go wrong in a process.
•    Reduce costs. Yes – the most important benefit of implementing a business process automation tool. When errors and human hours are reduced, your costs can drastically decrease.

How Does Drupal Make Business Automation Easier?

Drupal powers millions of websites for organizations ranging from SMBs to very large enterprises, every one of which already uses or will eventually need a business process automation solution. Being an open-source content management system, Drupal allows for easy customized integrations to meet every organization’s requirements. Drupal has modules and distributions that help organizations integrate their websites with many popular and not-so-popular BPA solutions. The integration is so seamless that it feels like you’re working with a single solution. For companies that have very specific requirements, a custom integration can be built on Drupal because of its open-source framework. Let us look at some of the top Drupal 8 and Drupal 9 modules and distributions that can integrate with different business automation solutions available in the market.

Marketo MA

Marketo is one of the most popular digital marketing tools and can take care of your lead management, email marketing, consumer marketing and mobile marketing needs. Marketo users swear by its simplicity and ease of use and setup, and claim that it is a great tool for SMBs. The Drupal 8 Marketo MA module enables you to add marketing tracking capabilities to your Drupal website. Lead data can also be captured during user registration via Webform integrations. You will need an active Marketo account and, of course, an installation of Drupal 8 (or 9). 

Salesforce Suite

Claiming to be the world’s number one Customer Relationship Management (CRM) solution, Salesforce allows you to efficiently organize and manipulate your customer data using cloud computing. Right from sales and marketing to customer service, Salesforce enables organizations to put customers at the center of everything they do and keep them satisfied. With the Drupal Salesforce Suite module, you can synchronize Drupal entities like users, nodes, etc. with Salesforce CRM objects like contacts, organizations, etc. You can export Drupal data into your Salesforce CRM and import Salesforce data into your Drupal website. Any updates can be performed in real time or asynchronously.

Maestro

Have a workflow and need a perfect solution to automate it? Drupal’s Maestro module lets you automate your workflow processes with a visual flowchart that you can drag, drop, and connect. Sure, it can be used in content workflow processes too to track and moderate content revisions. But it mainly shines in more complicated business processes and use cases. Maestro can be extended to add custom functions and logic to suit every business structure. It integrates well with other significant Drupal features like Webforms, content types, triggers, rules, and more.

Drupal Maestro Module

RedHen CRM

A very humble and flexible CRM with functionality for handling your contacts, organizations and the relationships between them, RedHen CRM was initially built for common non-profit needs. RedHen CRM is a Drupal-based, completely open-source CRM, which makes it easy to mold to your organization’s requirements. It also allows for modern functionality like engagement tracking, customizable donation forms, etc. It does not just behave like a business automation tool integrated with your website; you can actually integrate RedHen CRM with other CRMs, like Salesforce and Blackbaud, for extended functionality. This Drupal module is compatible with Drupal 8 but does not support Drupal 9 yet.

MailChimp

Did you know that about 1 billion emails are sent through MailChimp every day? MailChimp is an email marketing service through which you can send marketing and informational emails to your opted-in customers. With MailChimp you can create multiple email campaigns, use its email campaign designer, email templates and data analysis, generate reports, segment audiences and much more. With the Drupal MailChimp module, your users can choose to opt into (or out of) any email list they want, and you can generate and send email campaigns right from your Drupal website. The MailChimp Drupal module supports Drupal 7 and 8 but is not compatible with Drupal 9.


Drupal PM (Project Management) Module

This can be termed a distribution rather than just a module, as it consists of a suite of modules that help generate the various components it serves. The Drupal Project Management module offers components like organizations, tasks, teams, projects, tickets, expenses, notes and time tracking. These components allow easy integration with other Drupal modules as well. It serves as a great work-planning and organizing tool for organizations. 

Drupal PM Module

HubSpot 

An inbound marketing tool, HubSpot is easy-to-use automation software that lets you attract visitors to your website and track and convert leads into customers. It also allows for easy integration with other automation tools like CRMs, email marketing tools, analytics tools, etc. You can also use it as your email marketing, social media publishing, and ad-tracking tool. The Drupal 8 HubSpot integration module integrates with Webform and the HubSpot API, submitting webforms to HubSpot’s lead management system or email marketing campaigns. This module supports Drupal 8 but not Drupal 9 as of now.

HubSpot Drupal Module
Oct 13 2020

The Drupal Association would like to congratulate our newest elected board member:

Pedro Cambra.

Pedro Cambra is a Drupal developer and consultant with extensive experience working on Drupal projects. He has worked in many different industries, including large organisations such as the United Nations, not-for-profits such as Cancer Research UK and Médecins Sans Frontières, and he also has a strong background working with large e-commerce integrations.

He currently works at Cambrico, a small Drupal shop he co-founded.

Pedro contributes to the Drupal project and community with a number of popular contributed modules and has helped organise events in Spain, Japan and the UK. He has been involved in the organisation of several DrupalCons and, in 2012, Pedro was elected by the Drupal community as a director of the Drupal Association for a two-year term.

We are all looking forward to working with you, Pedro.

Thank you to all 2020 candidates

On behalf of all the staff and board of the Drupal Association, and I’m sure the rest of the Drupal community, I would like to thank all of those people who stood for election this year. It truly is a big commitment to contribution and one to be applauded. We wish you well for 2020 and hope to see you back in 2021!

Detailed Voting Results

There were 10 candidates in contention for the single vacancy among the two elected seats on the Board.

920 voters cast their ballots out of a pool of 3209 eligible voters (28.7%).

Under Approval Voting, each voter can give a vote to one or more candidates. The final total of votes was as follows:

  • Pedro Cambra Fernandez: 360
  • Mike Herchel: 359
  • Adam Bergstein: 341
  • Surabhi Gokte: 286
  • Imre Gmelig Meijling: 263
  • Alejandro Moreno: 241
  • Samson Goddy: 129
  • Shodipo Ayomide: 122
  • Jim Jagielski: 108
  • Esaya Jokonya: 102

I’m sure we will all want to send our congratulations!

What’s next

The new term of the Drupal Association board starts November 1st. In the coming weeks, we will publish an update from the board with information introducing the 2020-2021 directors, updates for 2021 including strategic goals, and opportunities for the community to connect with the Board.  
 

Oct 13 2020

Distributions and modules for an eLearning website on Drupal

In our list, you will find only those profiles (website drafts with dummy content) and modules that are not locked or marked “outdated”; they are still supported by the community.

Opigno LMS

Opigno is a Drupal-based distribution that turns the CMS into an LMS.

Opigno LMS is fully compliant with SCORM 1.2, SCORM 2004 v3, and xAPI. It integrates with the H5P JavaScript technology, making it possible to create rich interactive training content.

The Opigno Messaging and Opigno Forum modules provide opportunities for discussion among students.

The Opigno Instructor-led Trainings module is needed to simulate the presence of a teacher, monitor attendance, and rate students.

The system has its own application store where apps can be downloaded and installed without having to update the LMS.

BigBlueButton API

Since 2007, the BigBlueButton open-source system has been used in distance learning for video conferences. Teachers and students can share pictures, videos, PDFs, and Word documents, show presentations, communicate in chat rooms, and even "raise a hand" if one wants to speak up. The BigBlueButton API was released in 2010. It can be integrated with many systems, including the Moodle LMS, the Redmine project management system, and the Drupal, WordPress, and Joomla CMSs.
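To give a flavor of how those integrations talk to BigBlueButton: each API call is signed with a SHA-1 checksum computed over the call name, the query string, and the server's shared secret. Below is a minimal sketch of building a signed `create` call; the host, meeting values, and secret are all placeholder assumptions.

```shell
# BigBlueButton API calls carry a checksum computed as
# sha1(callName + queryString + sharedSecret).
# Host, query values, and secret below are placeholders.
CALL="create"
QUERY="name=Class101&meetingID=class-101"
SECRET="my-shared-secret"
# sha1sum ships with GNU coreutils; on macOS use "shasum" instead.
CHECKSUM=$(printf '%s' "${CALL}${QUERY}${SECRET}" | sha1sum | awk '{print $1}')
URL="https://bbb.example.com/bigbluebutton/api/${CALL}?${QUERY}&checksum=${CHECKSUM}"
echo "$URL"
```

An integration module like the BigBlueButton API module does this signing for you; the sketch only shows what travels over the wire.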

Opigno Webex App

WebEx is another conferencing service, this one belonging to Cisco. By including WebEx as an add-on tool in Opigno via the Opigno WebEx App module, organizers of the instructional process can create class schedules and send invites to students.

Course

Course allows you to build a course with any number of steps, consisting of any content entities (items of data that include text, images, attachments, etc.).

Course Credit is used to give students a record book of their earned credits.

Quiz

One of the ways teachers and students can monitor progress and the effectiveness of the curriculum is through tests and surveys. Quiz allows you to create multiple-choice questions and save the results in a database. Progress and results can be displayed both during and after a quiz.

Certificate

The presentation of a diploma or certificate confirms the completion of a course, so this is mandatory functionality for distance-learning websites. The Certificate module helps format such documents as PDF files, aided by HTML templates and WYSIWYG integration.

Open Digital Badging

The digital badges on a student's profile are proof of their success during the learning phase. The Mozilla Open Badges program is responsible for creating badges, and the Open Digital Badging module is a provider for Drupal websites. If the website runs on Opigno, the Opigno Mozilla Open Badges App will connect it with Mozilla Open Badges.

Social Login and Social Share

Registering through Facebook, Twitter, LinkedIn and other social media networks, and sharing content and learning achievements through them, is already the default option for most websites. Social Login and Social Share solve these problems on a Drupal website.

Oct 12 2020

After a couple of weeks of coding and testing, I've tagged a first alpha release of the ActivityPub module for Drupal! It implements the protocol so that you can communicate with other Drupal sites or platforms which support ActivityPub. Remote accounts on, for example, Mastodon or Pixelfed can now follow any user on a Drupal site, read content, like posts, and reply from their platform. It's only the tip of the iceberg of what's possible with ActivityPub, but the main focus at this point is discovery of content and users on remote platforms and performing typical social responses (reply, favorite, announce).

The core of the implementation uses Drupal plugins to map fields and content types to activity types. Being in alpha means that the interfaces will most likely change as bugs are fixed and new features will be implemented, but I'll document those when I tag a new version. For more information, installation and configuration, check the README which I will continue to update as well.

Main features

  • Enable ActivityPub per user, discovery via the Webfinger module
  • Map Activity types and properties to content types and create posts to send out to the Fediverse.
  • Create comments on content from remote users' Create activities
  • Accept follow requests, Undo (follow), Outbox, Inbox and followers endpoints
  • Send posts via drush or cron, with HttpSignature for authorization
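Discovery from the Fediverse side goes through WebFinger (RFC 7033): a remote server resolves a handle like `@user@example.com` to an actor URL by querying the standard well-known endpoint. A minimal sketch, with placeholder host and username:

```shell
# How a remote platform discovers a user: it resolves the handle
# against the site's standard WebFinger endpoint.
# "user" and "example.com" are placeholder values.
ACCOUNT="user@example.com"
HOST="${ACCOUNT#*@}"   # everything after the "@"
URL="https://${HOST}/.well-known/webfinger?resource=acct:${ACCOUNT}"
echo "$URL"
# Fetch the JSON descriptor with: curl -s "$URL"
```

On a Drupal site running this module, the Webfinger module answers that request for users who have ActivityPub enabled.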

Follow me!

Have an account on Mastodon, Pixelfed or Pleroma? Then you can follow me via @[email protected]. Discovery probably works on most other platforms as well, but I haven't interacted with those yet; I hope other people will download the module and start testing with those too.

ActivityPub for Drupal: first alpha release! If you are on a federated platform, you can now follow me via [email protected] :) #drupal #activitypub https://realize.be/blog/activitypub-drupal

Oct 12 2020

Take the time to audit existing content and content types. This is a great time to find and alleviate pain points in the content management process by combining, re-working, or removing content types.

This is a crucial exercise for getting buy-in and alignment from all of the project stakeholders, from editorial, business, and engineering. This detailed spreadsheet will surface important migration considerations that would otherwise have gone unnoticed until development was well underway. 

Plan ahead. Seriously — most issues we have seen pop up from a Drupal 8 build could have been worked out early on with proper planning. One important maxim of web development to never forget: The later you identify an issue, the more expensive it is to fix. In software, it is easy to save hours of planning with weeks of coding.

Ask why 

Good engineers keep asking about the what until all of the ambiguity is gone and they have a complete mental model of the problem. Good engineers are driven to build things the right way, with craft, and with care. The difference between good engineers and great engineers is that they take the time to stop and ask why.

Use this migration as an opportunity to reevaluate your site's technical architecture. One of the worst outcomes of a migration is moving over all of the existing bad architecture to a new platform. After you understand how something works, be sure to ask why it works that way.

We recently worked with a client that was upgrading a Drupal 7 site that featured a crucial distributor-finder directory experience. This map-driven experience is powered by a list of several thousand distributors stored in a third-party, proprietary tool with limited functionality. In the technical planning, we talked through how the data integration worked, who managed the data, and what the current state of the data was. Because the platform lacked support for custom fields, we were hamstrung in terms of how the new experience could work in Drupal 8.

However, halfway through, one of our team members asked, “Why do you store the data in this tool?” And of course, it turns out, no one on the client team had any idea why this tool was being used. And only a few people on their team used the tool. We were aware the client was also using HubSpot for some other use cases. So we asked, “Why not bring this data into HubSpot, simplify the integration work, since your site already has to integrate with HubSpot, and use custom fields to track the data better so the experience can be improved?” That opened the floodgates to all kinds of great possibilities, and what ended up being a far-improved distributor-finding experience. 

Here’s the point: Get out of your headspace. Go offsite. Use giant sticky notes or a whiteboard. Think about the user journeys and desired business outcomes first. Think about what will help get the client team promoted. And don't be too focused on the tech. Part of being a zealous advocate for your client's technology is to be the catalyst of perspective.

Don’t forget you are building a content management system…to manage content

Conduct an informal internal user test and watch how content editors are using the current site. Have the content editors record their screens while they do the tasks that they normally do. See if there is any confusion, or if content editors have trained themselves to work around bad authoring experiences. Ask content editors about the most frustrating parts of using the website, or what's on their website wishlist. The experience will be illuminating.

Use this information to inform how you rebuild your authoring experience. As you plan this out, go back to these content editors to review your plans and make sure nothing was missed. You should absolutely do this for your website's end users too, but content editors are often an afterthought.

Perform a technical audit

Are there features that aren't used, or features that have ballooned to the point they're used too widely? Is there code that's been plaguing developers for years, or code no one has been brave enough to touch? Are there security issues or performance concerns? Flushing out these issues and formulating an approach to solve them at the start of the project will help ensure a smooth update process by making sure the right issues are being addressed, and aren’t carried over into the new codebase.

Keep your eyes on Drupal 9

Drupal 9 is right around the corner. Unlike previous Drupal upgrades, upgrading from Drupal 8 to 9 should be a non-event. If it isn't, you are likely the one to blame. Be sure to make good Drupal 8 architectural decisions that won't complicate a future move to Drupal 9. Our Drupal 9 Readiness Guide is a great place to start.

Oct 09 2020
Start: 2020-10-22 18:30 - 20:00 America/Los_Angeles
Organizers: rlhawk, jcost, mortona2k
Event type: User group meeting

We will meet at 6:30 pm on Thursday, October 22. Note that this is the fourth Thursday of the month, not the usual third.

Agenda

  • Introductions
  • Announcements
  • Drupal news
  • Reports about what we liked and learned at BADCamp
  • Demos and discussions about debugging with XDebug

Location

Online; the URL for the meeting will be shared in the SeaDUG Slack workspace and with anyone who signs up for the event.

Oct 09 2020

For the Drupalize.Me site we have a functional/integration test suite that's built on the Nightwatch.js test runner. We use this to automate testing of critical paths like "Can a user purchase a membership?" as well as various edge-case features that we have a tendency to forget exist -- like "What happens when the owner of a group account adds a user to their membership and that user already has a Drupalize.Me account with an active subscription?"

For the last few months, running these tests has been a manual process that either Blake or I would run on our localhost before doing a release. We used to run these automatically using a combination of Jenkins and Pantheon's MultiDev. But when we switched to using Tugboat instead of MultiDev to build previews for pull requests, that integration fell by the wayside, and eventually we just turned it off because it was failing more often than it was working.

Aside: The Drupalize.Me site has existed since 2010, and has gone through numerous rounds of accumulating and then paying off technical debt. We once used SVN for version control. Our test suite has gone from non-existent, to Behat, then Casper, then back to Behat, and then to Nightwatch. Our CI relies primarily on duct tape and bubble gum. It's both the curse, and the joy, of working on a code base for such a long time.

I recently decided it was time to get these tests running automatically again. Could I do so using GitHub Actions? I have a bunch of experience with other CI tools, but this was my first time really diving into either of these in their current form. Here's what I ended up with.

  • We use Tugboat.qa to build preview environments for every pull-request. These are a clone of the site with changes from the pull-request applied. This gives us a URL that we can use to run our tests against.
  • We use GitHub Actions to spin up a robot that'll execute our tests suite against the URL provided by Tugboat and report back to the pull request.

Setting up Tugboat to build a preview for every pull request

We use a fairly cookie-cutter Tugboat configuration for building preview environments that Blake set up and I mostly just looked at and thought to myself, "Hey, this actually looks pretty straightforward!" The setup:

  • Has an Apache/PHP service with Terminus and Drush installed, and a MySQL service
  • Pulls a copy of the database from Pantheon as needed
  • Reverts features, updates the database, and clears the cache each time a pull request is updated
  • Most importantly, it has a web-accessible URL for each pull request

Here's what our .tugboat/config.yml looks like with a few unrelated things removed to keep it shorter:

services:
  php:

    # Use PHP 7.2 with Apache
    image: tugboatqa/php:7.2-apache
    default: true

    # Wait until the mysql service is done building
    depends: mysql

    commands:

      # Commands that set up the basic preview infrastructure
      init:

        # Install prerequisite packages
        - apt-get update
        - apt-get install -y default-mysql-client

        # Install opcache and enable mod-rewrite
        - docker-php-ext-install opcache
        - a2enmod headers rewrite

        # Install drush 8.*
        - composer --no-ansi global require drush/drush:8.*
        - ln -sf ~/.composer/vendor/bin/drush /usr/local/bin/drush

        # Install the latest version of terminus
        - wget -O /tmp/installer.phar https://raw.githubusercontent.com/pantheon-systems/terminus-installer/master/builds/installer.phar
        - php /tmp/installer.phar install

        # Link the document root to the expected path.
        - ln -snf "${TUGBOAT_ROOT}/web" "${DOCROOT}"

        # Authenticate to terminus. Note this command uses a Tugboat environment
        # variable named PANTHEON_MACHINE_TOKEN
        - terminus auth:login --machine-token=${PANTHEON_MACHINE_TOKEN}

      # Commands that import files, databases,  or other assets. When an
      # existing preview is refreshed, the build workflow starts here,
      # skipping the init step, because the results of that step will
      # already be present.
      update:

        # Use the tugboat-specific Drupal settings
        - cp "${TUGBOAT_ROOT}/.tugboat/settings.local.php" "${DOCROOT}/sites/default/"
        - cp "${TUGBOAT_ROOT}/docroot/sites/default/default.settings_overrides.inc" "${DOCROOT}/sites/default/settings_overrides.inc"

        # Generate a unique hash_salt to secure the site
        - echo "\$settings['hash_salt'] = '$(openssl rand -hex 32)';" >> "${DOCROOT}/sites/default/settings.local.php"

        # Import and sanitize a database backup from Pantheon
        - terminus backup:get ${PANTHEON_SOURCE_SITE}.${PANTHEON_SOURCE_ENVIRONMENT} --to=/tmp/database.sql.gz --element=db
        - drush -r "${DOCROOT}" sql-drop -y
        - zcat /tmp/database.sql.gz | drush -r "${DOCROOT}" sql-cli
        - rm /tmp/database.sql.gz

        # Configure stage_file_proxy module.
        - drush -r "${DOCROOT}" updb -y
        - drush -r "${DOCROOT}" fra --force -y
        - drush -r "${DOCROOT}" cc all
        - drush -r "${DOCROOT}" pm-download stage_file_proxy
        - drush -r "${DOCROOT}" pm-enable --yes stage_file_proxy
        - drush -r "${DOCROOT}" variable-set stage_file_proxy_origin "https://drupalize.me"

      # Commands that build the site. This is where you would add things
      # like feature reverts or any other drush commands required to
      # set up or configure the site. When a preview is built from a
      # base preview, the build workflow starts here, skipping the init
      # and update steps, because the results of those are inherited
      # from the base preview.
      build:
        - drush -r "${DOCROOT}" cc all
        - drush -r "${DOCROOT}" updb -y
        - drush -r "${DOCROOT}" fra --force -y
        - drush -r "${DOCROOT}" scr private/scripts/quicksilver/recurly_dummy_accounts.php

        # Clean up temp files used during the build
        - rm -rf /tmp/* /var/tmp/*

  # What to call the service hosting MySQL. This name also acts as the
  # hostname to access the service by from the php service.
  mysql:
    image: tugboatqa/mysql:5

In order to get Tugboat to ping GitHub whenever a preview becomes ready for use, make sure you enable the Set Pull Request Deployment Status feature in Tugboat's Repository Settings.

Screenshot of the Tugboat UI with the checkbox for GitHub deployment status notifications checked.
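With that box checked, each Tugboat build shows up as a deployment status on the pull request. If you want to poke at those statuses yourself, they live under the repository's deployments endpoint of the GitHub REST API; the owner/repo values below are placeholders:

```shell
# Build the GitHub API URL that lists a repository's deployments.
# "example-org" and "example-repo" are placeholder values.
OWNER="example-org"
REPO="example-repo"
URL="https://api.github.com/repos/${OWNER}/${REPO}/deployments"
echo "$URL"
# Inspect with (note the same preview media type the workflow uses later):
# curl -s -H "Authorization: token $GITHUB_TOKEN" \
#   -H "Accept: application/vnd.github.ant-man-preview+json" "$URL"
```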

Run tests with GitHub Actions

Over in GitHub Actions, we want to run our tests and add a status message to the relevant commit. To do this we need to know when the Tugboat preview is done building and ready to start testing, and then spin up a Node.js image, install all our Nightwatch.js dependencies, and then run our test suite.

We use the following .github/workflows/nightwatch.yml configuration to do that:

name: Nightwatch tests
on: deployment_status

jobs:
  run-tests:
    # Only run after a successful Tugboat deployment.
    if: github.event.deployment_status.state == 'success'
    name: Run Nightwatch tests against Tugboat
    runs-on: ubuntu-latest
    steps:
      # Set an initial commit status message to indicate that the tests are
      # running.
      - name: set pending status
        uses: actions/github-script@v3
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          debug: true
          script: |
            return github.repos.createCommitStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: context.sha,
              state: 'pending',
              context: 'Nightwatch.js tests',
              description: 'Running tests',
              target_url: "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
            });

      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12'

      # This is required because the environment_url param that Tugboat uses
      # to tell us where the preview is located isn't supported unless you
      # specify the custom Accept header when getting the deployment_status,
      # and GitHub actions doesn't do that by default. So instead we have to
      # load the status object manually and get the data we need.
      # https://developer.github.com/changes/2016-04-06-deployment-and-deployment-status-enhancements/
      - name: get deployment status
        id: get-status-env
        uses: actions/github-script@v3
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          result-encoding: string
          script: |
            const result = await github.repos.getDeploymentStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              deployment_id: context.payload.deployment.id,
              status_id: context.payload.deployment_status.id,
              headers: {
                'Accept': 'application/vnd.github.ant-man-preview+json'
              },
            });
            console.log(result);
            return result.data.environment_url;
      - name: echo tugboat preview url
        run: |
          echo ${{ steps.get-status-env.outputs.result }}
          # The first time you hit a Tugboat URL it can take a while to load, so
          # we visit it once here to prime it. Otherwise the very first test
          # will often timeout.
          curl ${{ steps.get-status-env.outputs.result }}

      - name: run npm install
        working-directory: tests/nightwatch
        run: npm ci

      - name: run nightwatch tests
        # Even if the tests fail, we want the job to keep running so we can set
        # the commit status and save any artifacts.
        continue-on-error: true
        working-directory: tests/nightwatch
        env:
          TUGBOAT_DEPLOY_ENVIRONMENT_URL: ${{ steps.get-status-env.outputs.result }}
        run: npm run test

      # Update the commit status with a fail or success.
      - name: tests pass - set status
        if: ${{ success() }}
        uses: actions/github-script@v3
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            return github.repos.createCommitStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: context.sha,
              state: "success",
              context: 'Nightwatch.js tests',
              target_url: "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
            });
      - name: job failed - set status
        if: ${{ failure() || cancelled() }}
        uses: actions/github-script@v3
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            return github.repos.createCommitStatus({
              owner: context.repo.owner,
              repo: context.repo.repo,
              sha: context.sha,
              state: "error",
              context: 'Nightwatch.js tests',
              target_url: "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
            });

      # If the tests fail we take a screenshot of the failed step, and those
      # get uploaded as artifacts with the result of this workflow.
      - name: archive testing artifacts
        uses: actions/upload-artifact@v2
        with:
          name: screenshots
          path: tests/nightwatch/screenshots
          if-no-files-found: ignore

The one slightly unusual thing I had to do to get this working is use the actions/github-script action to manually query the GitHub API for information about the Tugboat deployment. There's a good chance that there is a better way to do this -- so if you know what it is, please let me know.

The reason is that Tugboat sets the public URL of a preview in the deployment.environment_url property. But this property is currently hidden behind a feature flag in the API, so it isn't present in the deployment object that your GitHub workflow receives. In order to get the URL we want to run tests against, I query the GitHub API with the Accept: application/vnd.github.ant-man-preview+json header. There are other actions for updating the status of a commit that have slightly cleaner syntax, but this workflow is already using actions/github-script, so for consistency I used it to set commit statuses as well.

This Debugging with tmate action was super helpful when debugging the GitHub Workflow. It allows you to open a terminal connection to the instance where your workflow is executing and poke around.
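If you want to try it, the step itself is tiny. This sketch assumes the mxschmitt/action-tmate action; drop it in just before the step you're debugging, and remove it before merging:

```yaml
      # Pause the workflow here and print SSH/web connection details in the
      # job log so you can poke around the runner interactively.
      - name: debug with tmate
        uses: mxschmitt/action-tmate@v3
```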

Our nightwatch.config.js looks like the following. Note the use of the Tugboat URL we retrieved and set as an environment variable in the workflow above, process.env.TUGBOAT_DEPLOY_ENVIRONMENT_URL. Also note the configuration that enables taking a screenshot whenever a test fails.

module.exports = {
  "src_folders": [
    "tests" // Where you are storing your Nightwatch tests
  ],
  "output_folder": "./reports", // reports (test outcome) output by nightwatch
  "custom_commands_path": "./custom-commands",
  "webdriver": {
    "server_path": "node_modules/.bin/chromedriver",
    "cli_args": [
      "--verbose"
    ],
    "port": 9515,
    "timeout_options": {
      "timeout": 60000,
      "retry_attempts": 3
    },
    "start_process": true
  },
  "test_settings": {
    "default": {
      'launch_url': 'http://dme.ddev.site',
      "default_path_prefix": "",
      "persist_globals": true,
      "desiredCapabilities" : {
        "browserName" : "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts" : true,
        "chromeOptions" : {
          // Remove --headless if you want to watch the browser execute these
          // tests in real time.
          "args" : ["--no-sandbox", "--headless"]
        }
      },
      "screenshots": {
        "enabled": false, // set to true if you want to keep screenshots
        "path": './screenshots' // save screenshots here
      },
      "globals": {
        "waitForConditionTimeout": 20000 // sometimes internet is slow so wait.
      }
    },
    // Run tests using GitHub actions against Tugboat.
    "test" : {
      "launch_url" : process.env.TUGBOAT_DEPLOY_ENVIRONMENT_URL,
      // Take screenshots when something fails.
      "screenshots": {
        "enabled": true,
        "path": './screenshots',
        "on_failure": true,
        "on_error": true
      }
    }
  }
};

Finally, to tie it all together, the GitHub workflow runs npm run test which maps to this command:

./node_modules/.bin/nightwatch --config nightwatch.config.js --env test --skiptags solr

That launches the test runner and starts executing the test suite. Ta-da!

Is this even the right way?

While working on this I've found myself struggling to figure out the best approach to all this. And while this works, I'm still not convinced it's the best way.

Here's the problem: I can't run the tests until Tugboat has finished building the preview -- so I need to somehow know when that's done.

In this approach I get around that by enabling deployment_status notifications in Tugboat, listening for them in my GitHub workflow using on: deployment_status, and executing the test suite when I get a "success" notification. One downside is that in the GitHub UI the "Checks" tab for the PR will always be blank: for a workflow to log its results to the Checks tab, it needs to be triggered by a push or pull_request event. I can still set a commit status, which in turn allows for a green check or red x on the pull request, but navigating to view the results is less awesome.

This approach allows for a pretty vanilla Tugboat setup.

It seems like an alternative would be to disable Tugboat's option to automatically build a preview for a PR. Instead, we'd use a GitHub workflow with an on: [push, pull_request] configuration that uses the Tugboat CLI to ask Tugboat to build a preview, wait for the URL, and then run the tests. This would allow for better integration with the GitHub UI, but require more scripting to take care of a lot of things that Tugboat already handles. I would need to not only build the preview via the CLI, but also update it and delete it at the appropriate times.

I do think that much of the Tugboat scripting here would be pretty generic, and I could probably write the workflow to manage Tugboat previews via GitHub Actions once and mostly copy/paste it in the future.

Yet another approach would be to not use GitHub Actions at all, and instead run the tests via Tugboat, then use the GitHub Checks API to report the status of a PR back to GitHub and log the results into the "Checks" tab. However, this looks like a lot of code, and would probably be better if it could be included in Tugboat in a more generic way. Something like a "run my tests" command, a way to parse the standard jUnit output, and a way to log the results to GitHub. ...Or maybe just bypass the Checks UI altogether and instead have Tugboat provide a UI for viewing test results.

I might explore these other options further in the future. But for now... it's working, so don't touch it! Like I said earlier -- it's all duct tape and bubble gum.

Recap

Terminal showing git log --oneline output and a whole list of commit messages that say 'Testing ....'

It took a while to figure this all out, and to debug the various issues on remote machines. But in the end, I'm happy with where things ended up. More than anything, I love having robots run the tests for me once again.
