Nov 26 2020
77 of us are going | Wunderkraut

DrupalCon 2015

People from across the globe who use, develop, design and support the Drupal platform will come together for a full week dedicated to networking, Drupal 8, and sharing and growing Drupal skills.

As we have active hiring plans, we decided that this year's approach should focus on meeting people who might want to work for Wunderkraut and on getting Drupal 8 out into the world.
As a Signature Supporting Partner we wanted as many people as possible to attend the event, and we managed to get 77 Wunderkrauts on the plane to Barcelona! From Belgium alone, 17 people are attending.
The majority of our developers will be participating in sprints (get-togethers for focused development work on a Drupal project), giving it everything they've got together with all the other contributors at DrupalCon.

We look forward to an active DrupalCon week.
If you're at DrupalCon and feel like talking to us, just look for the folks in Wunderkraut carrot t-shirts or give Jo a call on his cell phone at +32 476 945 176.


Nov 26 2020
Watch our epic Drupal 8 promo video | Wunderkraut

How Wunderkraut feels about Drupal 8

Drupal 8 is coming and everyone is sprinting hard to get it over the finish line. To boost contributor morale, we've made a motivational Drupal 8 video that will get contributors into the zone and tackling those last critical issues in no time.

[embedded content]


Nov 26 2020

Once again, Heritage Day was a huge success.

About 400,000 visitors explored Flanders' monuments and heritage sites last Sunday, and the Open Monumentendag website received more than double last year's number of visitors.

Visitors to the website organised their day out using the powerful search tool we built, which allowed them to look for activities and sights at their desired location. Not only could they search by location (province, zip code, city name, km range) but also by activity type, keywords, category and accessibility. Each search request was added as a (removable) filter, helping visitors find the perfect activity.

By clicking the heart icon next to an activity, visitors could draw up a favorites list, ready for printing and taking along as a route map.

Our support team monitored the website, making sure visitors had a great digital experience and a good start to the day's activities.

Did you experience the ease of use of the Open Monumentendag website?  Are you curious about the know-how we applied for this project?  Read our Open Monumentendag case.

Nov 26 2020
Very proud to be a part of it | Wunderkraut

Breaking ground as Drupal's first Signature Supporting Partner

Drupal Association Executive Director Holly Ross is thrilled that Wunderkraut is joining as the first and says: "Their support for the Association and the project is, and has always been, top-notch. This is another great expression of how much Wunderkraut believes in the incredible work our community does."

As a Drupal Signature Supporting Partner we commit ourselves to advancing the Drupal project and empowering the Drupal community. We're very proud to be a part of it, as we enjoy contributing to the Drupal ecosystem (especially when we can be quirky and fun, as CEO Vesa Palmu states).

Our contribution allowed the Drupal Association to:

  • Complete Drupal.org's D7 upgrade - now they can enhance new features
  • Hire a full engineering team committed to improving Drupal.org infrastructure
  • Set the roadmap for Drupal.org success

First Signature Partner announcement in the Drupal Newsletter

By Michèle Weisz


Nov 26 2020

In this post I'd like to talk about one of the disadvantages that we at Wunderkraut pay close attention to.

A consequence of being able to build features in more than one way is that it's difficult to predict how different people interact (or want to interact) with them. As a result, companies end up delivering solutions that, although they seem perfect, turn out in time to be less than ideal and sometimes outright counterproductive.

Great communication with the client and interest in their problems go a long way towards minimising this effect. But sometimes clients realise that certain implementations are not perfect and could be made better. And when that happens, we are there to listen, adapt and reshape future solutions by taking these experiences into account.

One such recent example involved the use of a certain WYSIWYG library from our toolkit on a client website. Content editors were initially happy with the implementation, before they actually started using it to the full extent. Problems began to emerge, leading to editors spending way more time than they should have on editing tasks. The client signalled this problem to us, which we then proceeded to correct by replacing said library. As a result, our client became happier with the solution, much more productive and less frustrated with the experience on their site.

We learned an important lesson in this process and started using the new library on other sites as well. Polling our other clients on the performance of the new library confirmed that it was indeed a good change to make.

Nov 26 2020

A few years ago most of the requests started with: "Dear Wunderkraut, we want to build a new website and ..." - nowadays we are addressed as: "Dear Wunderkraut, we have x websites in Drupal and are very happy with that, but we are now looking for a reliable partner to support & host ...".

By the year 2011 Drupal had been around for just about 10 years. It was growing and changing at a fast pace. More and more websites were being built with it. Increasing numbers of people were requesting help and support with their website. And though there were a number of companies flourishing in Drupal business, few considered specific Drupal support an interesting market segment. Throughout 2011 Wunderkraut Benelux (formerly known as Krimson) was tinkering with the idea of offering support, but it was only when Drupal newbie Jurgen Verhasselt arrived at the company in 2012 that the idea really took shape.

Before his arrival, six different people, all with different profiles, were handling customer support in a weekly rotation system. This worked poorly. A developer trying to get his own job done while dealing with a customer issue at the same time was getting neither job done properly. Tickets got lost or forgotten, customers felt frustrated and problems were not always fixed. We knew we could do better. The job required uninterrupted dedication and constant follow-up.

That's where Jurgen came into the picture. After years of day-job experience in the graphic sector and nights spent on Drupal, he came to work at Wunderkraut and seized the opportunity to dedicate himself entirely to Drupal support. Within a couple of weeks his coworkers had handed over all their cases. They were relieved, he was excited! And most importantly, our customers were being assisted on a constant and reliable basis.

By the end of 2012 the first important change was brought about: having Jurgen work closely with colleague Stijn Vanden Brande, our sysadmin. This team of two ensured that many of the problems that arose could be solved extremely efficiently. With Wunderkraut being both the hosting party and the Drupal party, no needless discussions with a hosting provider took place and, moreover, the hosting environment was well known. This meant we could find solutions with little loss of time, and we know that time is an important factor when a customer is under pressure to deliver.

In the course of 2013 our support system went from a well-meaning but improvised attempt to help customers in need to a fully qualified division within our company. What changed? We decided to classify customer support issues into: questions, incidents/problems and change requests and incorporated ITIL based best practices. In this way we created a dedicated Service Desk which acts as a Single Point of Contact after Warranty. This enabled us to offer clearly differing support models based on the diverse needs of our customers (more details about this here). In addition, we adopted customer support software and industry standard monitoring tools. We’ve been improving ever since, thanks to the large amount of input we receive from our trusted customers. Since 2013, Danny and Tim have joined our superb support squad and we’re looking to grow more in the months to come.

When customers call us for support we do quite a bit more than just fix the problem at hand. First and foremost, we listen carefully and double-check everything to ensure that we understand the customer correctly. This helps take the edge off the huge pressure our customer may be experiencing. Beyond that, we have a list of do's and don'ts for valuable support.

  • Do a quick scan of possible causes by getting a clear understanding of the symptoms
  • Do look for the cause of course, but also assess possible quick-fixes and workarounds to give yourself time to solve the underlying issue
  • Do check if it's a PEBKAC (problem exists between keyboard and chair)
  • and finally, do test everything within the realm of reason.

The most basic don'ts that we swear by are:

  • Never, ever apply changes to the foundation of a project.
  • Support never covers a problem that takes more than two days to fix; at that point we escalate to development.

We are so dedicated to offering superior support that, on explicit request, we cater to our customers' customers. Needless to say, our commitment to support has yielded remarkable results and plenty of customer satisfaction (which makes us happy, too).

Nov 26 2020

If your website is running Drupal 6, chances are it's between 3 and 6 years old now, and once Drupal 8 comes out, support for Drupal 6 will drop. Luckily the support window has recently been prolonged until 3 months after Drupal 8's release. But still, that leaves you only a small window of time to migrate to the latest and greatest. But why would you?

There are many great things about Drupal 8, with something for everyone to love, but that should not be the only reason to upgrade. The tool itself will not magically improve the traffic to your site, nor convert its users into buying more stuff; it's how you use the tool that counts.

So if your site is running Drupal 6 and hasn't had large improvements in recent years, it might be time to investigate whether it needs a major overhaul to be up to par with the competition. If that's the case, think about brand, concept, design, UX and all of that first, to understand how your site should work and what it should look like. Only then can we decide whether to go for Drupal 7 or Drupal 8.

If your site is still running well you might not even need to upgrade! Although community support for Drupal 6 will end a few months after the Drupal 8 release, we will continue to support Drupal 6 sites, work with you to fix any security issues we encounter, and collaborate with the Drupal Security Team to provide patches.

My rule of thumb is that if your site uses only core Drupal and a small set of contributed modules, it's fine to build a new website on Drupal 8 once it comes out. But if you have a complex website running on many contributed and custom modules, it might be better to wait a few months, maybe a year, until everything becomes stable.

Nov 26 2020

So how does customer journey mapping work?

In this somewhat simplified example, we map the customer journey of somebody signing up for an online course. If you want to follow along with your own use case, pick an important target audience and a customer journey that you know is problematic for the customer.

1. Plot the customer steps in the journey

[Image: customer journey map, step 1]

Write down the series of steps a client takes to complete this journey. For example “requests brochure”, “receives brochure”, “visits the website for more information”, etc. Put each step on a coloured sticky note.

2. Define the interactions with your organisation

[Image: customer journey map, step 2]

Next, for each step, determine which people and groups the customer interacts with, like the marketing department, copywriter and designer, customer service agent, etc. Do the same for all objects and systems that the client encounters, like the brochure, website and email messages. You’ve now mapped out all people, groups, systems and objects that the customer interacts with during this particular journey.

3. Draw the line

[Image: customer journey map, step 3]

Draw a line under the sticky notes. Everything above the line is “on stage”, visible to your customers.

4. Map what happens behind the curtains

[Image: customer journey map, step 4]

Now we'll plot the backstage parts. Use sticky notes of a different color and collect the people, groups, actions, objects and systems that support the on stage part of the journey. In this example these would be the marketing team that produces the brochure, the printer, the mail delivery partner, the website content team, IT departments, etc. This backstage part is usually more complex than the on stage part.

5. How do people feel about this?

[Image: customer journey map, step 5]

Now we get to the crucial part. Mark the parts that work well, from the perspective of the person interacting with them, with green dots. Mark the parts where people start to feel unhappy with yellow dots. Mark the parts where people get really frustrated with red dots. What you'll probably see is that your client starts to feel unhappy much sooner than employees or partners do. It could well be that on the inside people are perfectly happy with how things work while the customer gets frustrated.

What does this give you?

Through this process you can immediately start discovering and solving customer experience issues because you now have:

  • A user centred perspective on your entire service/product offering
  • A good view of opportunities for innovation and improvement
  • Clarity about which parts of the organisation can be made responsible for producing those improvements
  • A shareable format that is easy to understand

Mapping your customer journey is an important first step towards customer centred thinking and acting. The challenge is learning to see things from your customers' perspective, and that's exactly what a customer journey map enables you to do. Based on the opportunities you identify from the customer journey map, you'll want to start integrating the multitude of digital channels, tools and technology already in use into a cohesive platform. In short: a platform for digital experience management! That's the topic of our next post.

Nov 26 2020

In combination with the FacetAPI module, which allows you to easily configure a block or a pane with facet links, we created a page displaying search results containing contact type content and a facets block on the left hand side to narrow down those results.

One of the struggles with FacetAPI is the URLs of the individual facets. While Drupal turns the ugly GET 'q' parameter into clean URLs, FacetAPI just concatenates any extra query parameters, which leads to Real Ugly Paths. The FacetAPI Pretty Paths module tries to change that by rewriting them into human-friendly URLs.

Our challenge involved altering the paths generated by the facets, but with a slight twist.

Due to the project's architecture, we were forced to replace the full view mode of a node of the bundle type "contact" with a single search result based on the nid of the visited node. This was a cheap way to avoid duplicating functionality and wasting precious time. We used the CTools page manager to take over the node/% page and added a variant that is triggered by a selection rule based on the bundle type. The variant itself doesn't use the panels renderer but redirects the visitor to the Solr page, passing the nid as an extra query argument. This resulted in a path like this: /contacts?contact=1234.

With this snippet, the contact query parameter is passed to Solr which yields the exact result we need.

/**
 * Implements hook_apachesolr_query_alter().
 */
function myproject_apachesolr_query_alter($query) {
  if (!empty($_GET['contact'])) {
    $query->addFilter('entity_id', $_GET['contact']);
  }
}

The result page with our single search result still contains facets in a sidebar. However, the URLs of those facets looked like this: /contacts?contact=1234&f[0]=im_field_myfield.... Now we faced a new problem: the ?contact=1234 part was conflicting with the rest of the search query. This resulted in an empty result page whenever our single search result, node 1234, didn't match the rest of the search query! So we had to alter the paths of the individual facets to make them look like this: /contacts?f[0]=im_field_myfield.

This is how I approached the problem.

If you look carefully at the API documentation, you won't find any hooks that allow you to directly alter the URLs of the facets, and gutting the FacetAPI module is quite daunting. I started looking for undocumented hooks, but quickly abandoned that approach. Then I realised that FacetAPI Pretty Paths actually does what we wanted: alter the paths of the facets to make them look, well, pretty! I just had to figure out how it worked and emulate its behaviour in our own module.

It turns out that most of the facet-generating functionality is contained in a set of adaptable, loosely coupled, extensible classes registered as CTools plugin handlers. Great! This meant I just had to find the relevant class, extend it, and override the relevant methods with our custom logic.

Facet URLs are generated by classes extending the abstract FacetapiUrlProcessor class. FacetapiUrlProcessorStandard extends the base class, implements its abstract methods and already does all of the heavy lifting, so I decided to take it from there. I just had to create a new class, implement the right methods and register it as a plugin. In the folder of my custom module, I created a new folder plugins/facetapi containing a new file called url_processor_myproject.inc. This is my class:

/**
 * @file
 * A custom URL processor for MyProject.
 */

/**
 * Extension of FacetapiUrlProcessor.
 */
class FacetapiUrlProcessorMyProject extends FacetapiUrlProcessorStandard {

  /**
   * Overrides FacetapiUrlProcessorStandard::normalizeParams().
   *
   * Strips the "q" and "page" variables from the params array.
   * Custom: Strips the 'contact' variable from the params array too
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    return drupal_get_query_parameters($params, array('q', 'page', 'contact'));
  }

}

I registered my new URL Processor by implementing hook_facetapi_url_processors in the myproject.module file.

/**
 * Implements hook_facetapi_url_processors().
 */
function myproject_facetapi_url_processors() {
  return array(
    'myproject' => array(
      'handler' => array(
        'label' => t('MyProject'),
        'class' => 'FacetapiUrlProcessorMyProject',
      ),
    ),
  );
}

I also included the .inc file in the myproject.info file:

files[] = plugins/facetapi/url_processor_myproject.inc

Now I had a new registered URL processor handler, but I still needed to hook it up with the correct Solr searcher, on which FacetAPI relies to generate facets. hook_facetapi_searcher_info_alter() allows you to override the searcher definition and tell the searcher to use your new custom URL processor rather than the standard one. This is the implementation in myproject.module:

/**
 * Implements hook_facetapi_searcher_info_alter().
 */
function myproject_facetapi_searcher_info_alter(array &$searcher_info) {
  foreach ($searcher_info as &$info) {
    $info['url processor'] = 'myproject';
  }
}

After clearing the cache, the correct path was generated for each facet. Great! Of course, the paths still don't look pretty and contain those way too visible and way too ugly query parameters. We could enable the FacetAPI Pretty Paths module, but because we implemented our own URL processor, it would cause a conflict: the searcher uses either one class or the other, not both. One way to solve this would be to extend the FacetapiUrlProcessorPrettyPaths class instead, since it is derived from the same FacetapiUrlProcessorStandard base class, and override its normalizeParams() method, as sketched below.
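
A minimal sketch of that alternative, assuming FacetAPI Pretty Paths is enabled; the class name here is hypothetical, and it would still need to be registered via hook_facetapi_url_processors() and selected in hook_facetapi_searcher_info_alter(), exactly as shown earlier:

/**
 * Extension of FacetapiUrlProcessorPrettyPaths.
 */
class FacetapiUrlProcessorMyProjectPretty extends FacetapiUrlProcessorPrettyPaths {

  /**
   * Overrides FacetapiUrlProcessorPrettyPaths::normalizeParams().
   *
   * Strips the 'contact' variable in addition to whatever the Pretty Paths
   * processor already strips, keeping its path handling intact.
   */
  public function normalizeParams(array $params, $filter_key = 'f') {
    $params = parent::normalizeParams($params, $filter_key);
    return drupal_get_query_parameters($params, array('contact'));
  }

}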

But that's another story.

Nov 25 2020

The Drupal project uses the PEAR Archive_Tar library, which has released a security update that impacts Drupal. For more information please see:

Multiple vulnerabilities are possible if Drupal is configured to allow .tar, .tar.gz, .bz2, or .tlz file uploads and processes them.

To mitigate this issue, prevent untrusted users from uploading .tar, .tar.gz, .bz2, or .tlz files.

This is a different issue than SA-CORE-2019-012. Similar configuration changes may mitigate the problem until you are able to patch.

Nov 25 2020

In my first blog post, I introduced the Custom Elements module: a solution for soft-decoupled Drupal! Following that, I want to put more emphasis on the frontend side of the stack in this blog post.

Selecting a frontend framework

With the backend producing custom elements markup, we need a frontend framework that is able to work with it. While there are many frameworks that play nicely with custom elements, we want to pick a well-established solution with a large user base. That way, it's more likely that the framework stays relevant and well maintained in the long term. It also becomes much easier to find integration libraries, learning materials or developers who know the framework.

According to The State of JavaScript 2019: Front End Frameworks the top 3 frameworks developers have used are React, Vue.js and Angular:

  • Angular is not liked much by the developers who have used it, and the majority of them won't use it again (see the survey). Besides that, it's provided as a whole MVC framework with lots of things potentially not needed here. So it's not the best fit.

  • React has a very strong user base, but does not play that well with custom elements. While there are solutions like react-slot, they are not as common or well maintained. And personally, I don't like the "everything is JavaScript" approach so much.

  • Vue.js, on the other hand, comes with built-in template parsing that can render custom elements data easily. Like React, it utilizes a virtual DOM, and it is well adopted and continuously growing. Finally, Vue.js single file components follow a template-based, web-standard oriented approach.

  • Since web components build upon custom elements, they might seem the perfect fit. However, they are not, since they are not well adopted and server-side rendering of web components is not a well-solved issue (yet?).

Vue.js

Thus, Vue.js turns out to be the ideal candidate for our decoupled Drupal stack built upon custom elements. Moreover, I like it for the following:

  • It's approachable and easy to use. It has great documentation and sticks to web standard oriented notations.

  • It comes with a set of incrementally adoptable companion libraries for further needs, like routing or state management.

  • It's fast! It weighs only 20kB gzipped and scores well in rendering benchmarks.

  • Just like Drupal, it's a community backed open-source project, not depending on a single large corporation. Check out the Vue.js team and its sponsors.

Nuxt.js

Once one decides on Vue.js, one quickly stumbles over Nuxt.js - the intuitive Vue framework. It comes with a set of useful conventions - inspired by Next.js - that make it very easy and enjoyable to get started with development. It provides all the necessary setup for JavaScript transpilation and CSS processing, and it improves performance by handling route-based automatic code-splitting and link prefetching. Thankfully, it follows Vue.js principles, so it emphasizes ease of use and provides great documentation.

Finally, it's built upon a powerful, modular architecture that allows features to be provided as re-usable Nuxt.js modules - which makes it a great addition to Drupal. While the number of modules is nowhere near comparable to Drupal's, there are many useful modules available, like PWA generation, Google Analytics or Tag Manager integration, and the usual frontend/CSS framework integrations with Bootstrap, Material UI or Tailwind CSS. Check out the module directory at modules.nuxtjs.org for more.

Nuxt.js deployment options

One of the nice features of Nuxt.js is that it's really easy to deploy your project in various ways, all from the same code base - just with a different deployment configuration. The options are:

  • Static generation
    Generate a static site and leverage the Jamstack approach for dynamic functionality and content previews.

  • Server Side Rendering
    Render the markup using a Node.js server or Serverless functions.

  • Single Page Application
    Just serve a pre-built JavaScript application to users and let the user's browser take over rendering.

That way, one can choose the best-suited deployment target per project. Small sites without a large number of pages can very easily be statically generated and benefit from fast, cheap and secure hosting, e.g. via providers like Netlify, Amazon S3 or GitHub Pages. If SEO is not required, running as a Single Page Application does away with the need for re-building after content changes and allows for highly dynamic, user-specific content and app-like UIs.

More demanding projects, requiring SEO and having a large amount of content, can use server-side rendering with any Node.js-compatible hosting provider like Platform.sh or Heroku. Alternative options would be specialized hosting providers like Vercel or AWS Amplify, or deploying via the Serverless framework to various serverless cloud providers.

The frontproxy approach

Finally, I'd like to mention that we also developed a custom, more advanced approach that becomes possible with custom elements markup: lupus-frontproxy is a small PHP-based script that serves content as custom element markup combined with a pre-generated app shell (site header and footer). That way, large, content-heavy sites can easily run without a Node.js server driving the frontend, while still providing decent SEO based upon the custom element markup delivered (Google just ignores custom elements and indexes the contained markup). However, with the rise of easy and affordable hosting options for server-side rendering, such a custom-built, uncommon solution seems unnecessary and not really worth the effort.
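
To illustrate the idea only (this is not the actual lupus-frontproxy code; the backend URL, shell file and placeholder token are assumptions), a minimal front proxy could fetch the custom elements markup from the backend and inject it into the pre-generated app shell:

<?php

// Hypothetical minimal front proxy sketch.
$backend = 'https://backend.example.com';
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// The app shell holds the site header/footer and the frontend JS bundle.
$shell = file_get_contents(__DIR__ . '/shell.html');

// Fetch the custom elements markup for the requested path from the backend.
$content = file_get_contents($backend . $path);

// Embed the markup into the shell; search engines index the contained
// markup, while the browser-side framework takes over rendering.
echo str_replace('<!--content-->', $content, $shell);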

Summing up

Nuxt.js is a great framework that makes it really easy to build a Vue.js based frontend application that renders the custom elements markup provided by the Drupal backend. Since each custom element is mapped to a Vue.js component, it's the perfect fit for building component-based frontends while keeping the editorial controls in the backend.

Thanks to its flexible deployment options, we can leverage static generation for smaller sites and use server-side rendering for larger, more content-heavy sites.

Following up

Since there are so many more details to talk about, I'll be following up with further blog posts in the coming weeks, covering the following topics:

  • Authentication & related architecture variants. Custom Elements markup & JSON formats.

  • A developer introduction: Creating components, adding Custom Element processors, the relationship to Drupal's render API, Custom routes and optimizing cache metadata.

  • Handling blocks & layout builder, content previews, forms, caching & performance optimizations.

Finally, I'm going to talk more about this stack at DrupalCon Europe 2020 in my session "Custom Elements: An alternate Render API for decoupled Drupal" on December 08 at 09:15 - so mark your calendars!

Wolfgang Ziegler

On drupal.org:

fago

Drupal evangelist since 2005, Drupal Core contributor & contrib maintainer, TU-Vienna graduate (Information & Knowledge Management)

Nov 25 2020

DrupalCon Europe 2020 is just around the corner and we’re super excited to attend, speak and contribute to this year's virtual event for all things Drupal.

This year's event is chock-a-block with great ways to expand your skills, network with the community and contribute your talents to making Drupal even better. Everyone is welcome and all experience levels are encouraged to contribute.

Here’s what our developers are sharing this year. 

Tuesday, December 08


Check out Nick O'Sullivan’s talk Decoupled Translations with Drupal and learn how to utilise Drupal’s exceptional multilingual features to produce a fully localised Gatsby site — 13:30 to 13:50 CET

Wednesday, December 09


Join Dan Lemon for his panel discussion and workshop, Retrospective: How did the COVID-19 Crisis Affect Client Relationships and What Can We Take Out of It?. This interactive workshop, facilitated by a team of agile coaches, will use the retrospective format so we can share and learn from our experiences of the COVID-19 crisis. We will celebrate achievements and collect insights on what we can learn and improve for the future — 16:15 to 17:15 CET

Thursday, December 10
 

These days, digital web solutions more and more often face the challenge of becoming out of date and suffering technical debt that stymies innovation. Break the cycle and catch Philipp Melab in the panel discussion Sustainable Practices for Building and Maintaining the Open Web — 9:15 to 9:55 CET

Learn how to take an existing Drupal site, quickly add CypressIO and start writing end-to-end tests in Fran Garcia-Linares' talk Add End to End Tests to Your Drupal Site in One Hour with CypressIO — 10:35 to 10:55 CET

Then join Dan Lemon’s Building a Platform to Bring People Together to Celebrate Drupal for the story of how CelebrateDrupal.org came to be — 11:30 to 12:10 CET

Friday, December 11
 

Make sure to set aside some time and join us for Friday's Contribution Day, an open-source get-together for focused collaborative work. Contributions range from organizing events and designing user interfaces to translating text and verifying bugs. There will be opportunities to help out all throughout the conference, but Friday is a dedicated contribution day.

Don’t forget to check out the full schedule of 119 sessions, four in-depth interactive workshops, four keynotes, 42 BoFs, a myriad of contribution topics, fun social events and more.  

Register today! We’ll see you there!
 

Nov 25 2020

Building websites that are completely mistake-proof is often considered a massive undertaking, and many times it is not properly executed. Since there are so many parameters to fulfil and so many corners to oversee, mistakes tend to happen. You may not even realise that you have done something wrong in the development process; it could be much later, when you actually undergo a website audit, that you come across the mistake that has been made.

Drupal websites are equally prone to mistakes. Despite the CMS being one of the best, there are still occasions when things go wrong and the impact is felt on engagement, conversions and, consequently, sales.

To not let that happen, I have compiled a list of mistakes that you need to steer clear of when building or working on a Drupal website. These are errors and oversights that many developers and content authors have experienced first-hand, and you can certainly try to learn from their mistakes.

So here are the most common mistakes witnessed on Drupal websites.

[Image: a pencil erasing an error, with the mistakes to avoid in Drupal listed as bullets]


Where can you go wrong with the content? 

A good website is often considered to be one with outstanding content, since that is what keeps the audience engaged and leads to conversions. Content is crucial for a website, both on the front end and the back end, so it should be one of the priorities in the website design process. Because of this, there are a number of areas where developers can go wrong with content.

The first common mistake witnessed in content architecture is defining content types that you do not actually use. Unused content types just burden your database, and I am certain you would not want additional tables in your database for three content types that you do not even use. Having content types with no nodes has the same effect. Performing a content inventory will help you get this mistake resolved.

Moving on from the unused to the used: content structures are extremely valuable for the editors who are going to fill them up, and if they end up confused, what will be the point of it all? Standardising your content types is going to help you a great deal.

  • Strike off the content types that are similar to each other, like news and articles;
  • Do not add new fields for every content type;
  • And most importantly, plan the structure beforehand; winging it may not work for your website content.

Content types affect the performance of your website as well. So, if you do not want to drain your site's performance by adding unnecessary complexity, you would be wise to avoid these mistakes.

What about your display mismanagement?

After content comes its display, and Drupal is the best in that game. With many different features available for use, Drupal can make your display game strong as well, provided you capitalise on it wisely.

Creating a view for every list is both impractical and a waste of time. Focus on reusing views as much as possible, along with parameter-based rendering, as the snippet below illustrates.
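
A minimal illustration of the idea (the view name, display ID and argument are hypothetical): one generic view with a contextual filter can serve many lists, instead of a separate view per list.

// Reuse a single 'articles' view in several places by passing different
// contextual filter values, rather than cloning the view for each list.
$build = views_embed_view('articles', 'embed_1', 42); // 42 = example term ID.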

Do you store PHP code in your database? If so, avoid doing that; instead, write all code in the modules or themes themselves.

Planning, optimisation and segregation are essentially the building blocks of a great website display:

  • Planning which content types you need to render;
  • Optimising the views you already have;
  • And segregating logic from presentation.

These three will have a visible effect on your display architecture.

What aspects of functionality can make your site lag behind?

The functionality of a Drupal site depends on the number of modules used and the way they interact with each other. Your code, and how much of it you use, is a major contributor to that.

The most common mistake in this sense is ignoring coding standards. This becomes more of a problem when there is more than one developer and everyone is writing code in a different style. In such a situation, not only is the standard lost, it also becomes difficult for one developer to understand another's code. Adhering to Drupal's Coding Standards is therefore a great help in keeping the code uniform and making the functionality a breeze.

Another obstacle to functionality is unorganised patches. Code modifications and bug fixes mandate the implementation of patches; however, patches become a problem whenever there is an update. You can forget to re-apply a patch, or forget to change it in accordance with the new version. This can very well affect the functionality of your website, so organising your patches is essential.

Having too many modules, too many roles and too much custom code, despite there being contrib modules for the same functionality, is bound to affect the functionality as well. Evaluate and re-evaluate your site's needs to overcome these functionality hindrances.

Are your performance and scalability not up to par?

User experience is directly proportional to the performance of your website; the more streamlined the performance is, the richer the UX will be.

Here are three scenarios that can impact your performance in all the wrong ways.

  • The foremost is improper JS/CSS aggregation settings. Aggregation is supposed to combine and compress the JS and CSS files that modules add to the HTML, leading to lower load times and higher performance, and you will be saying goodbye to that with improper aggregation settings.
  • The next mistake is inundating your site with too many modules. Drupal may have numerous modules to offer, but you can't be using too many of them. Doing so will only slow you down and hamper your security as well. Only keep the modules that you actually use; messing up your code, performance, overheads and security is simply not worth it.
  • A sound cache strategy also goes a long way in enhancing performance. Caching too soon, caching at the wrong levels and not knowing what and when to cache all contribute to lowered performance (see the sketch below).
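
As a minimal illustration of deliberate caching (assuming Drupal 8 or later, since the post doesn't name a version), render arrays can carry explicit cache metadata declaring what the output depends on and how long it may live:

// A render array with explicit cache metadata (Drupal 8+).
$build = [
  '#markup' => t('Hello, personalised world!'),
  '#cache' => [
    // Vary the cached output per user role rather than caching per user.
    'contexts' => ['user.roles'],
    // Invalidate whenever node 42 changes, instead of guessing a lifetime.
    'tags' => ['node:42'],
    // Allow caching for at most one hour.
    'max-age' => 3600,
  ],
];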

Drupal websites can scale to millions of users, and that is what makes these sites great. Drupal offers many modules and techniques to enhance performance and scalability, Blazy, Content Delivery Network integration and server scaling being just a few of them. Not taking advantage of these could be deemed a mistake.

Are you facing possible security breaches?

Security has become one of the major concerns amongst website builders. Therefore, protecting your business from the menace of hackers and all the havoc they can cause is paramount. 

You must have your security measures in place; however, there may still be certain areas where you have become complacent, and that just gives hackers the break they need.

  • Primarily, you need to keep your website updated, including all core and contrib modules, whether you actively use them or not. Updating a module means its security fixes are applied, and you make yourself more secure with that. You cannot have your projects falling behind on Drupal's security advisories.
  • You can install the Update Manager module to keep yourself updated; its "Available updates" report will give you a friendly reminder to apply the available security updates.
  • Next on the list of security blunders is not giving Drupal's input filters their due importance. You might have made the Full HTML input format available to every user, or you might have completely disabled HTML filtering. Both of these can allow malicious code to enter your website, and you know what happens then.
  • Along similar lines, many sites also configure their servers improperly, leading to unwanted access. On some occasions, servers are seen displaying their version numbers, which is like giving an open invitation to hackers. Server configuration and permissions should be a priority for every site builder.
  • It is also important to ensure that all the users logging into your site are the ones you want. By implementing a password policy, removing old users and updating staff roles, you will be taking a step towards better security.
  • User roles are quite important in running a website; however, they can become overused quite quickly too, which not only slows down your website but, if they are misconfigured, can also lead to major security breaches.

Drupal has proven to be one of the best CMSs in terms of security; it has you covered from every corner, but only if you let it. From granting secure access and granular user access control to database encryption and preventing malicious data entry, Drupal will keep your site protected, provided you let it.

Have you made any infrastructural oversights?

The infrastructure of your website is determined by the stack you have, which includes the server, the database and software layers like Varnish. Going into development with a firm plan for your infrastructure is the only way to go; an oversight in this area can be quite damaging.

The common mistakes in this area are:

  • Sizing the website's stack extremely large or extremely small.
  • Not preparing for growth by consistently checking the logs for errors and identifying the weak points.
  • Having an ideally sized server but not configuring it properly, which can make traffic bypass Varnish.
  • Allowing remote connections to the server, which makes the website more vulnerable.

Misconfiguration can be avoided by simply using the right tools. MySQLTuner is one among many; its performance suggestions help in improving the infrastructure as well.

Are you following your maintenance goals?

Maintenance of a website starts after development is done and continues for the entire life of the website. Considering this, you have to be very diligent with the maintenance process, as making the wrong moves can be brutal.

Here are some of these wrong moves.

  • Not following Drupal updates is a more common mistake than you would think. By skipping them, you hamper security and make your site vulnerable to Drupalgeddon-style attacks.
  • On the contrary, there are also times when we update the modules but forget to remove the older versions. This too happens a lot of the time and can cause many problems.
  • It is not just the modules that need to be updated; the development environment should also be up to date and friendly for testing.
  • Then there is the code: not using a version control system like Git is a mistake, and even the deployment of files should come directly from it. Using it but not leaving messages for other developers about the changes made can also lead to chaos. It is therefore important to always keep the VCS repository clean.

The crucial aspects of maintenance are time and consistency; only when you do it in a timely manner does it become a worthy practice. Reviewing and assessing your architecture at regular intervals, along with all the logs, be they Apache or Drupal, is a great head start for the maintenance process.

Are you universally accessible?

Websites today need to transcend the parameters that used to confine them and their audience in the past. Sites and applications need to be built on a foundation that makes them fit for each and every user. Drupal has worked towards this: it aims to make websites accessible to all, including people with disabilities, by making itself an all-accessible tool.

Web accessibility has become of the essence today, and persons with disabilities are at the core of it. Websites need to be designed keeping their needs in mind, be it a broken hand or a temporary sight loss. It isn't just me who believes this; the World Wide Web Consortium's guidelines agree as well. The W3C's two sets of guidelines are ardently followed by Drupal, and your website should do the same, thus supporting and fostering inclusion.

Despite its importance, many developers tend to overlook the accessibility features, and that is where they go very wrong.

  • Not focusing on balanced contrast levels that can work under sunlight;
  • Not incorporating form validation error verbiage to aid the visually impaired;
  • Not using buttons over CTAs.

These may seem like minor errors to you, but they can go a long way in making your Drupal site accessible to everyone.

Is your site SEO friendly?

Being SEO-friendly is almost as important as building the website itself. Search engines bring in big numbers of traffic, so optimising for them is crucial; yet we tend to forget to fine-tune the SEO details while focusing on all the other aspects. At the end of the day, a website is an online business, a business cannot survive without its clients, and being SEO-friendly is the way to reach them. Going wrong in this area can be extremely detrimental.

Look at the following aspects of a website and see which of them you are and aren't doing.

  • Are your URLs user-friendly?
  • Are your images small in size, with filled-out alt texts?
  • Are you making your paragraphs short, so the text is easy to scan through?
  • Are you using robots.txt for pages that you do not want crawled?
  • Are you creating an XML sitemap to help Google easily understand the structure of your website?
  • Are you researching your keywords?
  • Are you adding internal links so your less popular pages gain attention through the more popular ones?

A positive answer to all of these means that your SEO game is strong and a contrary answer would let you know your mistakes.

To avoid the contrary from happening, Drupal provides a number of modules to help you capitalise on the SEO front. The SEO Checklist module is proof of that, as it helps you by ensuring that you are following through on the latest SEO practices. Then there are the modules that aid your URL precision, like Redirect, Pathauto and Easy Breadcrumb. From easing the process of tagging to helping in communication with search engines to providing the essentials for editing, Drupal has all the right SEO modules in its corner, and not using them would be a colossal mistake on your part. Read our blog, The Ultimate Drupal SEO Guide 2020, to know more about these.

Can being multilingual pose a problem for you? 

Today, regionally spoken languages are getting more prominence than ever before, especially in the international community. A French website would not be successful in India if it stays in French; not many people there speak the language, so it would have to be in a locally accepted language. Being multilingual, however, also opens the doors for many mistakes to occur.

  • Using the same URL for all of your multilingual websites;
  • Not giving the user a chance to avoid a redirect to the international website;
  • Using an automated translator, instead of actually hiring content authors fluent in the language;
  • Forgetting to translate the embedded parts of the site, like meta tags and descriptions;
  • Not focusing on the foreign market's trends and the keywords appropriate to it;
  • And lastly, not writing the content in accordance with the local language and dialects. You can't be calling ice lollies popsicle sticks in India.

You have to be totally attuned to the language of the region you are targeting for the multilingual project to work.

Is having a multisite presence worth it?

Depending on your business and its needs, having multiple sites can be a good solution for you. However, managing them can become a bit of a hassle and often leads to big blunders.

Some examples of such blunders are:

  • Traffic is one of the major concerns here. Running multiple sites means you have one codebase and many sites on it, so if one is inundated with traffic, all of them could slow down as a result.
  • A mistake in the syntax of one site could mean a mistake in the syntax of all.
  • Updates become a headache as well. For Drupal sites, you have to run update.php in order to update the site, and doing that on multiple sites is going to bring on the headache.
  • And finally, if you do not use Aegir, you are going to regret going multisite.

Is your Decoupled Drupal approach the right one?

Drupal offers impressive front-end technology to make your presentation layer as good as you want, yet it does not include all the front-end technologies on the market. Taking advantage of JavaScript frameworks and static site generators means decoupling Drupal and separating the front end from it. But even if you want to take on decoupling, it may not be the right fit, and decoupled Drupal can then bring more drawbacks than benefits.

  • If you wish to capitalise on Drupal's in-built features, decoupling would be a mistake, since you would end up parting with them.
  • If your front-end requirements and Drupal's front-end capabilities are aligned, taking on decoupling would only be an unnecessary effort on your part.
  • If you have neither the budget nor the resources to tap into the hottest technologies, then wanting them is not going to be fruitful.
  • If you are only publishing content in one place, you have no need for decoupling.

For a detailed explanation, read our blog, When to Move From Monolithic to Decoupled Drupal Architecture.

Finally, what about web hosting, are you doing it the right way?

Web hosting services, which provide your website its own space on the internet, are pretty common. There are far too many web hosts to count, yet the decision to choose one is not easy at all, since there are too many considerations to keep in mind.

Some of the common mistakes to avoid when signing on a web host are:

  • Testing web hosts is not uncommon; it is the right way to know whether they are valuable. However, testing on the website that primarily brings in your traffic could be unwise, especially if you are using a free service, so not testing under a separate registration can be a colossal mistake.
  • Another mistake is trusting a host too easily without having known it for long. Not partnering with one that offers a long trial could be a mistake; the longer the trial period on offer, the more reliable the host is likely to be.
  • Taking on a web host is a huge commitment, so you have to be sure you are in good hands. Not doing your due diligence before committing is not the right way; compare pricing and features, and check whether the host has blacklisted IPs.
  • Not tracking your host's uptime and speed can also be a problem, and not checking what uptime guarantees the host provides would not be wise either. If there is a lapse between the guaranteed and the actual uptime, keeping track gives you the opportunity to ask for compensation.
  • Lastly, you cannot afford not to back up your site, and regularly at that. You will only have a recent version of your files and assets if you back them up.

The Bottom Line 

Every aspect of your website is important; consequently, you have to be mindful of them all. If you are not, mistakes will happen, and they will cost you your site's performance, its security and your potential customers and sales. To keep that from happening, avoid all of the aforementioned mistakes and ensure that your website is impeccably built and maintained on all fronts.

Nov 24 2020

Thanksgiving this year will be different from any we’ve ever experienced, but then again, the same could be said for pretty much every aspect of 2020.

At Promet Source, we attract talent from all over North America and the world, so we had a bit of a head start navigating remote work requirements. We were still mindful, though, every day, of the many ways that Covid-19 was having an impact on our teams and our clients.

During a normal year, we look forward to the opportunity to connect during conferences, events, and client engagements, but as was the case for all of you, we had to count on Zoom calls and technology to do a lot of the heavy lifting. 

We made a point, whenever possible to fill in the gaps, show appreciation, and exercise empathy. More so than ever before, we worked on cultivating the best that is within us, emphasizing truths such as this one: 

Doing good holds the power to transform us on the inside, and then ripple out in ever-expanding circles that positively impact the world at large.  
--Shari Arison

Little surprises and humor went a long way this year. During the height of the pandemic, we designed a t-shirt to encapsulate how we were all feeling about being stuck at home during quarantine while continuing to work remotely. The shirt, modeled on a concert t-shirt, boasts on the front: "Promet Source World Tour - Remote Edition". The joke, of course, is that the location for every tour date listed is "HOME." We sent this t-shirt out to our staff, and what happened as a result was quite unexpected.
 

[Image: Promet 2020 t-shirt]

Promet's 2020 Company t-shirt to commemorate remote working during the pandemic and resulting quarantine.

[Image: Promet team Zoom photo]

Our team members who received their shirts started taking selfies of themselves and Slacking them to the greater team. So we called a quick Zoom meeting and snapped a screenshot of our North American team to suffice for our pandemic team picture during a time we couldn't get together for an actual group photo. 

[Image: Promet Philippines team]

Before we knew it, our worldwide teams started taking pictures of themselves at their desks and in scenic locations, creating new images and sending them to our headquarters in Chicago to share with the rest of Promet.

Something that started out as a generic company t-shirt sent smiles across the world and back during a time when it was most needed, bringing teams together and uniting everyone during trying times.

As we plan for a Thanksgiving holiday that’s likely to have a lot of empty seats, our commitment to each other and to you is that we will continue to do good, stay connected, and practice the power of gratitude.

A happy and healthy Thanksgiving to you, from all of us at Promet Source


 

Nov 24 2020

Do you know the version number of the browser you’re using? What about the operating system version you’re using? Chances are, you have no idea what the current version you’re using of any major software is. And that’s the way it should be.


Throughout the software industry, there is a movement toward more frequent, easier updates. The reasoning is that the easier and more frequent updates are, the more everyone tends to keep software up to date and secure. Soon, you may not know the major or minor version of your website software, just that it is up-to-date and secure, which is the ultimate goal of any update, upgrade or version release.

What version of Drupal am I running?

Chances are that if you're using Drupal, you are on version 7 or 8. In June 2020, version 9 of Drupal was released alongside the last minor version of Drupal 8. Both versions 8 and 9 contain the same features and code; the only difference is that Drupal 9, or D9 as it is referred to in the industry, removed deprecated code and updated third-party dependencies.

[Image: timeline of Drupal 8 releases. Source: Drupal.org - How to prepare for Drupal 9]

The image above shows the timeline for Drupal 8 and its version releases since 2019. The upgrade cycle for minor releases in version 8 was established as roughly two releases a year. Now that Drupal 9 has been released, there is an end date for Drupal 8 support, but that is not scheduled until November 2, 2021. In fact, the upgrade from 8 to 9 is so painless that version 10, due in 2022, will likely arrive to even less fanfare, as it will again be the same as the most recent version 9.

Time to upgrade? Let our Drupal experts help!

Upgrading between all minor versions of Drupal, including the jump to version 9, is advised, and it is a much simpler process than version upgrades have been in the past. See what Drupal.org has to say about it here. If your website was created or relaunched since 2016, it's likely that you're on Drupal 8, and the upgrade should be extremely straightforward and relatively painless.

If you find yourself on version 7 of Drupal, you can absolutely upgrade straight to D9 and skip D8 altogether. The rebuild would likely take the form of an entirely new website, but the benefits of going right to D9 are two-fold: avoiding end-of-life issues for D8 in late 2021 and jumping onto a platform that will enable you to go headless and adopt the best media, personalization, translation and content creation tools that open source has to offer.

Why migrate to Drupal 9?

Running an end-of-life platform comes with a set of problems that, over time, will end up costing your company time and money. When official support for D7 runs out in 2022 (and for D8 in late 2021), security updates, bug fixes and the creation of new modules for those versions will also go away. This leaves your system more vulnerable to cyber attacks, potential downtime and a lack of the up-to-date features that your customers would expect from your web presence.

By jumping right into a new build with D9, you benefit from the long-term official support of the Drupal community, including security releases that help protect your website from various vulnerabilities. D9 also removes outdated code and uses a newer version of PHP, which is important in terms of security.

Other reasons to upgrade to D9, from Drupal.org:

  • Easiest upgrade in a decade, and a commitment to easy upgrades in the future. Never replatform again.
  • Continuous innovation, cutting-edge new features, reaching new digital channels while strengthening our foundation in content management.
  • Dramatically easier tools, intuitive solutions that empower business users to take advantage of the power of Drupal.

Currently, we are on the cusp of the first minor release of Drupal 9, which is planned before the end of the year. Most large ecosystem modules in Drupal have complete support for Drupal 9, including Drupal Commerce version 2.

Tips for Upgrading to Drupal 9

  • Make sure your version of Drupal is up-to-date.
  • Use Upgrade Status to see if your website is ready for upgrading (this module will flag modules that need to be upgraded to Drupal 9).
  • If you have custom code, you can use Drupal Rector to identify code that needs to be updated and, in many cases, have it upgraded automatically (see the example commands below).
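As a rough sketch of what this looks like on a Composer-based project (exact commands, paths and setup steps vary by project layout and tool version, so treat these as illustrative):

  # Install and enable Upgrade Status (report at /admin/reports/upgrade-status).
  composer require --dev drupal/upgrade_status
  drush en upgrade_status

  # Install Drupal Rector and dry-run it against your custom code.
  composer require --dev palantirnet/drupal-rector
  cp vendor/palantirnet/drupal-rector/rector.yml .
  vendor/bin/rector process web/modules/custom --dry-run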

Still not sure that the upgrade to Drupal 9 is right for your organization? Have questions about the best way to handle upgrading your Drupal site? Our team is here to help you answer those kinds of questions. Check out our Drupal and Drupal Commerce Development page for more details on our services, or contact our experts to get started on your Drupal upgrade today.

Need more information on upgrading to Drupal 9? Contact our Drupal experts today to get your project started >

Josh has been creating websites since 2000, has been a Drupal community member since 2008, and is currently the Team Lead of the Investment Division at Acro Media. He spends most of his free time playing board games and yelling at his cat with his wife.

Nov 24 2020
Nov 24

Much has been said about the importance and benefits of migrating to Drupal 8. Migration is the most significant part of the development workflow: we need to ensure that content gets transferred seamlessly without losing any critical user information and data. Check this complete guide for a successful Drupal 7 to Drupal 8 migration.

There are several ways to migrate to Drupal 8 using various sources. We have already written about how to migrate from CSV to Drupal 8. Other sources include SQL, XML, JSON, etc. In this blog, we will be discussing migrating to Drupal 8 using SQL as a source.


Why SQL?

While the choice of data source largely depends on where the existing data to be migrated lives, some of the other common reasons for choosing an SQL source for a Drupal 8 migration are:

  • It is easy to write queries to get the required data by connecting to the database.
  • Migrating the data from one server to another is quicker than with any other method.
  • It reduces the need for many contributed modules.
  • There is no need for Views Data Export, which is used for exporting CSV data from views on Drupal 7 sites.
  • We do not need the Views password field used for migrating sensitive information (password hashes), since we query the database directly.
  • The Migrate Source CSV module is also not needed, since we are using an SQL source.

Let the Migration Process begin!

In this blog, we will be migrating users to a Drupal 8 site. Following are the steps we will take:

  1. Create a custom module for the migration.
  2. Reference the source database.
  3. Define the migration YML and map the identified Drupal fields.
  4. Create a source plugin for the migration.
  5. Process single value, multi-value and address fields.
  6. Run the migration using the drush command line tool.

Step 1: Create a custom module for the Drupal 8 migration.

First, let’s create a custom module, just as you would for any Drupal 8 module. Check this detailed blog for creating custom modules in Drupal 8. Here we are creating a module called “company_employee_migrate”. The module structure is as follows:

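A sketch of the file layout (the file names follow this article; only the source plugin path is fixed by convention):

  company_employee_migrate/
    company_employee_migrate.info.yml
    company_employee_migrate.install
    company_employee_migrate.module
    src/Plugin/migrate/source/CompanyEmployee.php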

company_employee_migrate.info.yml : Consists of basic information regarding the module and its dependencies.

company_employee_migrate.install : This is used for PHP scripts that should run while installing and uninstalling the module. In our case, we delete the migration configuration when the module is uninstalled; see the sketch below.

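A minimal sketch of that uninstall hook (the configuration name matches the migration we define in Step 3):

  <?php

  /**
   * Implements hook_uninstall().
   */
  function company_employee_migrate_uninstall() {
    // Remove the migration configuration when the module is uninstalled.
    \Drupal::configFactory()
      ->getEditable('migrate_plus.migration.company_employee')
      ->delete();
  }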

company_employee_migrate.module : This is used for defining general hooks for the site. These are the initial files needed for the module; we will explain the rest in the next steps.

Step 2: Reference the source database 

Next, we need to set up the source database from which we are extracting the data. Update the settings.php file for your Drupal site at webroot -> sites -> default -> settings.php.

Add the new database connection below the default connection, as sketched below. “migrate” is the key for the source database.

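A sketch of that connection (all credentials here are placeholders):

  // In sites/default/settings.php, below the default connection.
  $databases['migrate']['default'] = [
    'database' => 'source_db',
    'username' => 'db_user',
    'password' => 'db_pass',
    'host' => 'localhost',
    'port' => '3306',
    'driver' => 'mysql',
    'prefix' => '',
  ];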

Step 3: Define the migration YML and map the Drupal fields.

Now, we need to identify the fields we want to migrate and map them in the migration yml. In this example we are migrating User ID, User Name, Password, Email, Status, Created Timestamp, First Name, Last Name, Job Role, Mailing Address, etc.

After identifying the fields, we define the migration at migrate_plus.migration.company_employee.yml. Now let’s have a closer look at the migration yml and its mappings.

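A trimmed-down sketch of what that file might look like (the field names are illustrative and must match your own fields):

  id: company_employee
  label: Company employee user migration
  migration_group: company
  source:
    plugin: company_employee
    key: migrate
  destination:
    plugin: entity:user
  process:
    uid: uid
    name: name
    pass: pass
    mail: mail
    status: status
    created: created
    field_first_name: first_name
    field_last_name: last_name
    field_job_role: job_role
    field_mailing_address: mailing_address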

id: Unique id for the migration yml.

label: Display name for the migration.

migration_group: Migration group name.

source: The machine name of the migration source plugin, as defined in the @MigrateSource annotation in src/Plugin/migrate/source/CompanyEmployee.php.

destination: Name of migration destination plugin. In this case, it's entity:user since we are migrating user entity.

process: Here we map the Drupal fields to the source fields. The left-side values are the Drupal field machine names and the right-side values are the field names we pass from the source plugin.

migration_dependencies: This is an optional key, used only if the migration depends on other migrations.

Step 4: Create a source plugin for migration

The migration source plugin is the heart of the SQL migration. Below is a detailed explanation of the source plugin.

  • The source plugin is created at src/Plugin/migrate/source/CompanyEmployee.php.
  • The namespace is Drupal\company_employee_migrate\Plugin\migrate\source.
  • The @MigrateSource annotation contains the source plugin id that we referenced in the migration definition.
  • Here we extend the SqlBase abstract class provided by the core Migrate module.

See the sketch below for reference.

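A skeleton of the class (method bodies follow in the next sections):

  <?php

  namespace Drupal\company_employee_migrate\Plugin\migrate\source;

  use Drupal\migrate\Plugin\migrate\source\SqlBase;
  use Drupal\migrate\Row;

  /**
   * Source plugin for company employee users.
   *
   * @MigrateSource(
   *   id = "company_employee"
   * )
   */
  class CompanyEmployee extends SqlBase {
    // query(), fields(), getIds(), prepareRow() and baseFields()
    // are shown in the sections that follow.
  }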

The source plugin must implement the following three methods –

query() : This returns the source field data by connecting to the source database. The sketch below returns the field data where the user id is greater than 0 and the user mail ends with “@phyins.com”.

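A sketch, assuming the source is another Drupal 8 database (hence the users_field_data table):

  public function query() {
    // Base user fields, restricted to real users with a company email.
    return $this->select('users_field_data', 'u')
      ->fields('u', array_keys($this->baseFields()))
      ->condition('u.uid', 0, '>')
      ->condition('u.mail', '%@phyins.com', 'LIKE');
  }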


fields() : This method returns the available fields in the source. Below is a sketch of the list of available fields, combined with baseFields().

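A sketch (the extra fields are the ones we load in prepareRow()):

  public function fields() {
    $fields = [
      'first_name' => $this->t('First name'),
      'last_name' => $this->t('Last name'),
      'job_role' => $this->t('Job role'),
      'mailing_address' => $this->t('Mailing address'),
    ];
    return $fields + $this->baseFields();
  }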

getIds() : This method returns the unique ID for a source row. The sketch below declares a user ID of type integer, since uid is unique for each user.

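A sketch:

  public function getIds() {
    return [
      'uid' => [
        'type' => 'integer',
        'alias' => 'u',
      ],
    ];
  }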

Apart from these above-mentioned methods we also have:

prepareRow() : This method is called once for each row. Here we load data from other tables and process it according to the requirement. Any property we create using $row->setSourceProperty() will be available in the process step. We use the Drupal 8 DB query to get the data in prepareRow(); the field-specific snippets are shown in Step 5.
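A skeleton of the method (the lookups themselves are sketched in Step 5):

  public function prepareRow(Row $row) {
    $uid = $row->getSourceProperty('uid');
    // Load and set extra source properties here, e.g. first name,
    // roles and mailing address (see the snippets in Step 5).
    return parent::prepareRow($row);
  }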

baseFields() : This contains an array of the basic fields from “users_field_data” that are used by the query() method. A sketch follows below.

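A sketch:

  protected function baseFields() {
    return [
      'uid' => $this->t('User ID'),
      'name' => $this->t('Username'),
      'pass' => $this->t('Password'),
      'mail' => $this->t('Email address'),
      'status' => $this->t('Status'),
      'created' => $this->t('Created timestamp'),
    ];
  }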

Step 5: Process single value, multi-value and address fields.

In Drupal, we have different types of fields, and processing some of them can get a little tricky for developers while migrating content. I have added code sketches for some fields below:

Single value fields : These include text fields, boolean fields, email, etc. Below is a sketch of setting a single value field; here field_first_name can be set via the first_name source property.

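A sketch, to be placed inside prepareRow() (the field table name is illustrative):

  // Single value: fetch one value and set it as a plain source property.
  $first_name = $this->select('user__field_first_name', 'f')
    ->fields('f', ['field_first_name_value'])
    ->condition('f.entity_id', $uid)
    ->execute()
    ->fetchField();
  $row->setSourceProperty('first_name', $first_name);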


Multi value fields: These include user roles, job roles, multi value select fields, checkboxes, etc. For a multi value field we need to return an array of values. Below is a sketch for the “roles” field.

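Also inside prepareRow(), a sketch:

  // Multi value: set an array of values, e.g. the user's roles.
  $roles = $this->select('user__roles', 'r')
    ->fields('r', ['roles_target_id'])
    ->condition('r.entity_id', $uid)
    ->execute()
    ->fetchCol();
  $row->setSourceProperty('roles', $roles);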


Address Fields: Migrating the address field provided by the Address module is a little different from migrating other fields. We need to preprocess the queried data into an associative array with the proper keys, as sketched below for field_mailing_address.

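A sketch; the keys follow the Address module's property names, and the values ($line1, $city, etc.) are assumed to have been queried beforehand:

  // Address fields expect an associative array using the Address
  // module's property names.
  $row->setSourceProperty('mailing_address', [
    'country_code' => 'US',
    'address_line1' => $line1,
    'locality' => $city,
    'administrative_area' => $state,
    'postal_code' => $zip,
  ]);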


Now we are all set to run the migration after installing the company_employee_migrate module on the Drupal 8 site.

Step 6: Run the migration using drush command line tool

Finally, we are ready for our SQL migration. To run it, we need drush installed (if you are not already using it), along with the Migrate Tools module, which provides the migration commands below.

List of useful drush commands for Drupal migration:

drush migrate-status : This returns the status of all migrations, with details.


drush migrate-import migration_id : This imports the data from the source into the site.


drush migrate-reset-status migration_id : If the execution of the script has stopped or been interrupted, the migration status will display as “Importing”. This command resets the migration status to “Idle” so that we can proceed with the migration import.


drush migrate-rollback migration_id : This is used for rolling a migration back to its previous state.

Nov 24 2020
Nov 24

(Available as freelancer)

Joris Snoek

Business Consultant
/ Drupal Developer

Over the last months we had more than 300 people testing the alpha-3 version of OpenLucius: a social productivity platform built into a Drupal distro. We interviewed them and soon came to the conclusion that the base layouts needed big improvements. The release was received as 'mhew..', and we agreed. So we went to work and released alpha-4 today. We implemented a complete new base theme from scratch: clean, lean, fast and Bootstrap 4 based. The goal is to leave all the room for custom branding and other design needs.

Below you'll find some screenshots and a short explanation of the alpha-4 version. I intentionally kept the screens quite low resolution: it's best to experience the real deal in the live test environment, or go to the project page on drupal.org and install it yourself.


So yeah, we revamped everything:

  1. Overall layout and navigation;
  2. Activity streams;
  3. Social posts;
  4. Messages;
  5. Document management (files and folders);
  6. Likes;
  7. Comments;
  8. Chatrooms (where are they?).

1. Overall layout and navigation

  • All markup was rebuilt from the ground up, based on Bootstrap 4. This way we got rid of all the cluttered HTML and designs; it's now as Bootstrap-native as humanly possible.
  • The main navigation went from the left sidebar to the top of the page, this also leaves room for extensions / extra menu items and new goodies in the future.
  • The basic color scheme is now minimalistic: black and white with primary and secondary 'action colors'.
  • Also, the platform is much more responsive now: ease of use on mobile devices improved beyond compare.

  1. Main navigation, including a 'Groups' drop-down (with activity badges);
  2. Quick add button;
  3. User area;
  4. Group name / page title;
  5. Sections within groups (including activity badges);
  6. Action buttons.

2. Activity streams

  • This is the default homepage, where you'll find all activity from all your groups in one stream;
  • All activity is now clearly bundled per day, per group, instead of one big blurred stream of everything together, as was the case in previous versions;
  • In the right sidebar you'll find all your groups, including activity badges;
  • (Every group now has a dedicated activity stream);
  • (Every user now has a dedicated activity stream).

3. Social posts

  • Clean new design, with great new image experience;
  • Comments and likes (AJAX based) built into freshly styled components.

4. Messages

  • The new screens for message lists and message detail pages;
  • With comments and likes;
  • And likes on comments :) ;
  • The message as well as the comments can be enriched with file attachments.

5. Document management (files and folders)

  • The layout for 'Docs & Files' is also completely rebuilt;
  • It's now based on DataTables, which brings goodies like instant search, sorting and paging;
  • The table is responsive now;
  • It's still possible to manage folders to organise your files.

6. Likes

  • Inline with other UI improvements, the like area is now also nice and clean.

7. Comments

  • Also inline with other UI improvements, the comment area including attachments is improved.

8. What about the chatrooms / channels per group?

Well, in previous versions this was integrated in the activity streams. That wasn't received very well, so we split them up. In this alpha-4 release the activity stream is fully working and restyled, as you can see above.

The realtime chat will be back in a future release, within a dedicated section. For now it's one step back; in a next release it will be two steps forward for the chatrooms / channels. The activity streams already made those two steps forward thanks to this approach.

Wrap up

Alright, there you have it: a major design update for OpenLucius, a social productivity platform built into a Drupal distro. You can download and install it yourself, open source: check out the project page on Drupal.org. If you'd like to try it this instant, go to the product page and hit the 'try now' button.

Always looking forward to new feedback.

Nov 24 2020
Nov 24

Maybe you have banged your head against the wall trying to figure out why, if you add an Ajax button (or any other element) inside a table, it just doesn’t work. I have.


I was building a complex form that needed to render some table rows, nicely formatted, with operations buttons to the right to edit/delete the rows. All this via Ajax. You know when you estimate things and you go like: yeah, simple form, we render a table, add buttons, Ajax, replace with text fields, Save, done. Right? Wrong. You render the table, put the Ajax buttons in the last column and BAM! Hours later, you wanna punch someone. When Drupal renders tables, it doesn’t process the #ajax definition if you pass an element in the column data key.

Well, here’s a neat little trick to help you out in this case: #pre_render.

What we can do is add our buttons outside the table and use a #pre_render callback to move the buttons back into the table where we want them. Because by that time, the form is processed and Drupal doesn’t really care where the buttons are. As long as everything else is correct as well.

So here’s what a very basic buildForm() method can look like. Remember, it doesn’t do anything; it just ensures we can get our Ajax callback triggered.

 /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state) {
    // Requires: use Drupal\Component\Utility\Html; at the top of the class.
    $form['#id'] = $form['#id'] ?? Html::getId('test');

    $rows = [];

    $row = [
      $this->t('Row label'),
      []
    ];

    $rows[] = $row;

    $form['buttons'] = [
      [
        '#type' => 'button',
        '#value' => $this->t('Edit'),
        '#submit' => [
          [$this, 'editButtonSubmit'],
        ],
        '#executes_submit_callback' => TRUE,
        // Hardcoding for now as we have only one row.
        '#edit' => 0,
        '#ajax' => [
          'callback' => [$this, 'ajaxCallback'],
          'wrapper' => $form['#id'],
        ]
      ],
    ];

    $form['table'] = [
      '#type' => 'table',
      '#rows' => $rows,
      '#header' => [$this->t('Title'), $this->t('Operations')],
    ];

    $form['#pre_render'] = [
      [$this, 'preRenderForm'],
    ];

    return $form;
  }

First, we ensure we have an ID on our form so we have something to replace via Ajax. Then we create a row with two columns: a simple text and an empty column (where the button should go, in fact).

Outside the table, we create a series of buttons (1 in this case), matching literally the rows in the table. So here I hardcode the crap out of things, but you’d probably use the same loop as for generating the rows. On top of the regular Ajax shizzle, we also add a submit callback just so we can properly capture which button gets pressed. This is so that on form rebuild, we can do something with it (up to you to do that).

Finally, we have the table element and a general form pre_render callback defined.

And here are the two referenced callback methods:

 /**
   * {@inheritdoc}
   */
  public function editButtonSubmit(array &$form, FormStateInterface $form_state) {
    $element = $form_state->getTriggeringElement();
    $form_state->set('edit', $element['#edit']);
    $form_state->setRebuild();
  }

  /**
   * Prerender callback for the form.
   *
   * Moves the buttons into the table.
   *
   * @param array $form
   *   The form.
   *
   * @return array
   *   The form.
   */
  public function preRenderForm(array $form) {
    // Requires: use Drupal\Core\Render\Element; at the top of the class.
    foreach (Element::children($form['buttons']) as $child) {
      // The 1 is the cell number where we insert the button.
      $form['table']['#rows'][$child][1] = [
        'data' => $form['buttons'][$child]
      ];
      unset($form['buttons'][$child]);
    }

    return $form;
  }

First we have the submit callback which stores information about the button that was pressed, as well as rebuilds the form. This allows us to manipulate the form however we want in the rebuild. And second, we have a very simple loop of the declared buttons which we move into the table. And that’s it.
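One note: the ajaxCallback referenced in the button definition isn’t shown above. Since the Ajax wrapper is the form’s own ID, a minimal version (a sketch) can simply return the rebuilt form:

 /**
   * Ajax callback for the edit buttons.
   */
  public function ajaxCallback(array &$form, FormStateInterface $form_state) {
    // The wrapper is the form ID, so returning the whole form replaces it.
    return $form;
  }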

Of course, our form should implement Drupal\Core\Security\TrustedCallbackInterface and its method trustedCallbacks() so Drupal knows our pre_render callback is secure:

 /**
   * {@inheritdoc}
   */
  public static function trustedCallbacks() {
    return ['preRenderForm'];
  }

And that’s pretty much it. Now the Edit button will trigger the Ajax, rebuild the form and you are able to repurpose the row to show something else: perhaps a textfield to change the hardcoded label we did? Up to you.

Hope this helps.

Nov 24 2020
Nov 24

Acquia Engage is over, but its great lessons and content live on. You can access all of the Engage session recordings on the event site. Here are our top six can’t-miss sessions to catch up on if you couldn’t make it to Engage:

General Session #1

Hear from the leadership of Acquia about all of the new product announcements, features, and strategies for the Acquia Open DXP today and what is planned for the next year. If you are invested in Acquia or are looking to move to the Acquia platform, this session will help you skate to where the puck will be.

Exciting stuff!

General Session #3: King Arthur Baking Company

It was great to see our client and everyone’s favorite flour brand King Arthur featured in General Session #3 (around minute 39). Hear the brand’s web and marketing leads talk about how they worked with Acquia and Third and Grove to fuel a 2,000% increase in sales.

Black Girls Code

You need to hear the remarkable story of Black Girls Code directly from founder Kimberly Bryant in General Session #4 (starting around minute 36). It’s an inspiring story, and it makes us quite proud that our partner, Acquia, supports this program.

We also have serious eyeglass envy over Kimberly’s lenses.

DXP Track Session: Leveraging Drupal 8 and Acquia to Boost B2B Lead Generation and Marketing Velocity

Ahem, we might be a little biased, but this session by our CEO and Acquia Engage award winner CloudHealth Tech by VMware has a bunch of actionable insights to turn your site into a B2B lead generation machine. No good deed goes unpunished in B2B demand generation: next quarter will always require more.

It’s a must-watch for any marketing leader at a B2B tech company.

COVID Has Redefined Scale. Learn How to Adapt, and Prepare for What’s Next

Learn from a real Acquia Drupal organization operating at scale: Charles Schwab. Terrific lessons for dealing with scale and developer agility during the global COVID-19 pandemic.

Building it Better: Best Practices for Site Studio Architecture

On the workshop front, we wanted to call out this session on Site Studio (formerly called Acquia Cohesion). Site Studio is a major differentiator for using Drupal on Acquia, and a leading platform in the rise of the low-code CMS page editors. Attend this workshop to learn best practices on using Site Studio.

Nov 23 2020
Nov 23

Do you own an existing drupal.org project that does not yet have a Drupal 9 compatible release? This week would be a good time to take that step and make a Drupal 9 compatible release! I am paying for two tickets to DrupalCon Europe for new Drupal 9 compatible releases. Read on for exact rules!

DrupalCon Europe is in two weeks already! December 8-11, 2020. It offers 4 keynotes, including the Driesnote and the Drupal Core Initiative Leads Keynote (that I help coordinate), 119 sessions in five different content tracks, 4 workshops, interest group discussions, and networking. These are all included in the 250 EUR (no VAT) ticket. I would love to see you there!

Looking at Drupal 9 compatibility data, while 83% of the top 1000 projects by usage already have a Drupal 9 compatible release, the others do not. If we look at all projects, 43% of them have a Drupal 9 compatible release. This is way better than it was with Drupal 8 at the same point, but we can still do better! 22.5% of all projects (2177 projects in total) need only an info.yml file change and a release, with no other changes required. There is a good chance one of those is yours!

The rules of this giveaway are the following:

  1. The participating project must have existed before this week.
  2. The project must have its first Drupal 9 compatible release this week, before end of Friday.
  3. Selection from eligible projects is random.
  4. The lead maintainer of each winning project will pick who gets the ticket.
  5. In case of a pass, I draw another project.
  6. I will be using my existing script from the #DrupalCares campaign to track the newly Drupal 9 compatible projects.
  7. The script uses the official drupal.org releases dump and takes dates of releases from there.

I'll keep this list up to date throughout the week with who is in the running.


Nov 23 2020
Nov 23

You must be familiar with words like intermediary, middleman and mediator. What do these words mean? Could they possibly denote a job profile? I think they can. An intermediary, a middleman and a mediator all constitute a connection between two parties; they provide a line of communication that lets one access the other with ease. Think of a store manager: he allows the consumer to access the products of the manufacturer. Without him, sales would be pretty difficult.

Bringing the conversation back to the topic at hand, an API is essentially an intermediary, a middleman or a mediator. The Application Programming Interface provides access to your users and clients to the information they are seeking from you. The information provider uses an API to hand the information over to the information user through a set of definitions and protocols. 

In decoupled Drupal, the API layer provides the connection between the separated front-end and back-end layers. Without it, decoupling would not be possible, since there won’t be a way to transmit content from the backend to the presentation layer. 

There are quite a few APIs that perform impressive functions, but today we will only be looking at the REST API. So, let’s delve right in.

What makes REST API important?



REST API, RESTful API or Representational State Transfer is built on the constraints of the REST architecture. It is renowned for making development easy by supporting HTTP methods and error handling, along with other RESTful conventions. Since REST capitalises on HTTP, there is no need to install a separate library or software to take advantage of REST’s design, which makes development all the easier.

When REST API is used to make a request, a representation of the state of the resource is transferred to the requestor. This can be done in numerous formats, namely JSON, HTML, XLT or plain old text, over HTTP.

If you asked me what the best thing about REST is, I would have to say its flexibility. The REST API is designed to be flexible because it is not tied to certain methods and resources; therefore, it can handle multiple types of calls, change structurally and return different data formats. Such versatility makes REST competent to provide for all the diverse needs your consumers may have.

REST cannot be defined as a protocol or a standard; rather, a set of architectural principles would be a more accurate description. These principles are what make an API a RESTful API. So, let us go through these constraints to understand REST API better.

The Segregated Client and Server 

This principle states that the client and the server are to be independent of each other, leading to more opportunities for growth and efficiency. Since this separation of concerns allows a mobile app to make changes without those changes affecting the server, and vice-versa, the organisation can grow far more quickly and efficiently.

The Independent Calls

A call made using REST API is just that: one call. It carries all the data needed for completing a task in and of itself. If a REST API had to depend on data stored on the server for each individual call, it would not be very effective. Because each call carries all the necessary data, REST API is very reliable. This principle is known as statelessness.

The Cacheable Data 

Now you might think that since each REST call carries all this data itself, it would increase your overheads. However, this is a misconception. REST was built to work with caches, meaning responses can be marked as cacheable. This ability drastically reduces the number of API interactions, leading to reduced server usage and, consequently, faster apps.

The Appropriate Interface 

Decoupling mandates an interface that is not tightly coupled to the API, providing uniformity to application development. This can be achieved by using HTTP along with URI resources, CRUD and JSON.

The Layered System 

REST API works with a layered system; what this means is that each server, be it for security or load-balancing, is set into a hierarchy. This constrains component behaviour so that a component cannot see beyond its own layer.

The New Code 

Finally, there is Code on Demand, the principle that gives you the option to transmit code or applets through the API layer, to be executed within the application. This principle allows you to build applications that do not rely solely on their own code. However, security concerns have made it the least used of them all.

These are essentially the guiding principles of REST API; together they also show the work REST can do for you and your application, highlighting its importance.

Exploring REST API in Drupal


Now that you know the importance and principles of REST API, it is time to explore it. REST API can be explored in Drupal through a number of modules; all you have to know is where to look and what exactly to look for. For that reason, here is a list that will make consuming REST API seem like a walk in the park.

Drupal Core Modules

There are certain REST modules that are so popular that they have become a part of Drupal core. These are;

RESTful web services

RESTful Web Services is a module that takes advantage of the Entity API to expose information for all entity types, be it nodes, comments, taxonomy terms or users. Being built on the Serialization module, it allows customisation and extension of the RESTful API. It can also expose additional resources and add authentication mechanisms, which can be applied to any of the resources.

Serialization and Serialization API

The primary purpose of the Serialization module is to serialize and de-serialize data to and from formats such as JSON and XML. You can simply call it a service provider for the same.

The Serialization API is based on the Symfony Serializer Component. Its numerous features have made it quite advantageous to the users. 

  • For one, it can serialize and deserialize data;
  • It helps in encoding and decoding to and from new serialization formats respectively, you can read data and also write it;
  • It can also normalize and denormalize data into and out of a normalization format; a small usage example follows below.
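As a small illustration (a sketch assuming the Serialization module is enabled; the 'serializer' service is standard Drupal 8 API):

  // Serialize a loaded node to JSON using the 'serializer' service.
  $node = \Drupal\node\Entity\Node::load(1);
  $json = \Drupal::service('serializer')->serialize($node, 'json');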

HAL

HAL is an acronym for Hypertext Application Language. This module uses its namesake format to serialise entities. With features similar to the Serialization module, it is often regarded as an extension of it. The HAL hypermedia format can be encoded in JSON as well as XML. Being part of Drupal core, it is the most sought after format.

This is a module that lets you test drive as well. Yes, once it is installed and configured, you can test drive your site through the HAL browser by simply providing JSON data.

HTTP Basic Authentication 

You must be familiar with the term authentication; the working of HTTP Basic Auth is exactly that. What it does is take a request, identify the username and the password of the user, and authenticate them against Drupal. It does so by implementing the HTTP Basic protocol, which encodes the username and the password and adds them in an Authorization header, all within a single request.

It is to be noted that this module does not provide an interface; it acts as a support for Drupal’s Authentication Manager.
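For example, with the core REST and Basic Auth modules enabled and a GET resource configured for content, a request might look like this (URL and credentials are placeholders):

  curl -u username:password \
    -H 'Accept: application/json' \
    'https://example.com/node/1?_format=json'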

The Alternates of Basic Authentication 

Basic Auth is an important module for REST API work; therefore, certain alternatives are also available to be used in its place.

OAuth 1.0

The OAuth 1.0 module implements the OAuth 1.0 standard for use in Drupal. It provides a foundation for modules that want to use OAuth.

In Drupal 8, this module leverages the OAuth PECL extension to implement the authentication provider.

Simple OAuth(Oauth2) & OpenID Connect

Simple OAuth can be described as an implementation of the OAuth 2.0 Authorization Framework RFC. In Drupal, it is a module that makes use of the PHP library OAuth 2.0 Server, which is part of The League of Extraordinary Packages. Let me tell you something about this library so you know how valuable it is: it has actually become a standard for modern PHP. With it being thoroughly tested, you can’t go wrong; still, you would need to check your options when deciding which project to use.

Coming to OpenID Connect, it comes along with OAuth 2.0, being an identity layer on top of its protocol. It helps you verify the identity of the end users along with fetching their basic profile information.

OAuth2 JWT SSO 

The name OAuth2 JWT SSO does clear up notions of what it actually does; all three acronyms are at work. It works with Drupal's very own OAuth 2.0, thanks to its ability to configure Drupal so that both centralized and remote authentication services can be used.

Like its name suggests, it also works with JWT and SSO, which is short for Single Sign On. It can capitalise on any SSO, provided that it uses OAuth2 as its authentication framework, and JWT as its Bearer token. 

Cookie Based Authentication

If you have ever used a website, you know what a cookie actually is. Was it just today that you declined an ‘accept cookies’ request? Cookies help a website recognise users so that they do not have to log in again.

Now, web applications tend to use cookie-based authentication, which each of them implements differently. However, at the end of the day, they will have some cookie set up that represents an authenticated user. The cookie is transmitted along with every request, and the session is deserialized from a store.

REST UI

More than 20,000 sites have been reported to use this module. It is known to be fully feature-packed, and its maintainers think so too.

Coming to its abilities, REST UI provides an interface to configure Drupal 8’s REST module. Thanks to this handy configuration UI, you won’t need to play with Drupal’s configuration import page. This not only benefits novice Drupal users, but also speeds up your configuration by a substantial margin. You can install it using the default approach, Drush or the Drupal Console.

Conclusion 

REST API is pretty versatile in its features, and Drupal has all the necessary modules to consume it in an optimised manner. If you had to choose a thread to hold your front and backend together, I would say that REST API would not let you down. However, that is only possible if you know how to capitalise on it using Drupal. I hope I have enlightened you about the same through this blog. To learn about other web services available in Drupal in addition to REST, read about GraphQL and JSON:API.

Nov 23 2020
Nov 23

4. Track team

This year's DrupalCon will feature well over 100 sessions across five tracks, as well as four deep-dive workshops that are not to be missed (for the first time, these workshops are included in the price of your ticket).

For a number of years, Annertech has provided volunteers to chair tracks. This involves many meetings about the conference to decide what the tracks will be, answering questions from those who wish to speak before sessions are submitted, reviewing and rating sessions after they are submitted (hundreds get submitted, so this is no small task), awarding speaking slots, and ensuring speakers are taken care of during the conference.

Our Director of Projects, Mike King, has been a track chair for many years, including chairing the Agency & Business track this year. In previous years, Mark Conroy, our Director of Development, has chaired the Frontend and Site Building tracks, while Stella has chaired numerous tracks from Agency & Business to Higher Education.

Nov 22 2020
Nov 22

As ADA Accessibility standards adjust to the digital age, websites and all digital properties need to adhere to the latest accessibility compliance standards. 

Legal action in Texas pertaining to ADA web accessibility noncompliance has been on a particularly sharp trajectory. In 2019, Texas saw 239 ADA Title III lawsuits, up from 196 in 2018. This represents a 21.9 percent increase in Texas, and many other states have seen similar or even greater increases in their case volume.

[Chart: ADA Title III lawsuit growth. Source: Seyfarth]

For organizations that have not yet started to pay attention to ADA web accessibility compliance, now is the time to do so. Taking an interest in the newest developments in ADA law and keeping your business up-to-date in the sphere of accessibility is not just good business for any organization that wishes to avoid legal action. It's the right thing to do.

In addition, by providing web-accessible content to the end users of your site, accessibility can increase the value of your website and bring in additional traffic from users who would otherwise not be able to take advantage of the services or products provided by your business.

Basis for ADA Title III Lawsuits

What is an ADA Title III lawsuit? According to ADA.gov, “Title III prohibits discrimination on the basis of disability in the activities of places of public accommodations (businesses that are generally open to the public and that fall into one of 12 categories listed in the ADA, such as restaurants, movie theaters, schools, day care facilities, recreation facilities, and doctors' offices) and requires newly constructed or altered places of public accommodation — as well as commercial facilities (privately owned, nonresidential facilities such as factories, warehouses, or office buildings)—to comply with the ADA Standards.”

In other words, ADA Title III lawsuits come into play when the accommodations in place do not meet acceptable standards. While at first glance this appears to apply only to the physical world, ADA Title III has likewise been applied to websites and all digital properties as equally applicable places where accessibility concerns are vital to consider. The easiest way to address this is to make your website comply with the current Web Content Accessibility Guidelines (WCAG).

WCAG is a series of rules published by the Web Accessibility Initiative, part of the W3C, the primary organization behind international standards for the internet. WCAG 2.1 is the current release of these standards, and complying with them is essential to protecting your organization from lawsuits surrounding ADA Title III violations.

 

WCAG Compliance Imperative

The recent upsurge in lawsuits filed in Texas proves that now, more than ever, it is vital to have a website that follows best practices and compliance standards for web accessibility. The importance of ADA Title III is undisputed in the current environment. When comparing Texas’ 239 lawsuits to California’s much larger number of 4,794, activity in Texas might not seem that significant. What's important to note, however, is the rate of increase within Texas.

[Chart: Increase in ADA Title III lawsuits, 2018 to 2019. “These numbers include Title III lawsuits filed on all grounds — physical facilities, websites and mobile applications, service animals, sign language interpreters, and more.” Source: Seyfarth]

Similarly, the states of Georgia, Pennsylvania, and Illinois have experienced a particularly steep rise in the number of ADA Title III lawsuits filed in 2019 over 2018.

While the cases inspected by this study cover all instances of ADA Title III law, as opposed to web-only suits, it is nonetheless important to keep in mind that this upswing affects all facets of business in an increasingly electronic world, as organizations conduct more and more business online. One particularly visible example is the case of Robles v. Domino’s Pizza, LLC. Legal issues with a company website can carry heavy penalties, and the best way to ensure this does not happen is through an a11y-compliant website that follows modern web standards for accessibility. The anticipated increase in ADA Title III cases over the course of the coming year makes it even more important to pay attention to the legal sphere when it comes to web accessibility.

Now more than ever, organizations are being called to action to create websites that are accessible to people of all abilities. 

Interested in an accessibility audit of your site and learning more how to achieve WCAG Compliance? Contact us today. 

Nov 20 2020
Nov 20

In part 1, I covered the basics of how Amazee Labs thinks about Maintenance and the way we approach automating the parts of the process that are repetitive. I also introduced three categories of problems, as well as the two kinds of maintenance engineering.

In part 2 of this two-part series, I’ll introduce the automation of problem detection and resolution. I’ll also introduce Lagoon Insights and the Amazee Labs Maintenance Tooling. Finally, I’ll explain how this all comes together in our service offering.

Automation of problem detection and resolution


When it comes to corrective maintenance, automation almost always focuses on problem detection. If corrective maintenance has been engaged, it's a good indication that there either wasn’t a preemptive maintenance plan, the preemptive maintenance plan just didn’t cover the particular problem, or the preemptive maintenance plan failed entirely.

Preemptive maintenance is a bit more nuanced due to the fact that it's related to planning maintenance to reduce the chaos of managing corrective maintenance interventions.

The following table breaks down how we think about the automatability of the problem categories we introduced earlier.

[Table: automatability of problem detection and resolution for each of the problem categories introduced earlier]
However, when it comes to fixing these problems, the fixes are unfortunately almost impossible to automate. This is where we bring in engineering expertise to resolve the problems.

Performance category: Because these problems are almost always related to operational expectations not being met, problem detection can be automated. SEO goals, accessibility targets, conversion rates, and technical performance are all covered by pretty great tools that are operated by experts in these fields.

Reliability category: As mentioned, reliability is the site's propensity to be available when someone tries to use it. Again, problem detection is highly automatable. When a site goes down due to platform or infrastructure problems, the hosting provider will often alert you automatically. In most cases, with a reasonable quality hosting provider, the problem will be resolved without the maintenance team's involvement.

Resolutions for reliability problems relating to application configuration or application code can often be automated as well. For instance, one of our policies can remove administrator roles from Drupal users who might have been granted the role by mistake.

Security category: Security problem detection is also highly automatable. This of course does not entirely negate the effectiveness of, or need for, penetration testing or security audits, but a lot of potential problems can be detected. Additionally, the application of updates and patches in the security category is also highly automatable in the right environment. The caveat here is that to achieve this at a reasonable scale, the sites themselves need to meet certain configuration, architecture, and hosting requirements. This category really brings the “cattle not pets” analogy to bear.

Amazee Labs Tooling and the Lagoon Insights


Large parts of the maintenance automation that we have built rest on the Lagoon Insights components of the amazee.io open-source container-based hosting platform. We co-developed a lot of the features in Lagoon Insights together with amazee.io, and contributed these features to the Lagoon open-source project.

Although sites hosted on Lagoon benefit from tighter integration with the insights systems, the Amazee Labs tooling has been developed to be interoperable with hosting providers such as Acquia Cloud, Platform.sh, and Pantheon, as well as legacy infrastructure where Drush can execute commands.

Problems Insights
The problems insights component provides a robust database and API for storing and querying detected problems in an appropriate data structure. 

Facts Insights
While most of this article has focused on problem detection and resolution, another aspect of maintenance engineering is to be able to quickly query and collect fact-based information about the systems under maintenance. So for example, while a problem might be phrased as “The webform module needs a security patch applied”, a fact would be “The webform module is on version 8.3.1”. Facts are objective details about the systems, whereas problems are by their nature the opinionated siblings of facts.  

Scanners
There are four systems that collect the information to populate the Problems and Facts Insight databases.

Drutiny: Drutiny is a generic Drupal site auditing and optional remediation tool. 
Fun Fact: The name is a portmanteau of "Drupal" and "scrutiny." 
Another fun fact: Due to its extensible architecture, we can extend Drutiny to scan more than Drupal sites!

Harbor: Harbor is an open-source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open-source Docker Distribution by adding the functionality usually required by users, such as security, identity and management. Additionally, Harbor has the ability to do vulnerability scanning: it scans images regularly for vulnerabilities and has policy checks to prevent vulnerable images from being deployed.


Trivy: Trivy is a simple and comprehensive vulnerability scanner for containers and other artifacts, suitable for continuous integration environments. The Harbor Scanner Adapter for Trivy is a service that translates the Harbor scanning API into Trivy commands and allows Harbor to use Trivy for providing vulnerability reports on images stored in Harbor registry as part of its vulnerability scan feature.
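As a sketch of what standalone Trivy usage looks like (the image name is a placeholder):

  # Scan a container image, reporting only high and critical findings.
  trivy image --severity HIGH,CRITICAL registry.example.com/mysite:latest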

Fact collectors: We have developed several approaches to automatically scanning a site for the facts we are interested in. We collect everything from PHP versions in use through to module versions, Drupal core version, enabled user counts, etc.

Notifications: Finally, the notification engine takes the information collected and stored in the problems database and radiates it to any of the notification types supported by Lagoon. We typically use Slack for this.

Putting it all together


Having covered both the way we think about maintenance automation and the tools involved in it, how does this all come together in our Managed Web Maintenance service?

When it comes to automation, we always ask: “Are we automating problem detection or problem resolution?”. And of course, sometimes we’re automating other parts of our process, but mainly we focus on problems.

For performance problems, we integrate certain 3rd party tools to detect conversion issues, accessibility issues, page speed issues, etc. Detected problems are analysed by our maintenance team, and are dealt with on a regular basis.

For reliability and security problems, we have developed a number of our own Drutiny policies and facts scanners. These range from detecting whether certain modules are enabled on production sites, through to automated checks for broken links or commonly misconfigured modules. While a fair percentage of the problems are dealt with manually, we're now in the stage of automating problem remediation for security-related issues. This means automatically updating code bases, running automated tests to confirm that a site's critical paths are still operational, and committing the changes to production.

We also rely heavily on Harbor and Trivy to provide us with an infrastructure level view of the software underlying the Drupal instances we maintain. Detected problems are regularly analysed by our engineering team, and remediation plans are presented to customers for execution.

As we continue to develop and extend our maintenance service, our maintenance engineers work on both developing automated ways to detect and resolve problems, as well as doing the manual work of the human intervention when it's needed. This provides the maintenance engineers with critical information on what to automate next, and which features would be the most valuable for the largest number of sites in our portfolio.
 

Did you know? 


The Amazee Labs Managed Web Maintenance scanning software performs:

  • Drupal core status, contrib module status, and best-practice Drupal configuration scanning
  • Application dependency (Symfony, 3rd party libraries, etc) security scanning
  • Container security scanning for customers using containers
  • Performance and uptime monitoring
  • Automatic HTTPS Certificate validation

Our systems can be configured and augmented to perform custom checks specifically for your website context.

Do you have one or many Drupal sites which are missing a preemptive maintenance strategy?
Do you want to learn more about this maintenance service and how it can solve your maintenance pain?
Then reach out to us and let's chat!

Nov 20 2020
Nov 20

Drupal is a renowned name when it comes to website development. The kind of features and control it allows you to have over your content is quite impressive. The traditional Drupal architecture has repeatedly proven its value, and now it is time for the decoupled Drupal architecture to do the same, which it is on the path of doing. A major reason for the increasing popularity and adoption of decoupled Drupal is the freedom to build the front end using the technologies you like.

Since the decoupled Drupal architecture separates the presentation layer from the backend content, an API becomes the thread that holds the two together and keeps them in sync. So, it is important to choose the right one for your project. While REST API and JSON:API are quite sought after, GraphQL has also emerged as a front runner. So, let us find out what exactly GraphQL is, what it can do and how it fits into the decoupled Drupal picture.

Decoding GraphQL 



GraphQL, a query language, came into being about eight years ago; however, its popularity took off after it was made open-source in 2015. Its creator, Facebook, built this unique API layer because it needed a program to fetch data from the entirety of its content repository. It is a system that is easy to learn and equally easy to implement, regardless of the massive proportions of data you may have or the massive number of sources you may want to go through.

When I said that GraphQL is a query language, I meant just that. It is a language that will answer all your queries for data using its schema, which can easily be explored using GraphiQL, an integrated development environment. GraphQL, being a language specific to APIs, is also equipped to manipulate data with the help of a versatile syntax. The GraphQL syntax was created in a way that lets you tailor the requirements and interactions of data to your particular needs. The shining glory of GraphQL is that you only get what you have asked for, nothing more, nothing less. This means that the application will only retrieve data that is essential to the request. The data might have to be loaded from varying sources, but it will still be accurate and precise, succinct to a T and exactly what you sought.

With decoupling becoming a need now more than ever, and content and presentation layers segregating from each other, GraphQL becomes the answer for all data queries, essentially becoming the answer for data retrieval for your API and your web application. A query language and a runtime to fulfil every query within your existing data, GraphQL is powered to paint a complete and totally understandable description of your data using the API. Its robust developer tools have proven to make APIs faster, more flexible and extremely friendly to our developer friends. Therefore, achieving decoupling becomes extremely easy, as GraphQL provides typed, implementation-agnostic contracts between systems.

GraphQL vs JSON:API vs REST API

The power and assistance of APIs have become all the more conspicuous in the decoupled world. With three prominent names taking the lead here, it becomes somewhat tricky to reach the right API conclusion. GraphQL, JSON:API and REST API all have their own virtues, making them great at whatever they intend to do. However, it is almost impossible to talk about one and ignore the other two. This writing would not be complete without a comparison of the three.

If you look at the core functionality of these three APIs, you will realise that GraphQL and JSON:API are much more similar to each other than to REST API, which is a whole other ball game compared to them. Let us look at them.

| Parameters | GraphQL | JSON:API | REST API |
| --- | --- | --- | --- |
| How much data is retrieved? | Does not over-fetch data | Does not over-fetch data | Inundates the user with unnecessary proportions of data |
| How is the API explored? | Has the best API exploration, due to GraphiQL | Uses a browser to explore the API | Performs relatively poorly; navigable links are rarely available |
| How is the schema documentation? | Perfect auto-generated documentation and a reliable schema | Depends on the OpenAPI standard; the JSON:API specification only defines a generic schema | Depends on the OpenAPI standard |
| How do the write operations work? | Write operations are tricky | Write operations come with complete solutions | Writes can become tedious, with multiple implementations |
| How streamlined are installation and configuration? | Provides numerous non-implementation-specific developer tools, but is low on scalability and security | Provides numerous non-implementation-specific developer tools, and is high on scalability and security | Provides numerous tools, but they require specific implementations; comes with good scalability and high security |

GraphQL

With GraphQL and its distinct query fields, the developer is asked to specify each and every desired resource in those fields. You might be wondering why. The answer lies in its exactness: it is because of these explicitly stated desires that GraphQL never over-fetches data. A sample query is sketched below.
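A hypothetical sketch; the exact field names depend entirely on the schema your server exposes:

  query {
    article(id: 1) {
      title
      author {
        name
      }
    }
  }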

Coming to API exploration, GraphQL takes the cake for being the simplest and most conclusive. The fact that a GraphQL query comes with auto-completed suggestions justifies that claim. Moreover, the results are shown alongside the query, giving smooth feedback. Its in-house IDE, GraphiQL, also helps in iterating on queries, aiding developers even more.

If you told me that the GraphQL specification is better equipped at handling reads than writes, I would not object. Its mutations require you to create new custom code every time; you can probably tell how much of a hassle that can be. Regardless, GraphQL can easily support bulk write operations once they have been implemented.

In terms of scalability, GraphQL requires additional tools to capitalise on its full potential, and there are certainly numerous developer tools available, none of which are implementation-specific.

JSON: API 

The problem of over-fetching is not witnessed with JSON:API either; its sparse fieldsets produce an output similar to GraphQL's. However, unlike GraphQL's uncacheable requests, JSON:API can omit the sparse fieldsets when they become too long, and can therefore cache even the longest requests.

With JSON:API, exploration is simple as well; as simple as browsing within a web browser, which is basically what the API acts like here. From scouring through different resources to different fields and debugging, you can do all of that with your browser. Can data retrieval be simpler?

In terms of writes, JSON:API is probably the best of the bunch. It offers a wholesome solution for handling writes by using POST and PATCH requests. Even though bulk support is not available at the moment, it is in the works and will soon be in use.

JSON:API is again similar to GraphQL in that it also provides various developer tools that are not implementation-specific. The fact that its infrastructure resembles that of a website, requiring Varnish and a CDN, makes it different from the former.

REST API 

REST API probably has the worst over-fetching of the bunch. Not only does it mandate multiple requests for one piece of content, it also gives you responses that are often beyond verbose. The data that you end up with is so much more than you asked for and needed.

Again, REST API is completely different from the other two in terms of data exploration, and sadly, this isn't a good kind of different. REST API is highly dependent on the OpenAPI standard, and if you are not adhering to that, things can look a tad bleak for you: you will not be able to count on auto-generated documentation or a validatable, programmable schema. Navigating interactively through high volumes of data is not too impressive either.

Writing data in REST API is quite easy, almost as easy as reading it, using POST and PATCH requests, though every implementation is unique. Bulk support is not on the table.

REST API's infrastructural needs also resemble those of an ordinary website, encompassing Varnish or a CDN. However, its additional tools, although many, mandate customisation before implementation.

The GraphQL and Decoupled Drupal Dynamic 

GraphQL is a language that is still under active development as I write and you read this, but that does not mean it is unstable. Several Drupal websites are already capitalising on GraphQL. The exactness of its responses, along with the always-available introspection layer, makes GraphQL truly worth it.

Let us now understand how it is the perfect answer to Decoupled Drupal by asking all the right questions, or shall I say, queries.

How does Drupal fully capitalise on GraphQL?

Drupal can be rigid; that is a fact we all know. Drupal can also feel like too much, given everything it has to offer. What GraphQL does is give you the ability to create and expose a custom schema that becomes the single roadway to all the data: information, operations and interactions, whatever happens within the system. Suddenly, Drupal does not seem so rigid.

You get a GraphQL module in Drupal, which is built on the webonyx/graphql-php library. What this means is that the module is essentially as feature-complete as the GraphQL specification itself.

  • The module can be used as the basis for creating your very own schema by writing custom code;
  • The module can also be used to extend the already existing schema through its plugin architecture, wherein plugins act as sub-modules;
  • To aid development even more, GraphiQL is included at /graphql/explorer, which acts as a user interface for you;
  • Lastly, there are built-in debugging tools that can issue queries and analyse their responses in real time.

GraphQL is a powerful tool and Drupal has ensured that all its community can easily tap into its power.

The GraphQL Twig module is the next advancement in Drupal. It is generally assumed that GraphQL queries can only be sent over HTTP, but that isn't true; they can be, but there are other ways as well, and this module embodies that. It lets you decouple Twig templates from the internal structures of Drupal, so that maintenance and reuse become easier, without any HTTP involved.

Should we use GraphQL or JSON:API or REST in Drupal?

Before getting into why GraphQL, we have to understand why not REST, and what its limitations are. First of all, the REST UI module is practically essential for setting up the REST module in Drupal, and configuring it can be pretty arduous. More importantly, the primary problem with REST is that it over-fetches information, bombarding you with data you do not need and certainly did not ask for. You might have needed just the title of an article, but the author's user ID is also included in the response. This leads to a cycle of follow-up queries, and you end up with the article's title, its link, its author's name and information, and the entire content of the article. "Over-fetching" is putting it lightly.

Because GraphQL uses APIs in a simpler way, it beats REST endpoints. Rather than exposing single resources with fixed data structures and links between them, it gives you the opportunity to request any selection of data you may need. You can easily query multiple resources on the server side simultaneously, combining the different pieces of data in one single query. Hence, your work as a front-end developer becomes as easy as pie. You could still go for REST if you wanted; it does have its own set of merits.
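As a sketch of what querying multiple resources in one request looks like (again, illustrative field names, not a specific Drupal schema), a single round trip can return an article and an unrelated user side by side:

query {
  article(id: 42) {
    title
  }
  user(id: 7) {
    name
  }
}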

Now, choosing between JSON:API and GraphQL is a more difficult call. The two perform at a level parallel to each other. For instance, installing the JSON:API module is a piece of cake, with absolutely no configuration required. GraphQL's installation is easy as well, but it does need some configuration. Do you see why I said the choice was difficult?

Where decoupling is concerned, JSON:API and GraphQL are much better than REST. Clients do not need server-side configuration to perform content queries. While JSON:API alters every client-generated query by default, GraphQL requires the consumer to hold permissions before it can bypass any access restrictions. There is no right or wrong method here; both have the right filtering tools for decoupling, and both keep your content secure.

When is the GraphQL module the most suitable?

"Only fetch the data that is asked for" could be GraphQL's tagline, which is why it comes in handy in scenarios centred on data retrieval, such as the following:

  • Decoupled Drupal applications, with Drupal as the content repository and a React, Angular or Ember powered front-end.
  • Mobile applications, wherein data storage is the need of the hour.
  • Internet of Things data storage.
  • If you want to retrieve the JSON data from Drupal.
  • And also if you plan to use the Twig Templates in Drupal themes.

The GraphQL module would be perfect in all of these instances.

Why do GraphQL and Drupal fit so well?

GraphQL is considered an excellent fit for decoupled Drupal websites, especially those that store data in entities with fields, where fields hold relationships to other entities. GraphQL helps you curate queries for just the fields you need from an article. This kind of flexibility makes it easy to restructure the object you get back, and to change the display along with it. You can write queries and add or remove fields from the results, all without writing code on the backend. The GraphQL module's ability to expose every entity, including pages, users and customer data, makes all of this seem quite simple. So, it would suffice to say that the decoupling experience would not be the same without GraphQL.

Has GraphQL impacted the conventional server-client relationship?

Traditionally, the server was the dominant party in the server-client relationship; now the client holds more power, and GraphQL has made certain of this. With it, the client need not follow everything the server imposes; it can simply state its needs on a per-request basis. Once the server exposes the data possibilities it can fulfil, it becomes extremely easy for the client to define its needs against that catalogue. And the shape of the values a GraphQL API accepts and returns is exactly the same.

On top of this, GraphQL taps into deeply nested relational data structures, which suit graph models well. GraphQL's schema and query validation can reject overloaded queries, helping to prevent distributed denial-of-service attempts. The benefits of GraphQL in Drupal are indeed many.

What Will the Future Look Like?

GraphQL is not part of Drupal core, being an advanced tool that is a little more complex to use, and there are no signs of that changing. However, there are other things to look forward to. GraphQL v4 for Drupal is one of the most awaited releases of the GraphQL module, bringing numerous improvements to a module that seems to be perpetually evolving. The GraphQL schema will be under the total control of Drupal developers; since schema customisation has been the holy grail of this module, things are looking up and the future looks brighter. GraphQL and Drupal have a long way to go together.
 

Nov 20 2020
Nov 20

Drupal.org has used patches to manage community contribution for many years. But patches are difficult for new users to learn and require the use of the command line. Recently, Drupal.org code has migrated to Gitlab, and we can now use Gitlab and Drupal.org issues to create Merge Requests to share code and fixes with modules and Drupal Core. Here’s an overview of creating a merge request for folks who want all the details.

Drupal Forks are Special

Drupal Gitlab forks are special because they are accessible to everyone with a Drupal.org account. One fork per issue, and anyone can edit the fork and open a merge request to the main project.

Setup your Drupal Account

Before we begin, you must register for an account on Drupal.org to get access to gitlab forks. Next, add an ssh public key to your Drupal.org user profile so you don’t need a password to work. Lastly, agree to the Drupal.org git terms of service so you can access forks.

Create an issue on Easy Breadcrumb

Before you can create forks on Drupal.org’s gitlab, you need an issue! So, start by creating an issue on Easy Breadcrumb.

Once you have the issue, press “Create issue fork”.

Clone Easy Breadcrumb

Next, clone a copy of the module you want to modify. You can get instructions for this by clicking "Version control" near the top of the module's project page.

git clone --branch 8.x-1.x git@git.drupal.org:project/easy_breadcrumb.git
cd easy_breadcrumb

Make Your Changes

Now, edit the code on your own computer to fix the issue. This is the hardest part!

Send Your Work to the Issue Fork on Gitlab

The exact git commands vary slightly from issue to issue. So, check the “view commands” link on the issue to see the commands for your issue, but here are the ones I ran. I got the commit message from the commit credit section towards the bottom of the issue.

git remote add easy_breadcrumb-3174165 git@git.drupal.org:issue/easy_breadcrumb-3174165.git
git fetch easy_breadcrumb-3174165
git checkout -b '8.x-1.x' --track easy_breadcrumb-3174165/'8.x-1.x'
git add .
git commit -m 'Issue #3174165 by kell.mcnaughton, pattsai: How to support limiting depth' --author="git <[email protected]>"
git push --set-upstream easy_breadcrumb-3174165 HEAD

The commands do the following:

  • Add the new fork as a remote.
  • Fetch the new fork.
  • Checkout a branch on that fork.
  • Add your work to that branch.
  • Commit your work to that branch.
  • Push your work to Gitlab.

Open a Merge Request

Once your work is saved to the issue fork on Gitlab, go back to the issue on Drupal.org and click Compare near the top of the issue. Then, click Open merge request!

Now that your request is open, the maintainer of the module just needs to press merge on Gitlab to make the code part of the project!

Nov 19 2020
Nov 19

Our normally scheduled call to chat about all things Drupal and nonprofits will happen TODAY, Thursday, November 19, at 1pm ET / 10am PT. (Convert to your local time zone.)

No set agenda this month, so we'll have plenty of time to discuss whatever Drupal-related thoughts are on your mind. 

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc: https://nten.org/drupal/notes

This free call is sponsored by NTEN.org and open to everyone.

View notes of previous months' calls.

Nov 19 2020
Nov 19

On Tuesday the 10th of November, Jordanians flocked to cast their votes for their representatives in the Jordanian Lower House of Parliament for the next 4 years. 

The first vote was cast at 7:00 am with approx. 28,000 votes in the following 30 minutes.

The Independent Election Commission (IEC), which supervises the election to ensure the integrity of the democratic process, had an additional mandate: Safeguarding the voters during an election season hindered by restrictions due to COVID-19.

Drupal was identified by the IEC as the ideal technology aligned with their needs, ensuring that their teams are supported in achieving their objectives.

 

Drupal succeeds in delivering key objectives:

1. Accessibility To Information Leading To Orderly Voting

As in previous elections, voters had designated vote-casting locations. This time, however, the poor access to information about these locations that had frustrated voters in the past was avoided.

The new IEC Drupal 8 website provided voters with access to this information via a mobile-friendly, simple but powerful search engine.

Voters were able to easily identify their designated ballot casting locations by simply entering their national ID numbers. The search engine would provide voters with their designated polling station from a database that featured a total of 1,824 stations.

This feature proved to be critical towards enhancing clarity, transparency, and access to information vital towards the voting experience.

At one point during election day, 10,510 active users were on the site using the national ID search engine tool to identify their polling station.

The design of the IEC website also helped with accessibility. 

The UI/UX design featured intelligent user journey mapping which optimized the user experience for all website visitors based on their behavior, expectations, and needs.

2. Timely Services and Performance

A total of 403,456 users visited the website during election day and the 48 hours that preceded it, with a total of 581,963 sessions taking place. Sessions on election day alone amounted to 417,708.

Despite the heavy load, the number of requests, and the traffic on election day, website users didn't experience any frustrating page-loading issues: the average page-loading time on mobile was about 4 seconds, and the average server response time was approx. 1 second.

Despite a surge in traffic load, the IEC website didn't suffer from the typical performance and page loading issues.

Traffic was expected to be high and intense during election season. More importantly, the requests and concerns of the website visitors (such as submitting complaints, or requesting information critical to their voting) were time-sensitive, and as such the IEC website couldn’t afford to have any downtime or maintenance issues.

Thanks to Drupal 8’s ability to ensure seamless performance regardless of the traffic load on the website in addition to the 24/7 support team behind the scenes, the IEC was able to successfully deliver all their digital services to the general public via the website with no technical issues arising.

Vardot's IEC project support team was also on standby to deal with any unlikely technical issues that may arise throughout the whole election week.

"Despite facing many technical challenges—due to the internal team's difficulty in acquiring materials—the Vardot team over-delivered. They're an extremely dedicated and results-driven team."

Digital Marketing Consultant @ Independent Elections Commission

3. Superior User Experience

The vast majority of the website visitors (>95%) relied upon their mobile devices to browse and navigate the IEC website.

During the peak traffic period, bounce rates were low (approx. 29%) which is a testament to the successful implementation of user journey mapping during the UI/UX design phase of the IEC website. 

The IEC opted to build their new Drupal site using Varbase CMS because they were impressed with the demonstrated success and simplicity of building enterprise-grade multilingual websites optimized for search engine performance across all languages.

This objective was successfully delivered, as the main source of traffic proved to be organic search results: 40% of traffic came through Google organic search, proving the strength of Drupal websites when it comes to SEO performance and ranking high on popular search engines such as Google and Yahoo.

CMS Buyers Guide

Need help choosing the ideal CMS?

Download our free CMS Buyers Guide!

Nov 18 2020
Nov 18

In our blog post about Innovating Healthcare with Drupal, we talked about using Drupal to deliver an application that improves the healthcare experience for palliative care patients.  Our application was a resounding success.  Then the global COVID-19 pandemic hit, and the need to keep people out of emergency rooms to stop the spread of the coronavirus suddenly became urgent.  Moving the Drupal application out of tightly controlled pilots into wider distribution requires adherence to HIPAA (USA) and PIPEDA (Canada) guidelines to safeguard patient information.  Unfortunately, the tried and tested Drupal DevOps and hosting environments we’ve become accustomed to don’t come close to providing the level of security required as a platform to become compliant with HIPAA or PIPEDA.  This is where the MedStack hosting service comes in to save the day.

MedStack is an application hosting platform that provides ISO 27001 compliance for the environment in which your application resides, but not for the application itself.  The interesting feature of MedStack is that their environment can spin up any Docker image, producing a hosting platform that conforms to privacy requirements while giving you the freedom to write your application in any language that can be run on a Docker image.  It is up to you, the application developers, to ensure you adhere to security best practices within your application to keep it secure.  Among the application security items to consider are password policies, two-factor authentication, private vs. public files, permissions and keeping up with the Drupal security patches. Privacy Impact Assessments (PIA) and Threat and Risk Assessments (TRA) will still have to be done on your applications to ensure they meet the requirements for your healthcare application and what steps are required to remedy any deficiencies.
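To ground a couple of those items, private files and request hygiene are controlled from Drupal's settings.php. A minimal sketch (the path and host pattern are illustrative placeholders, not MedStack-specific values):

// Store uploads outside the web root so patient files are never
// directly addressable by URL.
$settings['file_private_path'] = '/var/www/private';

// Reject requests whose Host header doesn't match the application's
// real domain.
$settings['trusted_host_patterns'] = ['^app\.example\.com$'];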

Docker-based solutions such as Drupal VM, DDev or Lando are widely used in the Drupal development community.  These solutions are excellent for spinning up a feature-rich development environment, eliminating the need for developers to use specific operating systems or to create locally-running LAMP development stacks.  Unfortunately, you can’t use Drupal VM out of the box on Medstack.  MedStack uses its own MySQL Database service to provide the proper HIPAA/PIPEDA compliance and you should streamline your Docker images to be production-configured environments.

The following screenshots should give you some insight into what Medstack provides.

Showing Medstack Control - Container management

Shown here, with some identifying information removed, is Medstack Control, which allows you to set up new clusters, manage the existing Docker services, create new nodes and manage your database servers. What you should note are the details shown in this screenshot: encryption on the network, encryption at rest and encryption in transit. Safeguarding patient data is paramount, and encryption of data at rest and on the network is mandatory. Likewise, this particular application is for a Canadian healthcare network, therefore we have to run in the Central Canada region. We are able to spin up a new cluster in the US or EU, thus satisfying in-country hosting requirements.

Medstack Control - Container properties

Drilling into the Docker service, you’re able to update the service’s configuration, run shell or exec commands in your container, and see the history of events and tasks performed on your environment.  Need Drush?  No problem.  You can execute drush commands in the shell to manage your environment.  Just configure Drush in your Docker image.

Coupling a properly configured Drupal application with Medstack has allowed us to move Drupal into a HIPAA and PIPEDA compliant environment, satisfying the underlying privacy requirements demanded by our healthcare institutions.  We can now focus on the application and leave Medstack to worry about compliance issues.  Working with our healthcare partners, we continue to evolve our Drupal application in the healthcare space.

Nov 18 2020
Nov 18

Have you ever wanted to exclude a block on just one content type? You would think that the existing block visibility plugin for "Content types: [x]" plus the "Negate this condition" checkbox would do it; however, what you quickly learn is that the plugin also returns false for non-node pages altogether! So when you say "show on NOT article", for example, it really evaluates to "show only on node pages that are not articles."

What we need to do is write a custom Condition plugin which has some more expansive logic, so we get what we want--"show on every page EXCEPT if that's a node page of bundle article."

To do this, we first need a custom module. For small things like this, I usually scaffold out a small module called "tweaks" using Drupal Console. In there, you need to create a Condition Plugin (inherits ConditionPluginBase), so create a folder called /src/Plugin/Condition, and in it put a NotNodeType.php with the code from the gist below!

https://gist.github.com/chrisfromredfin/d5b583c30129ac6f98d9687768eb2734
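For reference, here is a minimal sketch of what such a plugin can look like. This is not the gist's exact code: it assumes a module named "tweaks" and reads the node from the current route rather than a context mapping, but the key part is the evaluate() logic, which passes on every non-node page.

<?php

namespace Drupal\tweaks\Plugin\Condition;

use Drupal\Core\Condition\ConditionPluginBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\node\NodeInterface;

/**
 * Provides a 'Not node type' condition.
 *
 * Unlike core's "Content types" condition, this evaluates to TRUE on every
 * non-node page, and on node pages whose bundle is not selected.
 *
 * @Condition(
 *   id = "not_node_type",
 *   label = @Translation("Not node type")
 * )
 */
class NotNodeType extends ConditionPluginBase {

  public function defaultConfiguration() {
    return ['bundles' => []] + parent::defaultConfiguration();
  }

  public function buildConfigurationForm(array $form, FormStateInterface $form_state) {
    $options = [];
    foreach (\Drupal::service('entity_type.bundle.info')->getBundleInfo('node') as $id => $info) {
      $options[$id] = $info['label'];
    }
    $form['bundles'] = [
      '#type' => 'checkboxes',
      '#title' => $this->t('Hide when viewing these content types'),
      '#options' => $options,
      '#default_value' => $this->configuration['bundles'],
    ];
    return parent::buildConfigurationForm($form, $form_state);
  }

  public function submitConfigurationForm(array &$form, FormStateInterface $form_state) {
    $this->configuration['bundles'] = array_filter($form_state->getValue('bundles'));
    parent::submitConfigurationForm($form, $form_state);
  }

  public function summary() {
    return $this->t('Hidden on node types: @bundles', [
      '@bundles' => implode(', ', $this->configuration['bundles']),
    ]);
  }

  public function evaluate() {
    // Non-node pages have no node route parameter, so the block shows.
    $node = \Drupal::routeMatch()->getParameter('node');
    if (!$node instanceof NodeInterface) {
      return TRUE;
    }
    // On node pages, show unless the bundle is one of the selected types.
    return !in_array($node->bundle(), $this->configuration['bundles'], TRUE);
  }

}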

Nov 18 2020
Nov 18

The Authoring Experience Problem

Authoring experience is a huge concern for web teams everywhere. As it should be. The ability for digital publishers to keep up with the rapidly accelerating demands of their users, and the pace of their competitors, largely depends on providing easy-to-use, expressive tools for their teams to create compelling content.

In practice, this means that publishers need the ability to quickly produce rich digital content from a range of possible components: text, images, videos, social media assets, and so on. Authors should be able to mix various elements quickly and effortlessly, with simple controls for managing content flow and layout. Under the hood, content should be structured and predictable. We’ve written about this before: organizations cannot sacrifice structure for flexibility. They need both.

And yet authoring experience has been a major shortcoming of Drupal, often articulated by marketing and editorial leaders as, “Drupal is too complicated for my organization.”

Other platforms from across the entire spectrum of digital publishing systems – from Adobe Experience Manager to Wordpress, Squarespace to WiX – provide rich, intuitive tools for easily creating all kinds of digital content. What Drupal offers in terms of flexibility – powerful APIs, easily customizable structured content, and a huge range of content management features – has come at the expense of the authoring experience.

Layout Paragraphs

A while ago I wrote about Entity Reference with Layout (or ERL), a Drupal 8 module that combines Paragraphs with the Layout API for a more flexible, expressive authoring experience in Drupal. ERL had one concerning drawback: you had to use a special field type for referencing paragraphs. That means ERL wouldn’t “just work” with existing paragraph fields.

Layout Paragraphs, the successor to ERL, fixes that problem. The Layout Paragraphs module combines Drupal Paragraphs with the Layout API and works seamlessly with existing paragraph reference fields.

The Layout Paragraphs module makes it dead simple for authors to build rich pages from a library of available components. It offers drag-and-drop controls for managing content flow and layout, and works seamlessly with existing paragraph reference fields.

Layout Paragraphs features include:

  • Easily configure which content components, or paragraph types, are available to your authors wherever Layout Paragraphs is used.
  • A highly visual, drag-and-drop interface makes it simple to manage content flow and layout.
  • Site admins can easily configure Layout Paragraphs to support nested layouts up to 10 levels deep.
  • A “disabled bin” makes it easy to temporarily remove content elements without deleting them.
  • Layout Paragraphs works seamlessly with existing paragraph reference fields.
  • Authors can easily switch between different layouts without losing nested content.
  • Layout Paragraphs is fully customizable and works with Layout Plugins, Paragraph Behaviors, and other Drupal APIs.

What About Layout Builder?

While there are strong similarities between Layout Paragraphs and Layout Builder, Layout Paragraphs solves a fundamentally different problem than Layout Builder.

Layout Builder, in Drupal core, is a powerful system for managing layouts across an entire website. With Layout Builder, site administrators and content managers can place content from virtually anywhere (including custom blocks) within specific regions of a layout. Layout Builder is extremely powerful, but doesn’t directly solve the problem I mentioned above: “Drupal is too complicated for my organization.”

Layout Paragraphs adds a new interface for effortlessly managing content using Drupal Paragraphs. It is simple, fast, and expressive. Layout Paragraphs is a field widget, and works the same way as other Drupal fields. The Layout Paragraphs module makes it simple for publishers to create and manage rich content from a library of available components: text, images, videos, etc. Layout Paragraphs is completely focused on the authoring experience.

Try It Out

If you want to see Layout Paragraphs in action, simply download the module and give it a try. Setup is fast and easy. From the Layout Paragraphs module page:

  1. Make sure the Paragraphs module is installed.
  2. Download/Require (composer require drupal/layout_paragraphs) and install Layout Paragraphs.
  3. Create a new paragraph type (admin > structure > paragraph types) to use for layout sections. Your new paragraph type can have whatever fields you wish, although no fields are required for the module to work.
  4. Enable the “Layout Paragraphs” paragraph behavior for your layout section paragraph type, and select one or more layouts you wish to make available.
  5. Make sure your new layout section paragraph type is selected under “Reference Type” on the content type’s reference field edit screen by clicking “edit” for the respective field on the “Manage fields” tab.
  6. Choose “Layout Paragraphs” as the field widget type for the desired paragraph reference field under “Manage form display”.
  7. Choose “Layout Paragraphs” as the field formatter for the desired paragraph reference field under “Manage display”.
  8. That’s it. Start creating (or editing) content to see the module in action.

If you’re using Layout Paragraphs in your projects, we’d love to hear about it. Drop a note in the comments section below, or get involved in the issue queue.

Nov 18 2020
Nov 18

In Gatsby you use GraphQL to query data from your backend; the source plugin you use determines how you access data from your source API. The gatsby-source-drupal plugin pulls data from Drupal's JSON:API so it’s queryable with GraphQL inside your Gatsby app, while gatsby-source-graphql is just a wrapper around any backend GraphQL schema. With the latter, we get access to the schema from the GraphQL 3 module for Drupal instead of the one defined by the gatsby-source-drupal plugin.

Let's compare some common query examples from both of these methods.

The queries here are kept simple to focus on specific elements and context.

Let's get a list of 3 nodes of type Article for a link list. Make sure they are published and sorted by sticky and changed.

JSON:API allNodeArticle

Query


query {
  allNodeArticle(
    filter: {
      status: {
        eq: true
      }
    }
    sort: {
      fields: [sticky, changed]
      order: [DESC, DESC]
    }
    skip: 0
    limit: 3
  ) {
    edges {
      node {
        drupal_internal__nid
        title
        path {
          alias
        }
      }
    }
  }
}

GraphQL module nodeQuery

Query


query {
  drupal {
    nodeQuery(
      filter: {
        conditions: [
          { field: "status", value: "1" }
          { field: "type", value: "article" }
        ]
      }
      sort: [
        { field: "sticky", direction: DESC }
        { field: "changed", direction: DESC }
      ]
      offset: 0
      limit: 3
    ) {
      entities {
        ... on Drupal_NodeArticle {
          nid
          title
          path {
            alias
          }
        }
      }
    }
  }
}

The main difference I see in the query is how the filter and sort parameters work. With gatsby-source-drupal we have a set of filters for fields defined on the entity type: id, langcode, status, title, etc.

With gatsby-source-graphql and the GraphQL module I'm able to use "conditions" the way I would with EntityQuery in Drupal, working in PHP. For instance, I could filter my list of nodes by taxonomy term by adding a condition for that field.


...
    nodeQuery(
      filter: {
        conditions: [
          { field: "status", value: "1" }
          { field: "type", value: "article" }
          { field: "field_tags.entity.name", value: "sailboat"}
        ]
      }
    }
...

I could do this in gatsby-source-drupal as well, but it looks like:


...
  allNodeArticle(
    filter: {
      status: {
        eq: true
      }
      relationships: {
        field_tags: {
          elemMatch: {
            name: {
              eq: "sailboat"
            }
          }
        }
      }
    }
...

It might just be because I come from the backend first that I find the nodeQuery way more intuitive. I like that you can customize the query with conditions, conjunctions and groups. https://drupal-graphql.gitbook.io/graphql/queries/filters

Sometimes I use the Drupal Paragraphs module to make flexible columns of text. In this very simple example I've named the paragraphs field field_components and there are 2 types of paragraphs:

  • Columns: has one field field_components, another paragraph reference field.
  • Text: has field_heading and field_text

On my Article content type I added a paragraphs field called "field_components" as well. I can create an article with a "Columns" paragraph, and inside the "Columns" paragraph I can add "Text" paragraphs. The Columns paragraph becomes a wrapper and can be styled to cause the nested Text paragraphs to lay out as columns.

In the query, I've aliased the paragraph fields to "column_wrapper" and "columns", so it's a little clearer than field_components in the data.

JSON:API nodeArticle

Query


query($id: Int!) {
  nodeArticle(drupal_internal__nid: { eq: $id }) {
    relationships {
      column_wrapper: field_components {
        ... on paragraph__columns {
          relationships {
            columns: field_components {
              ... on paragraph__text {
                field_heading {
                  processed
                }
                field_text {
                  processed
                }
              }
            }
          }
        }
      }
    }
  }
}

GraphQL module nodeById

Query


query($id: String!) {
  drupal {
    nodeById (
      id: $id
    ) {
      ... on Drupal_NodeArticle {
        column_wrapper: fieldComponents {
          entity {
            ... on Drupal_ParagraphColumns {
              columns: fieldComponents {
                entity {
                  ... on Drupal_ParagraphText {
                    fieldHeading {
                      processed
                    }
                    fieldText {
                      processed
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

You can see some differences in the way the schema identifies the fields: one uses relationships and field_components, while the other uses fieldComponents and entity.

JSON:API: nodeArticle has arguments for all the node fields (langcode, status, created, etc.), while the GraphQL nodeById has only id and langcode. But if you're not looking for a specific node ID and want to filter on something else, use the more flexible nodeQuery.

In my previous post I talked about using Drupal to process and serve derivative images instead of having Gatsby process them all at build time, which can take a very long time as the number of posts/images grows.

I compared the two source queries for this as well, and I think the differences are significant in this case. You can review that post for the details but I'll briefly summarise.

The goal was to load a set of image styles from an image field. Doing this with gatsby-source-drupal and JSON:API requires an additional Drupal contrib module to expose the image styles. The GraphQL module just handles it with derivative(style: [name]) on the field, which can be aliased to query all the styles you wish.


...
  fieldImage {
    small: derivative(style: small) {
      width
      url
    }
    medium: derivative(style: medium) {
      width
      url
    }
    ...

The resulting data from GraphQL was easier to process, while the structure of the data from the JSON:API version kind of didn't make sense and required some extra massaging.

In general the gatsby-source-drupal plugin gives us queries in the form of "entityBundle" or "allEntityBundle" like the following:

  • nodeArticle, allNodeArticle
  • paragraphColumns, allParagraphColumns
  • taxonomyTermTags, allTaxonomyTermTags

These take arguments mapped to the entity fields.

The GraphQL module instead provides us with entity queries on each entity type that we can use just like an EntityQuery in Drupal PHP, as well as entityById and entityRevisionById.

  • queryNode, nodeById, nodeRevisionById
  • queryUser, userById
  • blockContentQuery, blockContentById, blockContentRevisionById

So far I've been able to do the things I want with either solution, but I find myself favoring the GraphQL module. I assume that's from my backend roots. Lucky for us, the GraphQL explorer makes it all pretty easy to explore. Please let me know in the comments if I've said anything preposterous, or if you have any additional insights. Thanks for reading!

"Code like nobody is watching... but then refactor. ;)"

Nov 18 2020
Nov 18

From the consumer perspective, there’s never been a better time to build a website. User-friendly website platforms like Squarespace allow amateur developers to bypass complex code and apply well-designed user interfaces to their digital projects. Modern site-building tools aren’t just easy to use—they’re actually fun.

For anyone who has managed a Drupal website, you know the same can’t be said for your platform of choice. While rich with possibilities, the default editorial interface for Drupal feels technical, confusing, and even restrictive to users without a developer background. Consequently, designers and developers too often build a beautiful website while overlooking its backend CMS.

Drupal’s open-ended capabilities constitute a competitive advantage when it comes to developing an elegant, customer-facing website. But a lack of attention to the needs of those who maintain your website content contributes to a perception that Drupal is a developer-focused platform. By building a backend interface just as focused on your site editors as the frontend, you create a more empowering environment for internal teams. In the process, your website performs that much better as a whole.

UX principles matter for backend design as much as the frontend

Given Drupal’s inherent flexibilities, there are as many variations of CMS interfaces as there are websites on the platform. That uniqueness is part of what makes Drupal such a powerful tool, but it also constitutes a weakness.

The editorial workflow for every website is different, which opens an inevitable training gap in translating your site’s capabilities to your editorial team. Plus, despite Drupal’s open-source strengths, you’ll likely need to reinvent the wheel when designing CMS improvements specific to your organization.

For IT managers, this is a daunting situation because the broad possibilities of Drupal are often overwhelming. If you try to make changes to your interface, you can be frustrated when a seemingly easy fix requires 50 hours of development work. Too often, Drupal users will wind up working with an inefficient and confusing CMS because they’re afraid of the complexity that comes with building out a new interface.

Fortunately, redesigning your CMS doesn’t have to be a demanding undertaking. With the right expertise, you can develop custom user interfaces with little to no coding required. Personalized content dashboards and defined roles and permissions for each user go a long way toward creating a more intuitive experience.

Improving your backend design is often seen as an additional effort, but think of it as a baseline requirement. And, by sharing our user stories within the Drupal community, we also build a path toward improving the platform for the future.

Use Drupal’s Views module to customize user dashboards

One of the biggest issues with Drupal’s out-of-the-box editorial tools is that they don’t reflect the way any organization actually uses the CMS. Just as UX designers look to provide a positive experience for first-time visitors to your site, your team should aim for delivering a similarly strong first impression for those managing its content.

By default, Drupal takes users to their profile pages upon login, which is useful to... almost no one. Plus, the platform’s existing terminology uses cryptic terms such as “node,” “taxonomy,” and “paragraphs” to describe various content items. From the beginning, you should remove these abstract references from your CMS. Your editorial users shouldn’t have to understand how the site is built to own its content.

Powering Our Communities homepage

In the backend, every Drupal site has a content overview page, which shows the building blocks of your site. Offering a full list that includes cryptic timestamps and author details, this page constitutes a floodgate of information. Designing an effective CMS is as much an exercise in subtraction as addition. Whether your user’s role involves reviewing site metrics or new content, their first interaction with your CMS should display what they use most often.

Manage News interface

If one population of users is most interested in the last item they modified, you can transform their login screen to a custom dashboard to display those items. If another group of users works exclusively with SEO, you can create an interface that displays reports and other common tasks. Using Drupal’s Views module, dashboards like these are possible with a few clicks and minimal coding.

By tailoring your CMS to specific user habits, you allow your website teams to find what they need and get to work faster. The most dangerous approach to backend design is to try and build one interface to rule them all.

Listen to your users and ease frustrations with a CMS that works

Through Drupal Views, you can modify lists of content and various actions to control how they display in your CMS. While Views provides many options to create custom interfaces, your users themselves are your organization’s most vital resource. By watching how people work on your site, you can recognize areas where your CMS is falling short.

Drupal content dashboard

Even if you’ve developed tools that aimed to satisfy specific use cases, you might be surprised the way your tools are used. Through user experience testing, you’ll often find the workarounds your site editors have developed to manage the site.

In one recent example, site editors needed to link to a site page from within the CMS. Without that functionality, they would find the URL by viewing the source code in another tab and copying its node ID number. Anyone watching these users would find their process cumbersome, time-consuming, and frustrating. Fortunately, there’s a Drupal module called Linkit that was implemented to easily eliminate this needless effort.

There are many useful modules in the Drupal ecosystem that can enhance the out-of-the-box editorial experience. Entity Clone expedites the content creation process. Views Bulk Operations and Bulk Edit simplify routine content update tasks. Computed Field and Automatic Entity Label take the guesswork out of derived or dependent content values. Using custom form modes and Field Groups can help bring order and streamline the content creation forms.

Most of the time, your developers don’t know what solutions teams have developed to overcome an ineffective editorial interface. And, for fear of the complexity required to create a solution, these supposed shortcuts too often go unresolved. Your backend users may not even be aware their efforts could be automated or otherwise streamlined. As a result, even the most beautiful, user-friendly website is bogged down by a poorly designed CMS.

Once these solutions are implemented, however, you and your users enjoy a shared win. And, through sharing your efforts with the Drupal community, you and your team build a more user-friendly future for the platform as well.

Nov 18 2020
Nov 18

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for Drupal core to fix a Remote Code Execution (RCE) vulnerability. You can learn more in the security advisory:

Drupal core - Critical - Remote code execution - SA-CORE-2020-012

Here you can download the Drupal 6 patch to fix it, or a full release ZIP or TAR.GZ.

If you have a Drupal 6 site, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

FYI, there were other Drupal core security advisories made today, but those don't affect Drupal 6.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Nov 18 2020
Nov 18

Install the latest version:

Versions of Drupal 8 prior to 8.8.x are end-of-life and do not receive security coverage.

Additionally, it's recommended that you audit all previously uploaded files to check for malicious extensions. Look specifically for files that include more than one extension, like filename.php.txt or filename.html.gif, without an underscore (_) in the extension. Pay specific attention to the following file extensions, which should be considered dangerous even when followed by one or more additional extensions:

  • phar
  • php
  • pl
  • py
  • cgi
  • asp
  • js
  • html
  • htm
  • phtml

This list is not exhaustive, so evaluate security concerns for other unmunged extensions on a case-by-case basis.
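As a rough starting point for that audit (a sketch assuming the default public files location; adjust the path and the extension list for your site, and review every match manually):

# List uploaded files whose names contain a dangerous extension
# followed by at least one more extension.
find sites/default/files -type f \
  | grep -E '\.(phar|php|pl|py|cgi|asp|js|html|htm|phtml)\.[^.]+$'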

Nov 18 2020
Nov 18
Generating a username automatically on a Drupal site

First of all, generating usernames automatically on your Drupal site is very useful for site admins and website visitors alike. Let's start with the fact that today, to use the services of a site, users have to log in.

As a rule, registration includes filling in the following data:

Nov 18 2020
Nov 18

Faceted search offers users a superior search experience by displaying filters against their search results. It is particularly useful for websites with large catalogues and listings. Once the user types in their search query, they will be presented with a list of relevant filter options to further narrow down their search. These filtering elements are facets.

Previously known as Facet API in Drupal 7, the Facets module in Drupal 8 gives your website faceted searching abilities. Facets also supports Drupal 9! Let’s look at configuring and implementing faceted search with Drupal’s own search framework, Search API.

Faceted Search

What is Faceted Search?

If your users find it hard to see what they are searching for even after keying in their search query, they are bound to get frustrated. Faceted search provides users with multiple filters at the same time for the various attributes of the content. The facets provided are based on the search query the user has executed. Facets will also display the number of matched results (usually within brackets) next to each filter. Let’s take a look at the screenshot below to understand facets better.

Facets Module for Drupal 8

In one of our recent Drupal 8 projects, a quick search for homes in Columbia on this website presents you with facets like Communities, Hot Deals, Quick Move-ins and more. You will also see the count of the results next to each facet. So, a query with the keyword “Columbia” is sent to the search server to retrieve the already configured and indexed categories (Communities, Hot Deals, etc.).

Installing the Facets Module for Drupal 8

As previously discussed, we will be implementing Faceted search using Drupal’s Search API module. 

Step 1: Enabling the modules

Install and enable these modules
•    The Facet Module 
•    Search API module

Step 2: Creating Content Types

Create the content you would like to include in the faceted search by adding Content types as shown below. You can also use the default content types provided by Drupal.

Step 2: Create Content Types


Step 3: Configuring the Search server

Navigate to Configuration -> Search and metadata -> Search-API from the admin interface to configure your search server. Give a name to your search server (here - data server). 

Step 3: Configure the Search server

Step 4: Configuring the Search Index

Next, configure the search index to improve the search performance. Navigate to Configuration -> Search and metadata -> Search-API -> Index -> data_index.

Step 4: Configuring the Search Index

  • Give a name to your index and select Content as your datasource, since we will be indexing content entities here.
  • Move on to the next section, Configuring the Datasource (here, Content). You can choose to index all the bundles or only a select few from the list.
  • Next, select the server you created earlier (here, data server). Select the “Index items immediately” option to begin the indexing process. Click on Save.

Configuring the Datasource


Step 5: Adding Fields for Indexing

Next, we need to add fields to be indexed. Navigate to Configuration -> Search and metadata -> Search API -> data index and select the Fields tab. Click on the Add fields button to create fields according to your requirements.

Step 5: Adding Fields for Indexing


Step 6: Indexing the Content

In the same location, click on the View tab to start the process of indexing your content. In the Start Indexing Now section, click on the Index Now button. It will then show you a progress bar with the status of the number of items that have been indexed.

Step 6: Index the Content

Step 7: Creating a View

Now we will be creating a view for the data that needs to be indexed and displayed to your users. Navigate to Structure -> Views -> Add View.

Step 7: Creating a View

  • Give a name for the View.
  • Under the View Settings dropdown list, select the index that you created in Step 4.
  • Create a page for your search results by clicking on the Create a page checkbox under the Page Settings tab. Give it a name and a path.
  • Under Items to Display, select 0 if you want to display all the results on one page; otherwise, select the number of results to be displayed.
  • Under Page Display settings, select the format in which you want to display your results: Table, Grid, HTML list or Unformatted list. We have selected Unformatted list here. Click on Save.

Step 8: Adding Fields to the View

Here we will be adding the fields that we indexed earlier to the View.
  • Go to the View and click on the Add button next to the Fields section. Select the fields, then click on Add and Configure.
  • Under Render Settings, select the Link to Content checkbox so that the displayed results are clickable.
  • Click Save.

Step 8: Adding Fields to the View


Step 9: Configuring the Facets

Now let’s begin configuring and enabling the facets. Navigate to Configuration -> Search and metadata -> Facets.

Click on the Add Facet button.

Step 9: Configuring the Facets

  • Select the Facet Source: this will be the View you created previously.
  • Select the Field: this lists the fields you added for indexing in Step 5.
  • Give a name to the facet.
  • Click on Save.

Next, you will see more configuration options for displaying the facets (as shown in the image below). Widgets lists a number of options like list of links, array, dropdown, etc.; you can choose what suits your website best.
  • Select “Transform entity ID to label” to avoid displaying the machine name of the content type.
  • Click on Save.

Configuring the Facets

Step 10: Placing the Facet blocks in the chosen page regions 

Next, place the facets you created as blocks in a page region of your choice.
  • Navigate to Structure -> Block Layout.
  • Select the region of the page where you would like to place the block containing the facets. Here, we are selecting Sidebar.
  • Click on the Place Block button next to the Sidebar.
  • In the next dialog box, search for the facet name and click on Place Block.

Step 10: Placing the Facet blocks in the chosen page regions


  • In the Configure Block section, enter the search page path you created previously (here, “site-search”).
  • Give a display name for your block and select the Display title checkbox if you want the block name to be displayed (here, Type).
  • Click on Save Block.

The Result

And just like that, your faceted search page and functionality is ready! Notice the Facet called Type (display name) that has Basic page and Article listed as content types to filter against.

Result: The Faceted Search Page
Nov 17 2020
Nov 17

Drupal 8 has a lot of built-in functionality, and one of its most important features is configuration synchronization. It helps us migrate a full setup properly on site deployment. But there is a catch. According to the Drupal 8 CMI documentation,

The Configuration Manager module in Drupal 8 provides a user interface for importing and exporting configuration changes between a Drupal installation in different environments, such as Development, Staging and Production, so you can make and verify your changes with a comfortable distance from your live environment.

The same idea appears in this article,

Perhaps the most important concept to understand is that the configuration system is designed to optimize the process of moving configuration between instances of the same site. It is not intended to allow exporting the configuration from one site to another. In order to move configuration data, the site and import files must have matching values for UUID in the system.site configuration item. In other words, additional environments should initially be set up as clones of the site. We did not, for instance, hope to facilitate exporting configuration from whitehouse.gov and importing it into harvard.edu.

So we still depend heavily on the Features module. But we can use CMI (the Configuration Management Interface) between two different sites with a simple hack.

CMI works based on the site UUID. If the sites have different UUIDs, it won’t work, so replacing the destination site’s UUID with the source site’s UUID solves the problem.

Just follow the steps below to use CMI between two different sites:

1. Export Configuration from your source site (Site A)

2. Extract the file, open the system.site.yml file, and get the source site's (Site A) UUID

3. Run the drush command in your destination site (Site B)

drush config-set "system.site" uuid "Your Source Site UUID here"

4. After that, try as usual Import Process in destination site (Site B)

Site B will now accept Site A's configuration, so we can migrate all the configuration into the other site.
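Put together, the round trip looks roughly like this with Drush (a sketch; the UUID value is a placeholder):

# On the source site (Site A): export the configuration.
drush config-export

# On the destination site (Site B): adopt Site A's UUID, then import.
drush config-set "system.site" uuid "SOURCE-SITE-UUID"
drush config-import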

FYI: I am not sure whether this is an effective/proper way to do it. If there is any problem with this method, please mention it in the comments.

Nov 17 2020
Nov 17

Recently, I wanted to place blocks in a sidebar region with dynamic weights, meaning the blocks should render in a different position on each page request. I searched and tried lots of methods but unfortunately couldn't find a proper way to do it. So I decided to do it in a somewhat hacky way.

Drupal 8 provides a preprocess hook for regions, template_preprocess_region, which prepares the values passed to the region template. I planned to use this hook to alter the position in which blocks are rendered within the region.

Adding the following code to your THEMENAME.theme file solves the problem:

function themename_preprocess_region(&$variables) {
  if ($variables['region'] == 'sidebar_second') {
    $variables['elements'] = shuffle_assoc($variables['elements']);
    $content = '';
    foreach ($variables['elements'] as $key => $value) {
      // Only render child elements; skip render array properties like '#region'.
      if (strpos($key, '#') !== 0 && is_array($value)) {
        $content .= \Drupal::service('renderer')->render($value);
      }
    }
    $variables['content'] = array(
      '#markup' => $content,
    );
  }
}

/**
 * Shuffles an associative array while preserving its key-value pairs.
 */
function shuffle_assoc($list) {
  if (!is_array($list)) {
    return $list;
  }

  $keys = array_keys($list);
  shuffle($keys);
  $random = array();
  foreach ($keys as $key) {
    $random[$key] = $list[$key];
  }
  return $random;
}

It is working well on my site, but I know it's a hacky way and I'm not sure about the proper way to do it. If any of you know, kindly share it in the comments :)

Nov 17 2020
Nov 17

Twig can be extended in many ways; you can add extra tags, filters, tests, operators, global variables, and functions. You can even extend the parser itself with node visitors.

In this blog, I am going to show you how to create new custom Twig filters in Drupal. As an example, we are going to create a filter that removes numbers from a string, explained with a hello_world module.

Create hello_world folder in modules/custom/ folder with the following files,

1. hello_world.info.yml // It contains the usual module .info.yml values; check here for more details

2. hello_world.services.yml // It contains the following lines:

services:
  hello_world.twig_extension:
    class: Drupal\hello_world\TwigExtension\RemoveNumbers
    tags:
      - { name: twig.extension }

3. src/TwigExtension/RemoveNumbers.php // It contains the following:

<?php

namespace Drupal\hello_world\TwigExtension;


class RemoveNumbers extends \Twig_Extension {    

  /**
   * Generates a list of all Twig filters that this extension defines.
   */
  public function getFilters() {
    return [
      new \Twig_SimpleFilter('removenum', array($this, 'removeNumbers')),
    ];
  }

  /**
   * Gets a unique identifier for this Twig extension.
   */
  public function getName() {
    return 'hello_world.twig_extension';
  }

  /**
   * Replaces all numbers from the string.
   */
  public static function removeNumbers($string) {
    return preg_replace('#[0-9]*#', '', $string);
  }

}

Enable the hello_world module and clear the cache; then you can use the removenum filter in your Twig files:

{{ value_with_numbers|removenum }}

It removes all numbers from the string. Enjoy your custom filters!
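For example, given the literal string "room101" (a hypothetical value for illustration), the filter strips the digits:

{{ 'room101'|removenum }}  {# outputs "room" #}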

Download the hello_world module here
