Aug 26 2019

Atomic Design is a great approach for achieving a consistent user experience and interface design.

Atomic Design is not only a budget saver, but also your best friend for future requirements, allowing you to fulfill customers' wishes quickly and easily.

The first step is to define the elements that are most in use. This would be, for example, colors, fonts, buttons and images. Starting with these simple elements, different components can be assembled, such as cards or sliders.

Example: Image + Text + Link = Card
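As a rough illustration, such a card could be expressed as a small Twig component template. This is a minimal sketch with hypothetical variable names (image, text, url, link_text), not an actual 1xINTERNET component:

{# card.html.twig: a card molecule assembled from image, text, and link atoms. #}
<div class="card">
  {{ image }}
  <p class="card__text">{{ text }}</p>
  <a class="card__link" href="{{ url }}">{{ link_text }}</a>
</div>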


The analogy for Atomic Design is chemical notation: components are organized from atoms to molecules to organisms to templates to pages. On each level the components are assembled from smaller components of the lower levels.
 

Chemical notation in 'Atomic Design'


When we design user experiences (UX) at 1xINTERNET we try to build the best possible user interfaces with the fewest possible components.

The focus is to create efficiency in the project and to potentially save budget. When fewer components are designed in UX, fewer components need to be programmed and tested. 
 

This approach not only reduces initial efforts considerably but also reduces the cost of the project and its maintenance.


Imagine the following example: You have three different interaction pages and you create a great but different UX for each of those. Then, the users have three different systems to learn. If you instead create an equally good UX by reusing as many components as possible, the user has to learn much less and can interact much faster with your application.

Conclusion

A strict design process and the right integration with the website technology certainly offers efficiency benefits. We plan to write more blog posts on the topic in the next few weeks and are looking forward to your feedback.

Aug 26 2019

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past week. You'll find that the meetings, while providing updates on completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide insights on, we encourage you to get involved.

We've had some gaps in August with limited meetings, so we're happy to be back in the swing of things and reporting a lot of great updates.

Drupal 9 Readiness (08/19/19)

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. Site developers who want to stay up to date for the easiest possible Drupal 9 upgrade of their sites are also welcome.

  • It usually happens every other Monday at 18:00 UTC.
  • It is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public Drupal 9 Readiness Agenda anyone can add to.
  • The transcript will be exported and posted to the agenda issue.

Symfony development updates

Composer initiative issues affecting D8 and D9

Simpletest deprecation process

The active issue [meta] How to deprecate Simpletest with minimal disruption covers one of the biggest deprecations in Drupal 8 and one of the least trivial to update for; work on it is still a priority.

Composer in Core meeting 08/21/19

  • It usually happens every other Wednesday at 15:30 UTC.
  • It is for contributors to the Composer initiative in core who are working on improving Drupal's relationship with Composer.
  • It is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public meeting agenda anyone can add to.
  • The transcript will be exported and posted to the agenda issue.
  • For anonymous comments, start with a bust in silhouette emoji. To take a comment or thread off the record, start with a no entry sign emoji.

Scaffold files in core

The issue Add core scaffold assets to drupal/core's composer.json extra field is currently waiting on committer attention.

File security component

The issue Add drupal/core-filesecurity component for writing htaccess files has been fixed.

Relocate the scaffold plugin

The issue Relocate Scaffold plugin outside of 'core' directory has also been fixed.

Vendor Hardening Plugin

The issue Add Composer vendor/ hardening plugin to core is ready to be reviewed.

Template files

We have the current template files, but the scaffold files and the vendor hardening plugin are still needed. Once those are in place, we will be able to make a template that can be used to build a tarball.

Drupal.org changes to support this

  • Change packaging so that it can package 8.7.x the old way and 8.8.x the new way, using Composer to create a project.
  • Add a variant of the tool that webflo uses to generate the drupal/core-pinned-dependencies and drupal/core-development-dependencies into the Drupal.org Jenkins workflow that publishes those 'somewhere'.

Optionally supporting composer.json for extensions

The issue [PP-3] Provide optional support for using composer.json for dependency metadata still needs work done on it.

Core cross-initiative meeting 8/22/2019

Migrate

Status

  • On 8/21 we discussed the best course of action to resolve.
  • We hoped to deprecate the Drupal 6 migrations and move them to contrib, but we do not have a way to validate how to make deprecations.
  • V Spagnolo has been working on several issues and they are all advancing well.
  • Nathaniel Catchpole shared some criteria for getting into 9.0-beta1.
  • Good progress has been made on multilingual (biggest blocker right now).
  • Multilingual components must land before 9.0-beta1; thankfully the initiative as a whole is pretty stable.

Blockers (Drupal 6 Deprecations)

  • Needs release manager sign-off on the plan for making the deprecations.
  • Needs a plan for moving the deprecated elements into contrib projects.

Next Steps

  • Need release manager criteria for making the multilingual migration stable.
  • Need release manager review on deprecation plan.
    • Need to document it, our target this week.
    • Need sign off on this and what we’ll do next, the target being next week. 
    • Need a way to get deprecated items into contrib. 
      • Are there any dependencies to deprecate items? Do we have to have them in contrib first?

Auto Updates

Status

  • Contrib Alpha-1 was released this week; it includes:
    • Notifying site owners when a public safety announcement has been released by the security team.
      • Optionally this can be done via email, or in a set message on the page (similar to existing alerts).
    • 8 or so readiness checks to determine if your site can be automatically updated.
    • The above features are available in Drupal 7 and 8.
    • Signature signed package now on packagist (dev-master), credit to MWDS folks.
      • This will ensure that MITM attacks are not possible because files are known.
      • This is a base that actual automated updates will rely on.
      • Timelines in the initiative update are still accurate.
      • No impact on 8.8, if added they will be new features on the update module.
      • More user-testing is needed for the PSA. We would like it to be in 8, but it could be in contrib in the meantime. The module is future-proofed enough to disable in contrib when it comes time to enable in core.

Blockers

There are currently no blockers.

Next Steps

  • Following the PSA there will be the beta release and user testing.
  • The core release is targeting 8.9?

CMI2

Status

  • The API patch was committed since the last call; it’s in an experimental module.
  • By the 8.8 feature freeze, the UI portion of the module will not yet be ready.
  • This means we can’t release it in core until 9.1.

Blockers

  • Resources are limited, the patch is large, and more people are needed to help with other aspects of the tool (maybe at Drupalcon Amsterdam).
  • The plan requires the release manager review & discussion.

Next Steps

  • Requires sign off for accessibility on WYSIWYG.
  • Two views issues fixed for content moderation.

Drupal 9

Status

Blockers

There are currently no blockers.

Next Steps

Continue working on Symfony 4, on Drupal 8 not using deprecated APIs, and on multi-version compatibility with Symfony 3 requirements in order to open the Drupal 9 branch.

Claro

Status

  • We have made progress on the critical global issues; the mini pager, messages, and others are not completed but look likely to land. Some specific one-off pages are most at risk.
  • Targeting 8.8 release, but hoping to descope some things to meet this goal.
  • Authoring components still need designs, hoping to get that work done to get into beta for 8.8
  • Roadmap to stabilize Claro.

Blockers

There are two issues the release manager needs to review.

Next Steps

Composer (Much of the information here was covered in the Composer meeting on 8/21)

Status 

  • Moving well; the major patches are in, and the remaining work is focused on bells and whistles that will make the experience better.
    • One major patch changed the core development workflow; there have been no complaints so far (it requires Composer 1.9 if you’re working with the 8.8 repo).
  • The scaffolding plan today requires duplication of some files. It still needs to be confirmed with the core team.
  • The vendor cleanup plugin now strips out all the testing files from the vendor directory and makes sure there is an htaccess file; this is now a vendor hardening plugin, waiting to be committed.
  • After the scaffolding and vendor hardening plugin are complete we can create a template that will generate a tarball that is composer ready.

Blockers

There are two patches waiting to be committed. 

Next Steps

Demo

Status

Aug 26 2019

My previous post explores how requesting a medical appointment online begins a patient's digital journey. A hospital's appointment request form might be a new patient's first interaction with a healthcare institution. As such, it is one of the most public-facing forms for a healthcare institution, establishing the baseline for the quality of other external and even internal facing forms. The user experience of the appointment request form sets the tone for the entire patient experience.

Experience

A patient's successful user experience when finding, filling out, and submitting an appointment request form is determined by the visual, information, and interactive design of the website and form itself.

Visual design matters

Visual design identifies a healthcare institution’s brand and aesthetic. The quality of care at a healthcare institution is reflected on their website. The website's visual design should be clean, efficient, and caring. Since an appointment request form results in a call back from a live person, including a photograph of a nurse or clinician on the form or landing page can visually reinforce this expectation and experience.

Information design matters

Information design ensures that the patient understands how to navigate the form and makes it clear what information is required and what information is optional. Appointment request forms act as an ambassador of sorts, beginning an interaction between a patient and healthcare clinicians. The form's information design and corresponding editorial sets the tone of a patient's interaction with a healthcare institution.

Interactive design matters

Interactive design improves the flow and process for filling out a form. As someone fills out an appointment request form, conditional logic may hide or show different individual or groups of inputs. When a user clicks submit, how a form validates submitted data can increase or decrease a user's frustration.

Keeping the visual, information, and interactive design in mind, let's look at how the top US hospitals approach their 'request an appointment' forms.

U.S. News' Best Hospitals rankings

Evaluation

The U.S. News' Best Hospitals rankings are primarily based on the overall quality of a hospital's care, treatments, and outcomes. The U.S. News' patient experience ratings never directly ask questions about a patient's digital experience. However, the fact that the number 1 ranked hospital, the Mayo Clinic, has a 'Request Appointment' link on the U.S. News' Best Hospitals landing page reinforces the importance of this form.

Mayo Clinic's 'Request Appointment' link

We are going to look at how the top four hospitals allow patients to request an appointment on their individual websites. We will examine the positives and negatives of each hospital's appointment request form. The goal is to extrapolate some recommendations and best practices which can be used to build an appointment request form template for the Webform module for Drupal 8.

It’s essential for us to see how users navigate to the appointment form from a hospital's homepage. We will examine how many inputs, questions, steps, and conditions are included on a form, and then determine the level of effort needed to complete a form. Finally, we want to assess the overall user experience, which encompasses the visual, information, and interactive design of the form.

Mayo Clinic

Mayo Clinic's Request an Appointment

On the Mayo Clinic's home page, the appointments link is the first and largest element on the page. It takes three clicks from the home page to navigate to their appointment request form. The appointment request form contains 20-30 inputs depending on who is completing the form. Inputs are grouped by contact, insurance, and medical information. Most inputs are required.

Information matters

On top of the U.S. News' direct link to the Mayo Clinic's appointment form and the prominent appointment link on the Mayo Clinic's homepage, Mayo Clinic's appointment landing page does a great job of providing all the relevant contact information needed to make an appointment by phone. This ensures that even if a patient can't complete the form, they can call the hospital. This contact information, with links to dedicated international and referring physician forms, is then included as a sidebar on Mayo's appointment request form.

Experimentation matters

The goal of this blog post's exploration is to identify best practices and recommendations, which helps to establish a baseline for a good appointment request form. Once there is a baseline, it also helps to experiment to improve a form's user experience.

Typically, required inputs are marked with red asterisks. Since most inputs on the form are required and only a few are optional, Mayo's team inverts the more common approach of noting required inputs. Instead, they include the message, "All fields are required unless marked optional" with a right-aligned "(optional)" label next to optional inputs. This approach appears to be successful. The more important lesson is to be cognizant of a form's required and optional inputs.
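For reference, this required-by-default pattern could be sketched in the Webform module's YAML element definitions; the element names below are hypothetical:

first_name:
  '#type': textfield
  '#title': 'First name'
  '#required': true
middle_name:
  '#type': textfield
  '#title': 'Middle name (optional)'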

Layout matters

The form groups related inputs into visually defined and well-labeled sections. The form also uses a vertical layout with left-aligned labels. Left-aligned labels make it more difficult for users to comprehend the relationship between inputs and labels. The form's mobile layout, on the other hand, uses the recommended top-aligned labels approach, and it is easier to fill out. Mayo Clinic should experiment with using top-aligned labels for the desktop form and see if it increases the form's completion rates.

Improvement matters

Mayo Clinic values and promotes their appointment request form. The Mayo Clinic's digital team has put a considerable amount of thought, care, and consideration into their appointment request form's user experience. They should continue to experiment, test, and improve this form.

Cleveland Clinic

Cleveland Clinic's Online Appointment Request Form

The Cleveland Clinic's homepage also includes a prominent link to appointments. It takes three clicks from the home page to navigate to the appointment request form. Their appointment request form is a multistep form with five pages and 50-90 inputs. Most inputs are required. The form collects information about patients, caregivers, referring physicians, and a comprehensive medical history. Entering this much information takes a considerable amount of time.

Design matters

Cleveland Clinic's form is a beautifully designed and logically arranged multi-step form. Multi-step forms make it easier to collect a lot of information. It is important to ask, "Does an appointment request form need to collect a lot of information?"

Time matters

When it comes to digital experiences, people have less patience. How long it takes to complete a task matters. Does a new patient need to supply their marital status or subscribe to a mailing list to book their first appointment? Wouldn’t it be more appropriate to collect a patient's medical history after the appointment is booked?

Information matters

Relating to how long it takes to fill out an appointment request form, it is important to ask the value and purpose of each question. Cleveland Clinic's form asks, "Has the patient been seen at Cleveland Clinic in the past?" If answered "Yes", the form then proceeds to ask for a Medical Record Number (MRN). It makes complete sense to collect an existing patient's MRN, which can be used to look up a patient's medical history. However, once the patient provides an MRN, does the form still need to obtain a comprehensive medical history?

Failure matters

Asking for too much information increases the likelihood that a patient or caregiver might not be able to complete a form, or might become frustrated with the process of completing it. Let’s not forget that the appointment request form facilitates a phone call that ultimately results in an appointment. Cleveland Clinic's form should also include the phone number on every page to ensure that if someone is frustrated, they can pick up the phone and still book an appointment.

Goals matter

Cleveland Clinic's form is an overwhelming user experience. At the very least, users should be given the option to complete a more straightforward appointment request form, with a secondary option to provide a full medical history.

Before even building any form, it helps to summarize the goal of the form along with the scope of the information being collected. This summary can also become the form's introduction text. For public-facing forms, each input should be scrutinized to determine if the information is required to complete the immediate task at hand.

Johns Hopkins Hospital

Johns Hopkins Hospital's Request an Appointment

The Johns Hopkins Hospital homepage includes a small icon with a label linking to their appointments landing page. It takes four clicks from the home page to navigate to the appointment request form. Their form contains 20-25 inputs with five optional inputs. The form has one conditional question asking, "Who are you seeking care for?"

Simplicity matters

By and large, the Johns Hopkins Hospital's appointment request form is the simplest form that we are going to review. There is no input grouping, and it has a straightforward two-column layout, even though grouping related inputs and using top-aligned labels with a one-column layout is generally recommended. The simplicity of this form makes it easy for users to understand how much information is required to fill out the form. Having all the required inputs displayed before any optional inputs also makes it easier for a user to fill in the form quickly. The two-column layout does make the form more difficult to fill out. When the browser is resized to a mobile layout, the form is easier and faster to fill out. Making a form easy for everyone to fill out, including users with disabilities, is essential.

Accessibility matters

The Johns Hopkins Hospital's appointment request form is not accessible to users who depend on screen readers. The form's inputs are missing the necessary code, which allows screen readers to accurately describe to an end-user the labels for each input. Instead of a screen reader saying something like, "Input first name required" the screen reader just states, "Input required" for all the form's inputs.
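The underlying fix is standard HTML: each input needs an explicitly associated label so screen readers can announce it. A minimal sketch:

<label for="first-name">First name</label>
<input type="text" id="first-name" name="first_name" required>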

The inaccessibility of the Johns Hopkins Hospital's appointment request form is a significant problem. An inaccessible appointment request form is equivalent to not having a wheelchair ramp to enter a building. Fortunately, this issue is not hard to fix. The Johns Hopkins Hospital digital team needs to become more aware of and monitor the accessibility of their forms.

Expectations matter

Nowhere on the Johns Hopkins Hospital's appointment request form does it indicate that the patient or caregiver will receive a callback within a defined period of time. The Johns Hopkins Hospital's appointments landing page does include this information. This information should be repeated at the top of the form just in case the user has not been to the appointments landing page. Users need to know what to expect when they click submit on a form. In the specific case of an appointment request form, users need to know when (i.e., within how many days) and how (i.e., by phone or email) they will be contacted.

Massachusetts General

Massachusetts General's Request an Appointment

Massachusetts General's homepage also includes two links to appointments. It takes three clicks from the home page to navigate to the appointment request form. The appointment request form contains 20-25 inputs in three groupings with only two optional inputs. Yes/no conditional logic is used to ask for more information about the patient and referring physician.

Tone matters

Massachusetts General's appointment request form asks questions using a tone that guides the user through the process of completing the form. For example, every hospital wants to know if a patient has insurance. Instead of displaying a large dropdown menu labeled "Insurance", Massachusetts General's form states, "To facilitate processing, please provide insurance plan name if known." This type of phrasing explains to a user why the information is needed, while also stating that it is optional. This is an excellent example of the relationship between asking the right question and collecting the best answer.

Questions matter

Many patients decide which hospital to be treated at based on physician referrals. When intaking new patients, hospitals may need to contact a referring physician to get a patient's medical history. Most appointment request forms ask for the referring physician's contact information as individual inputs. Massachusetts General's appointment request form just asks for the "Name of referring provider, facility name, address and phone if known." This approach is a brilliant and deliberate decision. First off, it makes it very easy for a patient to enter what information they have about their physician, ensuring that at least some information is collected. Appointment requests are always reviewed by an actual person who can take the plain text physician information, fix a misspelling, look up the referring physician's NPI (National Provider Identifier), and get all the needed information into a new patient's medical record.

Massachusetts General's appointment request form asks some very well-thought-out questions, which are supported by the right process.

Consistency matters

Massachusetts General has an exemplary appointment request form. There are no immediate issues that need to be addressed. Ironically, Massachusetts General's international appointment request form does have some design and user experience issues. Without getting into specifics, most of the problems on the international appointment request form would be resolved if it followed the best practices and care established by the domestic appointment request form. Providing a consistent user experience across an institution's webforms can ensure a better patient experience.

An appointment request form is a small but key piece to the complex puzzle of a patient’s experience and journey. Soon, someone is going to disrupt, change, and simplify this puzzle. For example, people will be able to schedule an appointment using voice-controlled systems. When we look at the user experience of these relatively simple appointment request forms, there is still some work to be done now and moving forward.

We have extensively explored four hospitals' appointment request forms. In my next blog post, I am going to make some concrete recommendations and strategies for building an exemplary appointment request form.


Aug 26 2019

The Drupal CMS is a universal platform for website development. It is equally suitable for small business websites and complex e-commerce solutions. This article explains how the price of Drupal website development is formed and answers the question “How much does it cost to build a website?”

Website design

First of all, you need to understand what goals and objectives you set for your website. For example, you need a website for your blog with a standard set of features and a simple design. You can create such a Drupal website yourself absolutely free; just read a guide or watch tutorial videos.

It is another matter if you need to develop a promotional website that sells your business with a custom design; that is a serious step that you will have to spend money on. It’s no secret that websites with a unique design increase sales (according to “The Business Value of Design” by McKinsey). A professional web designer can help you develop a custom design for your project, but that's not all: you will also need a front-end developer to launch your website with the new design. Of course, if you have some extra money and your time is too expensive, you can hire a project manager who will monitor the progress of the project, but it’s not necessary.

Drupal development

Every Drupal website contains Drupal modules. Drupal modules allow you to add the necessary features to your website and you can install them for free. But every business is unique and sometimes a standard set of features is not enough to implement a project, that's why the development of additional Drupal modules is required. To do this, you need an experienced Drupal development team. Below you can look at average hourly rates for mid-level developers.

  • North America: $60-$140 per hour

  • Western Europe: $40-$60 per hour

  • Eastern Europe: $20-$40 per hour

  • Asia and Pacific: $10-$20 per hour

As you can see, the hourly rate can vary dramatically based on the region of the world; that’s why companies prefer to work with outsourcing web development firms from different countries.

Website development cost

Mainly, the cost of a website is calculated as an hourly rate multiplied by the number of hours spent on a project. For example, a 200-hour corporate website built at $30 per hour would cost 200 × 30 = $6,000.

The website development cost directly depends on the features that you need and on the complexity of website design. Below, you’ll find some examples of the existing types of sites and see how long it takes to develop a website, based on our experience.

Simple promo page

If you need an opportunity to edit the content of your promo page by yourself, the Drupal CMS will be the best solution. The development of a simple promo page will take 50-70 hours.

Approximate total cost:

  • North America: $4200-$9800 

  • Western Europe: $2000-$4200 

  • Eastern Europe: $1000-$2800 

  • Asia and Pacific: $500-$1400 

Company website

The development of creative custom design is very important for corporate websites because the reputation of your company will directly depend on this. Development of a standard corporate website with such sections as "About Us", "Services", "Portfolio", "News", "Blog", "Contacts" will take 200-250 hours.

Approximate total cost:

  • North America: $15000-$35000 

  • Western Europe: $8000-$15000 

  • Eastern Europe: $4000-$10000 

  • Asia and Pacific: $2000-$5000 

Mobile app

Drupal is universal. It suits not only website development; it can also be used for the back-end development of mobile applications. This fact makes Drupal even more attractive. The minimum estimated time for a back end running on Drupal is 150-200 hours.

Approximate total cost:

  • North America: $9000-$28000 

  • Western Europe: $6000-$12000 

  • Eastern Europe: $3000-$8000 

  • Asia and Pacific: $1500-$4000 

E-commerce

E-commerce websites are online stores and large trading platforms, such as Amazon. This is perhaps the most difficult type of project to implement, so the time needed for e-commerce website development is correspondingly longer. The minimum estimated time is 250-300 hours.

Approximate total cost:

  • North America: $15000-$42000 

  • Western Europe: $10000-$18000 

  • Eastern Europe: $5000-$12000 

  • Asia and Pacific: $2500-$6000 

Conclusion

In this article, we gave you approximate costs for different kinds of websites and now you have an idea of how the price is calculated.

Remember that the price of website development depends entirely on the development team and your requirements. It is very important to find an experienced development team to achieve excellent results in your project. And don't forget that every business is unique and requires individual solutions.


Liked the article? Tweet @adcisolutions and let us know what you think on Twitter.
Aug 25 2019

I have previously written about using Drupal’s definition files in component-based theming and how it is possible to have component-specific layout and asset library definition files. Now that Layout Builder is stable it is time to have a look at how it can be used with theme components.

Two schools

There are two main approaches to integrating independent theme components with Drupal. They could be described as the ‘Twig developer’ approach and the ‘site builder’ approach.

In the Twig developer approach integration is done entirely in Twig ‘presenter’ templates. A significant benefit of this straightforward approach is that no contrib modules are required (although Component Libraries is usually used). There is also no need for endless pointing and clicking in the admin UI, and dealing with related configuration. The downside is that in most cases this approach leaves the admin UI disconnected from the components. This means that a site builder not familiar with Twig or with no access to the Twig templates has no way of affecting the appearance of the integrated components.

In the site builder approach integration is done using the admin UI. Until now this has required sophisticated contrib modules like Display Suite and UI Patterns, with their own ecosystems of related modules. These are great modules, but ideally it should be possible to use just Drupal core for the site builder approach. With Layout Builder in core we are a huge leap closer to that goal.

Revisiting an old example

Back in January 2017 I wrote a blog post on how to use UI Patterns to integrate a simple Code component. I will now use the same component in an example of how to use Layout Builder for the same task.

Step 1: Create layout and asset library definitions

Starting from where we ended up in that old blog post, we need to provide information about the component’s variables and assets to Drupal in some other way than a .ui_patterns.yml file. This can now be easily done using Drupal core’s definition files.

Drupal core only supports theme (or module) specific definition files, but the Multiple Definition Files module can be used to add support for component-specific files. The results are identical in both approaches, so using Multiple Definition Files is completely optional.

Multiple Definition Files does add one useful feature though. Drupal places layout regions in content, so for example a code region would be accessed in Twig as content.code. To remove this Drupalism, Multiple Definition Files makes layout regions accessible in the Twig root context, so a code region is accessible in Twig as just code.
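In template terms, the difference looks like this for the code region:

{# Drupal core's default: layout regions are placed in content. #}
{{ content.code }}

{# With Multiple Definition Files: regions are in the Twig root context. #}
{{ code }}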

The definitions in this example are in component-specific files. If you want to use Multiple Definition Files, be sure to enable the module at this point.

The layout definition, shila-code.layouts.yml:

shila_code:
  label: 'Code'
  category: 'Shila'
  template: source/_patterns/01-molecules/content/shila-code/shila-code
  library: shila_theme/shila_code
  regions:
    language:
      label: 'Language'
    code:
      label: 'Code'

and the library definition, shila-code.libraries.yml:

shila_code:
  css:
    theme:
      source/_patterns/01-molecules/content/shila-code/prism.css: {}
  js:
    source/_patterns/01-molecules/content/shila-code/prism.js: {}

Note the library reference in shila-code.layouts.yml, linking the two definitions. Be sure to clear caches after adding the new definitions.

(As a side note, the UI Patterns From Layouts module makes it possible to use Drupal’s definition files with UI Patterns as well.)

Step 2: Enable Layout Builder

After enabling the Layout Builder module, Layout Builder has to be enabled for our Code paragraphs type’s display.

If you are moving away from a Display Suite layout, be sure to select - None - for the layout and save the settings before enabling Layout Builder. It seems that if you do not do this, Display Suite is still active and Layout Builder won’t work as expected.

Step 3: Integrate the component using Layout builder

Clicking on the ‘Manage layout’ button takes us to Layout Builder. Display Suite and UI Patterns show a textual representation of a layout, but Layout Builder is different as it displays the layout visually.

First we remove the default section, then we add a new section and choose the ‘Code’ layout for it.

As a reminder, this is what our extremely simple shila-code.html.twig template looks like:

<pre><code{% if language %} class="language-{{ language }}"{% endif %}>{{ code }}</code></pre>

Layout Builder’s visual representation will not be a problem in many cases. However, components often use variables for non-visual purposes or have layouts with overlapping regions. Our Code component is a good example, since language is used as a part of a CSS class name. Layout Builder does not expect layouts to be used in this way, and the result is a completely unusable UI.

Oh no. What we need is a text based way to configure the layout. The good news is that due to accessibility reasons a textual version of the Layout Builder UI is already in the works. So, let’s apply the patch from that issue, clear the caches, and see what happens.

A ‘Layout overview’ link has appeared! Let’s click on it.

This is looking good! We can now add the right paragraph fields to the respective regions in the Code layout and then click on the ‘Save layout’ button. Let’s have a look at a page that contains a Code paragraph.

Something is obviously going wrong here. Let’s have a look at the HTML source.

Since we have placed fields in Layout Builder blocks, there is unwanted HTML from the block and field templates. What we really need is just the contents of the fields, nothing else. This can be an issue with many independent theme components, since they often do not expect extra markup to be provided.

Display Suite has a ‘Full Reset’ field formatter and UI Patterns has an ‘Only content’ option. But what can we do here? Creating specific Twig templates to remove the HTML is an option, but not an ideal one. What we want is an easy UI based way to deal with the problem.

Step 4: Install and enable the Full Reset module (if required)

After searching for existing solutions and not coming up with anything, I ended up writing a small module for this particular purpose. Full Reset does two things: it provides a field formatter and adds a reset option to Layout Builder blocks. The name is a hat tip to the trusty Display Suite field formatter I have used so many times.

A good way to support third party settings for Layout Builder blocks is still being worked on, so a core patch must be applied for Full Reset to work.

With the core patch applied and Full Reset installed and enabled, we can select the ‘Full Reset’ field formatter and tick the ‘Full Reset this block’ checkbox.

Now the block and field templates are not used at all, and the component works and looks as expected! It still does not work in Layout Builder’s visual preview though, because Layout Builder expects the content to be a valid DOM node and have a surrounding HTML tag. For this reason Full Reset does not remove block theming in the preview.

Conclusion

Layout Builder can be successfully used in component-based theming today. There might be some problems with specific components like the one in this example, but these problems can be circumvented by applying two patches and using the Full Reset module.

For me this means that I can now stop using Display Suite, a module that I have used in almost all of my Drupal projects since 2011. Personally I find this change to be just as significant as it was when Field API and Views were added to core.

Aug 25 2019

The Drupal 8 Shield module allows you to protect your site using a simple htaccess authentication. It’s great for sites that you are working on that you don’t want the world (or Google) to see yet. This way you can send the site to a client or anyone really to test and just provide them the username/password to view the site. Once it’s ready to go, you can launch the site and remove this module.

If you have ever wanted to password protect your Drupal site, the Shield module will help with that!

Install the Drupal 8 Shield module like you would any other Drupal module. Note that it shows up in the module list as PHP Authentication Shield.
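If you manage your site with Composer and Drush, the usual installation commands would look something like this (the module's machine name is shield):

$ composer require drupal/shield
$ drush en shield -y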

Once it’s installed, click on the Configure link to get to the configuration page.

It’s recommended to leave the Allow command line access checkbox checked. This prevents the Shield module from affecting command line access using tools such as Drush. You will want to fill out the Username and Password based on what you want a user to use to access the site.

Note: The Authentication message does not seem to work when I tested it with Chrome.

Click the Save configuration button and you should immediately see the authentication popup. You can test this out in a private browsing or incognito window to see it again. Once you enter the correct username and password, you will be able to access every page on your site.

That’s all there is to it. This works well for development and staging environments while you are developing the site but want the client to be able to look at it. Now go out and password protect your Drupal sites!

Aug 24 2019

Greg Anderson, Open Source Contributions Engineer at Pantheon joins Mike Anello to talk about the Drupal Community's Composer Support in Core Initiative.

Discussion

DrupalEasy News

Upcoming Events

Sponsors

  • Drupal Aid - Drupal support and maintenance services. Get unlimited support, monthly maintenance, and unlimited small jobs starting at $99/mo.
  • WebEnabled.com - devPanel.

Follow us on Twitter

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Aug 24 2019

In the last blog post we were introduced to managing migrations as configuration entities using Migrate Plus. Today, we will present some benefits and potential drawbacks of this approach. We will also show a recommended workflow for working with migrations as configuration. Let’s get started.

Example workflow for managing migration configuration entities.

What is the benefit of managing migration as configurations?

At first sight, there does not seem to be a big difference between defining migrations as code or configuration. You can certainly do a lot without using Migrate Plus’ configuration entities. The series so far contains many examples of managing migrations as code. So, what are the benefits of adopting configuration entities?

The configuration management system is one of the major features that was introduced in Drupal 8. It provides the ability to export all your site’s configuration to files. These files can be added to version control and deployed to different environments. The system has evolved a lot in the last few years, and many workflows and best practices have been established to manage configuration. On top of Drupal core’s incremental improvements, a big ecosystem of contributed modules has sprung up around it. When you manage migrations via configuration, you can leverage those tools and workflows.

Here are a few use cases of what is possible:

  • When migrations are managed in code, you need file system access to make any changes. Using configuration entities allows site administrators to customize or change the migration via the user interface. This is not about rewriting all the migrations. That should happen during development and never on production environments. But it is possible to tweak certain options. For example, administrators could change the location to the file that is going to be migrated, be it a local file or on a remote server.
  • When writing migrations, it is very likely that you will work on a subset of the data that will eventually be used to get content into the production environment. Having migrations as configuration allows you to override part of the migration definition per environment. You could use the Configuration Split module to configure different source files or paths per environment (see the sketch after this list). For example, you could link to a small sample of the data in development, a larger sample in staging, and the complete dataset in production.
  • It would be possible to provide extra configuration options via the user interface. In the article about adding HTTP authentication to fetch remote JSON and XML files, the credentials were hardcoded in the migration definition file. That is less than ideal and exposes sensitive information. An alternative would be to provide a configuration form in the administration interface for the credentials to be added. Then, the submitted values could be injected into the configuration for the migration. Again, you could make use of contrib modules like Configuration Split to make sure those credentials are never exported with the rest of your site’s configuration.
  • You could provide a user interface to upload migration source files. In fact, the Migrate source UI module does exactly this. It exposes an administration interface where you have a file field to upload a CSV file. In the same interface, you get a list of supported migrations in the system. This allows a site administrator to manually upload a file to run the migration against. Note: The module is supposed to work with JSON and XML migrations. It did not work during my tests. I opened this issue to follow up on this.

These are some examples, but many more possibilities are available. The point is that you have the whole configuration management ecosystem at your disposal. Do you have another example? Please share it in the comments.

Are there any drawbacks?

Managing migrations as configuration adds an extra layer of abstraction to the migration process. This adds a bit of complexity. For example:

  • Now you have to keep the uuid and id keys in sync. This might not seem like a big issue, but it is something to pay attention to.
  • When you work with migration groups (explained in the next article), your migration definition could live in more than one file.
  • The configuration management system has its own restrictions and workflows that you need to follow, particularly for updates.
  • You need to be extra careful with your YAML syntax, especially if syncing configuration via the user interface. It is possible to import invalid configuration without getting an error. It is not until the migration fails that you realize something is wrong.

Using configuration entities to define migrations certainly offers lots of benefits. But it requires being extra careful managing them.

Workflow for managing migrations as configuration entities

The configuration synchronization system has specific workflows for making changes to configuration entities. This imposes some restrictions on the way you make updates to the migration definitions. Explaining how to manage configuration could take another 31-day blog post series. ;-) For now, only a general overview will be presented. The general approach is similar to managing migrations as code. The main difference is what needs to be done for changes to the migration files to take effect.

You could use the “Configuration synchronization” administration interface at /admin/config/development/configuration. In it you have the option to export or import a “full archive” containing all your site’s settings or a “single item” like a specific migration. This is one way to manage migrations as configuration entities, and it lets you find their UUIDs if they were not set initially. This approach can be followed by site administrators without requiring file system access. Nevertheless, it is error-prone and less than ideal. This is not the recommended way to manage migration configuration entities.

Another option is to use Drush or Drupal Console to synchronize your site’s configuration via the command line. Similarly to the user interface approach, you can export and import your full site configuration or only single elements. The recommendation is to do partial configuration imports so that only the migrations you are actively working on are updated.

Ideally, your site’s architecture is completed before the migration starts. In practice, you often work on the migration while other parts of the site are being built. If you were to export and import the entire site’s configuration as you work on the migrations, you might inadvertently override unrelated pieces of configuration. For instance, this can lead to missing content types, changed field settings, and lots of frustration. That is why doing partial or single configuration imports is recommended. The following code snippet shows a basic workflow for managing migrations as configuration:

# 1) Run the migration.
$ drush migrate:import udm_config_json_source_node_local

# 2) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_config_json_source_node_local

# 3) Change the migration definition file in the "config/install" directory.

# 4a) Sync configuration by folder using Drush.
$ drush config:import --partial --source="modules/custom/ud_migrations/ud_migrations_config_json_source/config/install"

# 4b) Sync configuration by file using Drupal Console.
$ drupal config:import:single --file="modules/custom/ud_migrations/ud_migrations_config_json_source/config/install/migrate_plus.migration.udm_config_json_source_node_local.yml"

# 5) Run the migration again.
$ drush migrate:import udm_config_json_source_node_local

Note the use of the --partial and --source flags in the migration import command. Also, note that the path is relative to the current working directory from where the command is being issued. In this snippet, the value of the source flag is the directory holding your migrations. Be mindful if there are other non-migration related configurations in the same folder. If you need to be more granular, Drupal Console offers a command to import individual configuration files as shown in the previous snippet.

Note: Uninstalling and installing the module again will also apply any changes to your configuration. This might produce errors if the migration configuration entities are not removed automatically when the module is uninstalled. Read this article for details on how to do that.

What did you learn in today’s blog post? Did you know the benefits and trade-offs of managing migrations as configuration? Did you know what to do for changes in migration configuration entities to take effect? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 23 2019

Today, we are going to talk about how to manage migrations as configuration entities. This functionality is provided by the Migrate Plus module. First, we will explain the difference between managing migrations as code or configuration. Then, we will show how to convert existing migrations. Finally, we will talk about some important options to include in migration configuration entities. Let’s get started.

Example of migration defined as configuration entity.

Drupal migrations: code or configuration?

So far, we have been managing migrations as code. This is functionality provided out of the box. You write the migration definition file in YAML format. Then, you place it in the migrations directory of your module. If you need to update the migration, you make the modifications to the files and then rebuild caches. More details on the workflow for migrations managed in code can be found in this article.

Migrate Plus offers an alternative to this approach. It allows you to manage migrations as configuration entities. You still use YAML files to write the migration definition files, but their location and workflow is different. They need to be placed in a config/install directory. If you need to update the migration,  you make the modifications to the files and then sync the configuration again. More details on this workflow can be found in this article.

There is one thing worth emphasizing. When managing migrations as code you need access to the file system to update and deploy the changes to the file. This is usually done by developers.  When managing migrations as configuration, you can make updates via the user interface as long as you have permissions to sync the site’s configuration. This is usually done by site administrators. You might still have to modify files depending on how you manage your configuration. But the point is that file system access to update migrations is optional. Although not recommended, you can write, modify, and execute the migrations entirely via the user interface.

Transitioning to configuration entities

To demonstrate how to transition from code to configuration entities, we are going to convert the JSON migration example. You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD config JSON source migration, whose machine name is udm_config_json_source. It comes with four migrations: udm_config_json_source_paragraph, udm_config_json_source_image, udm_config_json_source_node_local, and udm_config_json_source_node_remote.

The transition to configuration entities is a two-step process. First, move the migration definition files from the migrations folder to a config/install folder. Second, rename the files so that they follow this pattern: migrate_plus.migration.[migration_id].yml. For example: migrate_plus.migration.udm_config_json_source_node_local.yml. And that’s it! Files placed in that directory following that pattern will be synced into Drupal’s active configuration when the module is installed for the first time (only). Note that changes to the files require a new synchronization operation to take effect. Changing the files and rebuilding caches does not update the configuration, as was the case with migrations managed in code.
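For one of the example migrations, the two steps would look something like this, assuming the original file in the migrations folder was named after the migration ID:

# Move the definition file and rename it to follow the expected pattern.
$ mv migrations/udm_config_json_source_node_local.yml \
    config/install/migrate_plus.migration.udm_config_json_source_node_local.yml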

If you have the Migrate Plus module enabled, it will detect the migrations and you will be able to execute them. You can continue using the Drush commands provided by the Migrate Run module. Alternatively, you can install the Migrate Tools module which provides Drush commands for running both types of migrations: code and configuration. Migrate Tools also offers a user interface for executing migrations. This user interface is only for migrations defined as configuration though. It is available at /admin/structure/migrate. For now, you can run the migrations using the following Drush command: drush migrate:import udm_config_json_source_node_local --execute-dependencies.

Note: For executing migrations in the command line, choose between Migrate Run or Migrate Tools. You pick one or the other, but not both as the commands provided by the two modules have the same name. Another thing to note is that the example uses Drush 9. There were major refactorings between versions 8 and 9 which included changes to the name of the commands.

UUIDs for migration configuration entities

When managing migrations as configuration, you can set extra options. Some are exposed by Migrate Plus while others come from Drupal’s configuration management system. Let’s see some examples.

The most important new option is defining a UUID for the migration definition file. This is optional, but adding one will greatly simplify the workflow for updating migrations. The UUID is used to keep track of every piece of configuration in the system. When you add new configuration, Drupal will read the UUID value if provided and update that particular piece of configuration. Otherwise, it will create a UUID on the fly, attach it to the configuration definition, and then import it. That is why you want to set a UUID value manually. If changes need to be made, you want to update the same configuration, not create a new one. If no UUID was originally set, you can get the automatically created value by exporting the migration definition. The workflow for this is a bit complicated and error-prone, so always include a UUID with your migrations. The following snippet shows an example UUID:

uuid: b744190e-3a48-45c7-97a4-093099ba0547
id: udm_config_json_source_node_local
label: 'UD migrations configuration example'

The UUID is a string of 32 hexadecimal digits displayed in 5 groups separated by hyphens, following this pattern: 8-4-4-4-12. In Drupal, two or more pieces of configuration cannot share the same value. Drupal will check the UUID and the type of configuration in sync operations. In this case the type is signaled by the migrate_plus.migration. prefix in the name of the migration definition file.

When using configuration entities, a single migration is identified by two different options. The uuid is used by Drupal’s configuration system and the id is used by the Migrate API. Always make sure that this combination is kept the same when updating the files and syncing the configuration. Otherwise you might get hard-to-debug errors. Also, make sure you are importing the proper configuration type. The latter should not be something to worry about unless you utilize the user interface to export or import single configuration items.

Tip: If you do not have a UUID in advance for your migration, you can copy one from another piece of configuration and change some of the hexadecimal digits. Keep in mind that this could lead to a duplicated UUID if you happen to make the exact same changes to the UUID in two separate files. Another option is to use a tool to generate the values. Searching online for UUID generators will yield many tools for this.
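If you have Drush available, you can also generate a value with Drupal's own UUID service:

$ drush php:eval "echo \Drupal::service('uuid')->generate();"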

Automatically deleting migration configuration entities

By default, configuration remains in the system even if the module that added it gets uninstalled. This can cause problems if your migration depends on custom migration plugins provided by your module. It is possible to enforce that migration entities get removed when your custom module is uninstalled. To do this, you leverage the dependencies option provided by Drupal’s configuration management system. The following snippet shows how to do it:

uuid: b744190e-3a48-45c7-97a4-093099ba0547
id: udm_config_json_source_node_local
label: 'UD migrations configuration example'
dependencies:
  enforced:
    module:
      - ud_migrations_config_json_source

You add the machine name of your module to the dependencies > enforced > module array. This adds an enforced dependency on your own module. The effect is that the migration will be removed from Drupal’s active configuration when your custom module is uninstalled. Note that the top-level dependencies array can have other keys in addition to enforced, for example config and module. Learning more about them is left as an exercise for the curious reader.

It is important not to confuse the dependencies and migration_dependencies options. The former is provided by Drupal’s configuration management system and was just explained. The latter is provided by the Migrate API and is used to declare migrations that need to be imported in advance. Read this article to learn more about this feature. The following snippet shows an example:

uuid: b744190e-3a48-45c7-97a4-093099ba0547
id: udm_config_json_source_node_local
label: 'UD migrations configuration example'
dependencies:
  enforced:
    module:
      - ud_migrations_config_json_source
migration_dependencies:
  required:
    - udm_config_json_source_image
    - udm_config_json_source_paragraph
  optional: []

What did you learn in today’s blog post? Did you know that you can manage migrations in two ways: code or configuration? Did you know that file name and location as well as workflows need to be adjusted depending on which approach you follow? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 23 2019
Aug 23

Marketing departments these days are feeling the heat to make processes faster, more agile, and more efficient in this fast-paced digital world. That’s where Drupal comes into the picture!

Drupal, since its inception, has been considered crucial for companies trying to get on the digital transformation bandwagon in order to provide a faster and more nimble digital experience to customers.

Drupal comes equipped with a set of marketing tools to target the right audience and increase the overall ROI for businesses. Additionally, the suite of available tools and solutions builds a solid foundation for marketers to unearth Drupal’s potential and other martech capabilities, making it one of the most powerful and widely adopted platforms for all marketing needs.

This blog gives you insights into how Drupal can be the perfect choice for marketers.

Leveraging Drupal For Marketing

Drupal has become an inseparable part of the marketing space; it not only drives business revenue but also strikes a remarkable balance between marketing technologies and their ecosystem with its content-first, commerce-first, and community-first marketing solutions.

Creative Freedom 

Dependence on the IT team for implementation is the best-known problem for digital marketers. With a traditional CMS, it takes added effort, time, and resources for the design, development, and marketing departments to work in sync.

By the time changes are implemented, the model already looks outdated for the new situation.

“Drupal seamlessly integrates with the existing marketing and sales technologies of the enterprises”


The cutting-edge Drupal modules and distributions empower different teams with the creative freedom to manage the development of a project at their own pace and convenience.

With Drupal’s architecture, organizations have a platform where they can dynamically launch their website. Marketing teams can curate the structure with segmented content and visuals to lay the foundation of a strong digital strategy.

Drupal turns out to be a useful asset for organizations looking to implement it in their digital business, as it seamlessly integrates with their existing marketing and sales technologies.

Balancing Marketing Ecosystem

CRM systems are important in running businesses and boosting sales management. It is important for organizations to integrate and be able to customize the martech stack to their unique needs. 

Drupal offers features that help you create a fine balance between CRM and the marketing ecosystem, with content-first, commerce-first, and community-first marketing solutions. Business, technology, and marketing professionals use Drupal to create agile solutions that are easy to use and offer a wide reach across the web. Because sites are built fully responsive, customers and users can discover products and solutions on any device.

It possesses infinite potential with native features and module extensions, including integration with third-party digital marketing tools.

In all, it’s a platform to help you push your strategy for the upcoming phase of digital customer engagement and digital business.

Responsive Customer Experience

Based on a system of engagement, Drupal provides the solid foundation needed for every single interaction, with customers and people within the organization alike, to deliver the ultimate unified experience. Streamlining your business operations and aligning your digital strategies, it delivers on the mobile-first approach. Enterprise marketing teams can manage the website and administrative pages with ease across multiple platforms.

“Personalized customer experience is evolving at a blistering pace”

Organizations competing in this customer-centric age are putting in efforts to provide a personalized experience to customers to keep them engaged, while simultaneously attracting new visitors on board for lead generation.

The digital strategies run by marketing teams focus primarily on gradually increasing leads, conversions, and the revenue percentage of the company via digital channels.

As customers lie at the center of the organization, this can be accomplished by offering them a tailor-made experience across all channels.

The motive is the same, but the way it is rendered has evolved significantly, and Drupal has a large part to play in it. Here’s how:

Have a look at the video to understand more about Acquia Lift and how it can be beneficial for marketers:

Focus on Personalization with Acquia Lift

There is no denying the fact that companies are leaving no stone unturned to provide personalized results to users; it's not just about presenting content on the website but rather tweaking the whole experience. Simply put, they are trying to ensure that every single user, prospect or customer, gets what they want, even before they realize it.

Acquia Lift is a powerful amalgamation of data collection, content distribution, and personalization that helps marketers deliver a refined user experience.

Acquia Lift bolsters the personalized experiences of customers. It is a powerful amalgamation of data collection, content distribution, and personalization that helps enterprise marketing teams deliver a refined user experience without much dependency on development or IT teams.

Acquia Lift’s building blocks (Source: Acquia)

Acquia Lift encompasses three key elements to boost personalization:

Profile Manager

It helps you build a complete profile of your users, from when they first land on your website as anonymous visitors up until the stage where they are repeat visitors or loyal customers. Collecting user information such as demographics, historical behavior, and real-time interactions, it complements the collected insights on user preferences with the best-suggested actions within your capacity.

Content Hub

This cloud-based tool provides secure content syndication, discovery, and distribution. Any content created within the organization can be consolidated and stored here, readily available to broadcast on any channel, in any format.

Searches on varied topics and automatic updates give insights into the wide spectrum of content being developed within the enterprise: in different departments, across websites, and on different platforms.

Experience Builder

This is the most essential element of Acquia Lift. It lets you create a personalized experience for your users from the beginning.

The Experience Builder is an easy-peasy drag-and-drop tool that allows you to personalize every section of your website to showcase different content to different target audiences, based on the information retrieved from the Profile Manager.

Marketing teams can:
  1. Define the rules and protocols for how content should be displayed to different segments of site visitors.
  2. Carry out A/B testing to determine what type of content drives more conversions for which user segments.


All this can be executed with simple overlays onto the existing website segments, without disturbing the core structure of the site and without depending on IT teams for implementation.


Multilingual

With companies expanding their reach into international markets, Drupal’s multilingual features are definitely worth making capital out of. As it supports 94 international languages, Drupal can translate the complete website in seconds with fewer than 4 modules in action, enabling marketing teams to deliver localized content experiences and thus increasing the probability of turning a visitor into a customer.
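
As a rough sketch, the four multilingual modules that ship with Drupal 8 core can be enabled with Drush (the machine names below are from core; which ones you need depends on your use case):

drush en language locale content_translation config_translation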

“Drupal’s multilingual features are definitely worth making capital out of”

Regardless of the language a Drupal site presents to its audience, editors and admins have the option to choose a different language for themselves in the backend. Marketing and editorial teams have the rights to create and manage multilingual sites, with no need for additional local resources.

Layout Builder

The Layout Builder, with its features made stable in Drupal 8.7, allows content marketers to create and edit page layouts and arrange the presentation of individual content, content types, media, and nodes. It also lets you pull in user data, views, fields, and menus.

"Marketing teams can preview the pages with ease without impacting the user-experience”

It acts as a huge asset for enterprise marketing and digital experience teams-

  1. It offers the flexibility to create custom layouts for pages and other specific sections of websites. You can override the predefined templates for individual landing pages when required.
  2. Content authors can embed videos effortlessly across the site to enhance the user experience and ultimately drive the conversion rate.
  3. Marketers can preview pages with ease and without the fear of impacting the user experience.


Layout Builder illustrated (Source: Dri.es)


All these features ensure that marketing teams have more control over the website and can work independently without needing help from developers. This ultimately reduces the turnaround time for launching campaigns and hence the dependency on development teams.

Improved UI with API-first headless architecture

With the ever-increasing demands of acquiring and retaining customers, marketing teams are always in a hurry to redesign and update the backend and frontend in a short period. However, it is quite a strenuous task to update and redesign digital properties rapidly while keeping up with evolving customer expectations.

Traditional Drupal architecture could take an ample amount of time for updates and redesigns, because the refinement needs to take place at both the front end and the back end, resulting in a dependency on developers and designers for the completion of the project.

“A decoupled CMS strategy can work wonders for a website that is in desperate need of a change”

But now, with the powerful feature of decoupled Drupal, marketing teams can become more agile, efficiently segregate processes, and streamline upgrades without impacting the user experience at the front end. This feature of Drupal makes design and UX alterations easier to maintain.


Decoupled Drupal architecture (Source: Acquia)

Having the flexibility to come up with more ideas and implement them by easily adding them to the website is a huge boost for marketers. New requirements that could be added to the site for more customer engagement and lead generation can pop up in a marketer’s mind at any time. The decoupled approach gives the extra agility needed to keep improving the public-facing site.

Final Words

Drupal has the potential to provide a promising future for better digital experiences with every upcoming release. The addition of new features in Drupal will help marketing teams become more flexible and scalable and yet provide a surpassing customer experience in no time. Consequently, conversion rates, sales, and brand visibility will increase manifold.

Marketing teams in organizations that are already running their sites on Drupal have a lot to be happy about. Specifically, the development around Acquia Lift and Acquia Journey gives them freedom from reliance on developers for website updates, publishing content, and more, so they can target the audience at the right time and increase ROI.

And for marketing teams in organizations that are planning a shift to Drupal for its all-inclusive features, the results delivered by a highly skilled and empowered team will make it worth all the effort.

Aug 23 2019
Aug 23

With enterprises looking for ways to stay ahead of the curve in the growing digital age, machine learning is providing them with the boost needed for a seamless digital customer experience.

Machine learning algorithms can transform your Drupal website into an interactive CMS that comes up with relevant service recommendations targeting each individual customer's needs by understanding their behavioural patterns.


A machine-learning-integrated Drupal website ensures effortless content management and publishing, better targeting, and empowers your enterprise to craft personalized experiences for your customers. It automates customer service tasks and frees up your customer support teams, subsequently improving ROI.

However, with various big names competing in the market, let’s look at how Amazon’s machine learning stands out among them all and provides customised offerings by integrating with Drupal.

Benefits of Integrating AWS Machine Learning with Drupal

AWS offers the widest set of machine learning services, including pre-trained AI services for computer vision, language, recommendations, and forecasting. These capabilities are built on a comprehensive cloud platform and are optimized without compromising security. Let’s look at the host of advantages it offers when integrated with Drupal.

Search Functionality

One of the major problems encountered while searching on a website is the need to use the exact keyword. If the content uses a related keyword, you will not be able to find it without using the correct keyword.

This problem can be solved by using machine learning to train the search algorithm to look for synonyms and display related results. The search functionality can also be improved by automatically filtering the search results according to past reads, click-through rate, etc.

Amazon CloudSearch is designed to help users improve the search capabilities of their applications and services by setting up a scalable, low-latency search domain solution that can handle high throughput.
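
As an illustrative sketch, a CloudSearch domain can be created and its documents indexed with the AWS CLI (the domain name is hypothetical, and the Drupal-side integration is a separate step):

aws cloudsearch create-domain --domain-name drupal-site-search
aws cloudsearch index-documents --domain-name drupal-site-search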

Image Captioning

Amazon machine learning helps automatically generate related captions for all images on the website by analyzing the image content. The admin can configure whether the captions should be added automatically or after manual approval, saving a lot of time for the content curators and administrators of the website.

Amazon Rekognition can search several images to find content within them and segregate them almost effortlessly, with minimal human interaction.
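
For instance, a minimal sketch of asking Rekognition to label an image already uploaded to S3 (the bucket and file names are hypothetical):

aws rekognition detect-labels \
  --image '{"S3Object":{"Bucket":"my-drupal-media","Name":"article-photo.jpg"}}' \
  --max-labels 5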

Website Personalization

Machine learning ensures users get to view tailored content on websites as per their favorite reads and searches, by assigning them a unique identifier (UID) and tracking their behaviour (clicks, searches, favourite reads, etc.) on the website for a personalized web experience.

Machine learning analyzes the data connected with the user’s UID and provides personalized website content.

Amazon Personalize is a machine learning service which makes it easy for developers to create individualized recommendations for their customers. It saves up to 60% of the time needed to set up and tune the infrastructure for machine learning models, compared to setting up your own environment.

Another natural language processing (NLP) service that uses machine learning to find insights and relationships in text is Amazon Comprehend. It easily finds out which topics are the most popular for easy recommendation. So, when you’re trying to add tags to an article, instead of searching through all possible options, it lets you see suggested tags that sync up with the topic.
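
As a small sketch, candidate tags can be pulled out of a piece of text with Comprehend's key phrase detection (the sample text is made up):

aws comprehend detect-key-phrases \
  --language-code en \
  --text "Drupal is a free and open source content management system."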

Vulnerability Scanning

A website is always exposed to potential threats, with a risk to lose customer confidential data.

Using machine learning, Drupal-based websites can be made secure and immune to data loss by automatically scanning themselves for vulnerabilities and notifying the administrator about them. This gives websites a great advantage and also helps them save the extra cost of using external software for this purpose.

Amazon Inspector is an automated security assessment service which helps improve the security and compliance of websites deployed on AWS, assessing them for exposure, vulnerabilities, and deviations from best practices.

Voice-Based operations

With machine learning, it’s possible to control and navigate your website using your voice. With Drupal standing by its commitment to accessibility, integrating Amazon machine learning features promotes inclusion and makes web content more accessible to people.

Amazon Transcribe is an automatic speech recognition (ASR) service. When integrated with a Drupal website, it benefits the media industry with live subtitling of news or shows, helps video game companies stream transcription for hearing-impaired players, enables stenography in courtrooms in the legal domain, helps lawyers make legal annotations on top of live transcripts, and boosts business productivity by leveraging real-time transcription to capture meeting notes.
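
A minimal sketch of starting a transcription job for an uploaded audio file (the job name and S3 URI are hypothetical):

aws transcribe start-transcription-job \
  --transcription-job-name drupal-video-captions \
  --language-code en-US \
  --media-format mp3 \
  --media MediaFileUri=s3://my-drupal-media/interview.mp3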

The future of websites looks interesting and is predicted to benefit users through seamless experiences driven by data and behavior analysis. The benefits of integrating Amazon machine learning with Drupal clearly give it an advantage over other CMSs and will pave the way for a brighter future and better roadmap.

Srijan has certified AWS professionals and expertise in AWS competencies. Contact us to get the conversation started.

Aug 23 2019
Aug 23

Agiledrop is highlighting active Drupal community members through a series of interviews. Now you get a chance to learn more about the people behind Drupal projects.

This time we talked with Greg Dunlap, pinball wizard and Lullabot's senior digital strategist. We spoke of how satisfying it is to work on interesting things with the right people, the importance of the Drupal Diversity and Inclusion initiative, and the close similarities between the Drupal community and Greg's local pinball community.

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

I am a Senior Digital Strategist at Lullabot; historically, I’ve been very much involved in the development and technical side of website building, but in recent years I’ve gotten much more into the content, digital strategy and information architecture part, and so that’s more how I do my work these days, sort of dealing with bigger picture problems. 

As far as my participation in the Drupal community, it’s been pretty light these days. I still speak at conferences here and there about various things, but my contribution beyond that has dropped off quite a bit. I’ll pop in in an issue here and there, and I’ve been involved in the Drupal Diversity and Inclusion group and similar projects, but other than that, my participation is pretty light right now. 

I think it’s also been a bit of me taking my life back, I was very very involved in the Drupal community for almost a decade, and so I think a lot of it was also just sort of me taking some time for myself. It’s hard because I built my career through participating in the community, so to some extent that’s necessary, but you really need to find a balance. 

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

I came across Drupal at an interesting time in my life. It was around 2007, Drupal 5 was out, and I was working for a newspaper in Seattle called the Seattle Times. We were doing a migration to Drupal, and through part of that migration we hired Lullabot to come in and help us out. And that was when I first met Jeff Eaton and Matt Westgate, and it was Jeff Eaton who pushed me to get involved in contributing. I was talking to him about a problem and he said “wow, you should really file a core issue about that”, which I did, and 10 years later it got marked “won’t fix”, so that was great. 

But at the time I was looking for a new job anyway and Drupal was just taking off, so I started getting involved with the local Drupal user group, and through that I met a bunch of really cool people. I also needed to figure out what my next thing was going to be professionally, and so all of this stuff kind of came at just about the right time for me. I was looking for something in Drupal to dig my teeth into, and we were having a bunch of problems around deployment and configuration management at the Seattle Times, and so that kind of just became my niche. 

And through that I met a lot of people who helped and I also wrote a lot of code, and that ended up getting me my first Drupal job, which was at Palantir; then, everything just kind of snowballed after that.

Now I’m basically working for the company that was my first contact with Drupal. And through that Eaton and I became really close friends, and the two of us are the strategy team at Lullabot. One of the things that I’ve learned over the years is that people always tell you to follow your passions, but I really think that the people that you do things with are much more important than what you do. Because, granted, it’s great to do what you want, but if you don’t have the right people around you, it’s not going to be any good anyways.

3. What impact has Drupal made on you? Is there a particular moment you remember?

I’m doing a project right now for Lullabot which is involving me going back and listening to a lot of our old podcasts, and one of the things I did was, I went back and listened to my old podcasts (we used to do this series called Drupal Voices and I got interviewed on it a lot). 

And I really noticed as I listened to them, that every year as I was on the podcast, I could hear in my voice my confidence level growing. The first year, for example, was at DrupalCon D.C. and I was very scattered, I could tell I was very nervous; then the next year, I sounded much more confident, and then the year after that when I was at DrupalCon Chicago, I could tell I had really found my steps and stride.

Chicago turned out to be a really formative DrupalCon for me because I gave the very first ever core conversation at a DrupalCon and I was very very nervous about it. And as a result of that core conversation Dries came and asked me to lead the initiative for Drupal 8, for CMI. So, DrupalCon Chicago really stands out as a turning point for me in the Drupal community, and where I really hit my stride. I’m just a little upset that the talk wasn’t recorded.

4. How do you explain what Drupal is to other, non-Drupal people?

Even when I explain my job, usually all I say is just “I build websites”, and so I just say that Drupal is software that you use to build websites. Sometimes you’ll meet somebody who actually understands conceptually what a content management system is, but I usually don’t even bother going down that rabbit hole. “I build websites” is close enough for anybody to at least get the idea.

5. How did you see Drupal evolving over the years? What do you think the future will bring?

It’s crazy seeing how Drupal has changed. Returning to the podcasts from question three: the first Lullabot podcast was in 2006, and at that time, while there were a couple of shops doing things, the global economy in Drupal was essentially nothing. Now, it’s grown to billions of dollars in thirteen years, and I think that change has been incredible; not in a good or bad way in particular, it’s just been change. 

For a lot of people who prefer a small scrappy group, it’s probably been a negative thing, but for people who prefer a more mature industry that they can grow into and make a career out of, it’s been a positive change. And I think that, as Drupal grows, we’re going to be seeing more and more focus on that maturity, that focus on stability. One of the things that we hear all the time now in the Drupal community is that we’re much more focused on predictable releases, on backwards compatibility, easier migrations, all of the stuff that you focus on when your focus is much more on stability.

Because, given the kind of industries that we’ve grown into, stability and predictability are super important. And I think that theme is going to continue to grow over the years, not that we won’t have new features, but I think that the turnaround time on them is going to continue to be more stretched out; we’re already seeing this now with Drupal 9, for example.

The experimental modules are also interesting, as they’ve allowed us to get new features into core in the middle of the release cycle, e.g. the Content Moderation came in and Migration and stuff like that. This takes a long time, however; Content Moderation is a functionality that’s been in development for years in contrib, it’s not as straightforward as somebody just writing a patch and whipping that out in 3 months.

Getting new functionality into core is a very long process that demands lots of testing, and even then the module has to have the “experimental” status for 6 months. All these smaller processes make the overarching process longer, but they also make it more transparent, predictable and stable - it’s essentially just a trade-off. 

6. What are some of the contributions to open source code or to the community that you are most proud of?

I wrote a lot of code for the Configuration Management initiative, and even though that code ended up all getting thrown away and rewritten, to me the mere ability to put that on a path to getting done is extremely satisfying. I ran it for about 2 years and then I handed it off to Alex Pott who ran it for about 2 years before Drupal 8 got released. 

Getting all of those concepts in place and putting together a team of people to get those concepts in place and get it working and rolling forward is something that I’m really happy with. And it was also really great because it represented the end of a long period for me; I started with configuration management as my niche that I dug into in the Drupal 5 era, and then to see that all the way through to getting done in Drupal 8 was really satisfying for me. 

Recently, working with the Drupal Diversity and Inclusion group has been really satisfying. It’s a truly amazing group of people who are really interested in growing our community in positive ways, and making sure our community is open to everybody who wants to contribute and welcoming to everybody who wants to contribute. 

I think this is going to be more and more important going forward, because as Drupal becomes a global enterprise, we need to be able to bring all of those voices in to speak. Even Dries is starting to talk about how important that is now (e.g. in Seattle in his keynote). 

I think that work is really important and I’m really glad to see more focus on the community management side, because traditionally Drupal has been a place where we bring in contributors and then we kind of burn through them. We need to realize that contribution is hard and takes a lot of time, and focusing on how we can make that contribution cycle more healthy for people is really crucial to sustaining the community - so, in whatever ways that comes in or works towards is really great. 

I think that anything we could do to make the Drupal community more welcoming to people is going to be really important. Obviously, growing the community is important, but so is bringing in different voices and viewpoints, so that we can make the community more open and more interesting and really bring in all of the wonderful differences we have in the world.

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

Well, the one that we just talked about, of course! Anybody who’s interested in the Drupal Diversity and Inclusion initiative can join the DDI channel (diversity-inclusion) on Slack. That’s where a lot of that discussion happens; there are weekly meetings on Thursdays, and from there you can get links to their website and other similar resources. 

I’ve also been really interested in the work on the Drupal product side for Layout Manager recently. I was a little skeptical of that when it first came out, but we’ve been using it on a couple of client projects and I’ve been really impressed with it. I think that it’s going to fill a lot of gaps and needs in the Drupal community. 

While the UI is still a little rough, I think that once the usability gets some polish on it, it’s going to be a really important thing for Drupal going forward. I’ve been really really pleased to see how that’s been working, and the clients just adore it; every time we demo it for a client, they completely freak out, so, I’m really looking forward to seeing how that progresses.

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor. 

I don’t get into a lot of technology stuff out of work anymore these days as a hobby. My biggest hobby outside of Drupal is pinball; I’ve been playing it competitively for 25 years and I’ve always been very involved in the local community here in Portland, Oregon. 

I’ve been really involved in that for a really long time - running tournaments and playing in tournaments, I was also the state representative for the group that runs the pinball rankings for many years, and recently I’ve gotten really into fixing up and repairing pinball machines, which has been really cool. It’s very physical and manual, not at all like working on your computer, so it kind of like rubs a different part of my brain than computer work does - and then when you’re done, you have something fun you can play, which is really cool. 

But I will say that one of the nicest things about running tournaments and being involved, similarly to Drupal, in the pinball community, is that you can build the community that you want to see. One of the things I’ve really done in Portland is trying to bring together a different set of voices to help run tournaments, to be the face of the community here, to create welcoming and safe spaces for people. And we have seen, for instance, the number of women that we have competing in tournaments here grow by leaps and bounds as a result of that work, and that’s been extremely gratifying too. 
 

Aug 22 2019
Aug 22
ReThink Orphanages

ReThink Orphanages is a courageous organization with a benevolent charge, grand ambition, a network of high-powered partners, and a commitment to make the world a better place. The organization is…

Visit Site Witness.org: Using Video to Document and Tell Stories

Witness.org is a Brooklyn-based non-profit that “…makes it possible for anyone, anywhere to use video and technology to protect and defend…

Visit Site NWETC

The Northwest Environmental Training Center (NWETC) is an organization that is committed to helping environmental professionals improve their career opportunities. The organization focuses on two…

Visit Site Fred Hutchinson Cancer Research Center eagle-i Integration

About Fred Hutchinson Cancer Research Center

Fred Hutchinson Cancer Research Center’s (FHCRC) Shared Resources core facilities support biomedical research by providing services and expertise that…

Visit Site University of Washington Center for Reinventing Public Education (CRPE)

We originally partnered with CRPE's in-house web manager in 2012. He was familiar with the content management aspect of Drupal, but needed a bit of support with the more intricate ways that Drupal…

Visit Site Seattle Humane Society

We began working with Seattle Humane Society in February of 2016. They had reached out to us because they needed some emergency help with their website. For some reason, their site was reverting…

Visit Site Middle East Policy Council

The Middle East Policy Council (MEPC) is a 501(c)(3) nonprofit organization founded in 1981 whose mission is to contribute to American understanding of the political, economic and cultural issues…

Visit Site American College for Healthcare Sciences (ACHS)

ACHS originally partnered with Freelock in August of 2011 to perform some easy wins on their site. We started with a Freelock Site Assessment and code review of their Drupal 6 website. After that…

Visit Site Peninsula College and Athletics Sites

Peninsula College reached out to us in 2012 for some emergency work related to performance issues on their site and problems with their site crashing. Once we were able to jump in and diagnose, we…

Visit Site Appliance Standards Awareness Project - ASAP

Appliance Standards Awareness Project (ASAP) organizes and leads a broad-based coalition effort that works to advance, win and defend new appliance, equipment and lighting standards which deliver…

Visit Site Georgetown University Qatar

We were contacted by the Georgetown University team in Qatar in late November 2016 regarding several of their sites that were using the Drupal Domain Access module. They had several requirements…

Visit Site Seattle Children's Alliance

Another Drupal 8 site upgrade! In June of 2016 we were approached by Seattle Children’s Alliance for a Drupal 5 to Drupal 8 migration. Their main concern was that their Drupal 5 site modules…

Visit Site Fred Hutchinson Cancer Research Center – HANC

We started working with another team at Fred Hutch in early 2016. They contacted us after they were needing some local TLC with their HIV/AIDS Network Coordination global CRM database (…

Visit Site DIYZ.com

We were approached in late 2015 by a marketing/design agency to take over this project. Their Drupal developer was on her way out and they needed some Drupal expertise. The website was just in…

Visit Site Jim Ovia Foundation

We originally worked with the main Jim Ovia stakeholder on a separate project, when she worked with a different giving organization. She then reached out to us in September of 2015 to let us know…

Visit Site World Vision Knit for Kids

When the team at World Vision approached us in late 2015 to work on a few of their Drupal sites, they also had their Knit for Kids website that was in WordPress. They wanted to take that site and…

Visit Site Second Nature Sports

Second Nature Sports is owned and operated by the same folks over at Locker Soccer Academy, so when their previous developer was closing shop, it was just natural to have them roll this site into…

Visit Site Locker Soccer Academy

In December of 2013, our friends at Locker Soccer Academy reached out to us regarding their soccer academy sites, which were already developed by a shop in Columbus, Ohio. The development shop was…

Visit Site World Vision mEducation Alliance

The product owners of the mEducation Alliance website contacted us in September, 2015 to provide ongoing monthly Drupal security and module updates. The site is a partnership between…

Visit Site World Vision Chinese/Korean Websites

World Vision decided to partner with Freelock in September of 2015 for their Chinese and Korean websites. Headquartered in Federal Way, Washington – this was a perfect fit! We took…

Visit Site Queen City Yacht Club

Our friends at Queen City Yacht Club approached us in April of 2015 regarding their Drupal 5 website. It was at the end of life and they wanted to upgrade. Their motivations were that they wanted…

Visit Site National Center for Science Education

In June of 2015, our colleague recommended our Drupal maintenance services to the National Center for Science Education. Our colleague was looking forward to a large project and just didn't have the…

Visit Site Snoqualmie Tribe

Snoqualmie Tribe contacted us in December of 2014 in desperate need to secure their website. It turns out, they were susceptible to the Drupalgeddon attack and needed the Drupal 7.34 core…

Visit Site Lease Crutcher Lewis

In December of 2013, we were contacted by Lease Crutcher Lewis to take over their hosting. Their site is a great contender for Drupal 8! They were also interested in our Drupal maintenance plan…

Visit Site Bonavita World

In early 2014, our friends at Bonavita came to Freelock requesting that we help manage their main website Espresso Supply. Soon thereafter, they wanted to launch a website for one of their brands…

Visit Site Washington Housing Alliance Action Fund

The Washington Low Income Housing Alliance Action Fund first talked to us about building a new site last October, but they were not ready to proceed. This summer they were finally ready, and used…

Visit Site Makah Community Portal

About the Makah Tribe

The Makah Tribe is located in Neah Bay, Washington. They had a custom portal for the Makah Tribe and the Neah Bay Community. Their website is the hub for the community,…

Visit Site Max Dale's Steak and Chop House

Max Dale's Steak House is a popular restaurant a few dozen miles up the road in Mt. Vernon, Washington. If you're not from this state, you might have heard of Mt. Vernon when a major Interstate…

Visit Site IslandWood

About IslandWood

IslandWood is a nonprofit educational…

Visit Site Northwest Wall & Ceiling Bureau

Freelock built an informational website for the Northwest Wall & Ceiling Bureau (NWCB), an international trade organization for the wall and ceiling industry. We delivered a snappy…

Visit Site Totem Ocean

Freelock teamed up with Eben Design to build a new site for Totem Ocean Trailer Express, a shipping company that has two voyages between Tacoma and Anchorage each week. In addition to being a…

I-TECH

The International Training and Education Center for Health (I-TECH), a non-profit collaboration between the University of Washington and the University of California, San Francisco, came to…

Bellingham School District

For the Bellingham School District, Freelock put together a large, multi-faceted Drupal site for district-wide information and standardized sites for schools within the district containing…

Visit Site Olympic Peninsula Tourism Commission

The Olympic Peninsula Tourism Board came to us in early 2009 to assist them in developing a visually stunning and highly functional website in a Content Management System. The site needed to be…

Latitudes in Learning

Latitudes in Learning creates personalized learning experiences that engage people and organizations in meaningful interactions with the world. Learning is one of the most difficult activities in…

Visit Site World Class Hunting

The Project

The owner of World Class Hunting approached Freelock to launch their website. They had worked with Drupal on a previous project, so they were familiar and enjoyed Drupal. The project…

Visit Site Joey Klein

The Project

Joey Klein's team approached Freelock to "rescue" their website. Through the course of their planning stages, the original developers found that they had reached a point where they did…

Visit Site LedgerSMB Web Site

LedgerSMB is an open source accounting and financial package. Freelock has used LedgerSMB for its bookkeeping practically since the project began, forking an earlier open source project called…

Visit Site Hydra Content Management System

Freelock worked closely with Consumer Media Network to create a content management system (Hydra) through which they can track assignments and submissions. We implemented a custom workflow system…

Visit Site DanceSafe

DanceSafe is a non-profit, harm reduction organization promoting health and safety within the rave and nightclub community. Local chapters consist of young people from within the dance culture…

Visit Site Answers for Elders

Freelock developed an informational website for Answers for Elders. Answers for Elders is an online resource for adults caring for elderly parents. Branded as the "Boomers' Online Community to…

Visit Site Ge•cko

About Geocko

Geocko builds powerful tools that save time and increase engagement for nonprofit organizations. They simplify tasks, so organizations can stay focused on running programs and…

Visit Site I-TECH Drupal 7 Upgrade

About I-TECH

The International Training and Education Center for Health (I-TECH) is a center in the University of Washington's Department of Global Health. I-TECH is headquartered in Seattle with…

Visit Site Nightingale-Bamford School

About Nightingale-Bamford School

The Nightingale-Bamford School is an independent all-female university-preparatory school founded in 1920.  With grades K-12, NBS is one of the top-ranked private…

Visit Site Lindbergh Gallery

About Lindbergh Gallery

Lindbergh Gallery is Erik Lindbergh's venue for selling aerospace sculptures, furniture and other custom art pieces he creates.  Erik Lindbergh is the grandson of Charles…

Visit Site RevEquip

About RevEquip

The RevEquip team is composed of foodservice consultants experienced in Revit and the creation of foodservice documents.  Revit Families are developed to meet the standards…

Visit Site Nia Technique, Inc.

About Nia Technique, Inc.

Nia is a sensory-based movement practice that draws from martial arts, dance arts and healing arts. Nia Technique Inc is built…

Visit Site Crossfit Games/PLY Interactive

In December 2011, Ply Interactive came to Freelock for assistance theming the Crossfit Games site, because of our Drupal expertise and the tight schedule for the project. This site was built from…

Visit Site Alaska Fishing Jobs Center

Scott Coughlin, a 24-season veteran of Alaska's salmon, herring and halibut fisheries, came to Freelock for help getting his site done. We picked up the pieces of the development project, yet…

Visit Site West Seattle Family Zone

The folks at WSFZ needed a directory-based website that was easy to maintain and easy to use. Catering to the families of this popular Seattle-area neighborhood, West Seattle, WSFZ wanted a clean…

Visit Site Cool Day Trips

Freelock was approached to build a website focused on the “Top 50” Day Trips from Seattle, based on rankings and reviews from users. Cooldaytrips.com includes a description of each destination,…

Visit Site Maltby Produce Markets, CSA

Our friends at Flower World came back to us for their latest project, Maltby Produce Markets CSA. Maltby Produce Markets grow a wide variety of fruits and vegetables in their fields, orchards,…

Visit Site Littlestar Prints

Littlestar Prints is one of our favorite sites. It combines some powerful drag-and-drop photo editing in the browser with e-commerce. We got to use both of our favorite software packages--Drupal…

Visit Site Organic Materials Research Institute (OMRI)

Another large scale project by Freelock Computing, the Organic Materials Research Institute tested our abilities to work with a variety of sources and the Drupal system. The project brought their…

Visit Site TerraBella Flowers - Organic Florist in Seattle

TerraBella Flowers & Mercantile specializes in European garden-style designs, using an assortment of local, organic, and sustainably grown flowers and is based in the Greenwood neighborhood…

Visit Site RadioFrame Networks

RadioFrame Networks came to Freelock in late 2008 for a web development project aimed at bringing their existing corporate static site content into the Joomla CMS. We worked with their existing…

WestSide Baby

WestSide Baby, in partnership with the Puget Sound community, provides essential items to local children in need by collecting and distributing diapers, clothing, toys and equipment. They partner…

Visit Site Booktrope

Booktrope's goal of getting online books to as many people as possible (for free!) was a project right up our alley, fitting nicely with our open source foundation.

From the beginning, we…

Visit Site Robinson Newspapers

Robinson Papers hired us to build a centrally-managed web site with different personalities for each of their papers but shared control. We built out a sophisticated system in Drupal, with…

Visit Site Riverview Community Church

Riverview originally came to Freelock to design and develop a website that would serve the needs of their rapidly growing church community. Originally a simple Joomla deployment, we have since…

Visit Site Phytec America

In early 2008, Phytec America brought Freelock on to maintain its Linux server systems and add an additional server. We configured a new server as an internal mail and file server, and converted…

Visit Site BlueView Technologies

BlueView Technologies initially hired us for Linux server administration in 2007. We have worked extensively with BlueView, expanding their IT infrastructure from one server which did everything…

Visit Site Running Wild Spirit Zen Cart

Running Wild Spirit is one of our earliest Zen Cart projects and has greatly contributed to the success and growth of their business. Based out of the small town of Maltby an hour and a half away…

Visit Site Seattle Jobs Scraper System

SeattleJobs.org hired Freelock to implement a system to scrape job listings from their customers' web sites. SeattleJobs.org lists jobs from about 45 member companies. In order for Seattle Jobs…

Visit Site Outdoor Research

Outdoor Research, an outdoor gear manufacturer, hired us to build their main public web site, along with a full administrative back end. We implemented a system that imports product data from…

Visit Site P5 Group Website

When P5 Group was looking to create a dynamic website, they looked to Freelock Computing to help. Using Drupal as a CMS platform, we created a site with multiple access levels for their wide…

Visit Site An XML-based Report Browser

One of Freelock, LLC's ongoing customers needed a web front-end for a proprietary reporting tool. This reporting tool could be configured to generate reports that ended up in Microsoft Excel.…

Custom Client Extranet

One of Freelock, LLC's ongoing customers needed a password-protected site that actually provided different content to different users, depending on the company or user group the user is part of…

Single-Source Help System

Over the course of creating help systems for multiple clients, Freelock, LLC developed a set of scripts that manage browse sequences, tables of contents, and cross-references for help systems.…

Aug 22 2019
Aug 22

22 Aug

Nick Veenhof

1xINTERNET made a great post about calculating how much you are giving back and contributing to Open Source. Open Source makes our business model possible, and some people in the company, certainly yours truly, have made a career out of it. Our CEO even started Dropsolid because he thought we could approach this differently. If this makes a difference for us, why not for some of our employees as well? Could it even have a big impact on recruitment?

Delivering Open Digital Experiences with Open Source components on our own Dropsolid Platform is also very interesting for clients. We are able to deliver quality projects for a budget one could once only dream of. The downside is that you are not the only one with this magical ability, so it is a competitive landscape. The question then is how you can stand out, how you can make that difference.

Let's honor 1xINTERNET by doing our math.

How do we contribute

Our contribution to the Drupal project can be divided into the same 3 areas:

  • Community work
  • Events, sponsorships, and memberships
  • Source code

Community work

During the last year and even this year we spent a lot of time helping the Belgian Drupal Community. We organised Drupal User Group meetings, helped with organising Drupalcamp Belgium, and are currently actively involved in organising Drupal Developer Days. Next to that, yours truly is also active with the European Splash Awards as an organisational member and as Chair of the Belgian Drupal Community. We also had several speakers from Dropsolid at all these events, and I have volunteered as Drupalcon Track Chair or Co-Chair for the DevOps track at both Drupal Europe and Drupalcon Amsterdam. Attending those meetings during the day or evening takes time, but it is well worth it. Next to that, we gave back to the community by going to Drupalcamp Kiev and Drupal Mountain Camp to share our knowledge.

Taking all of the above together, the time spent on community activities adds up to around 1 FTE. This includes the time spent organising these events, giving back through speaker preparations, and arranging the sponsorships and facilitating our employees to attend. For your information, in 2018 we had on average 65 people working for Dropsolid.

Sponsorships and memberships

Since 2018 we have invested in the following events and memberships:

  • Silver Sponsor for Drupal Europe 2018
  • Gold Sponsor for Drupalcamp Belgium
  • Gold Drupaljam XL 2019
  • Diamond Sponsor Drupalcon Amsterdam 2019
  • Organisation Member of Belgian Drupal Community
  • Organisation Member of the Drupal Association
  • Supporting Partner of the Drupal Association
  • Donation to the Promote Drupal Fund
  • Employing a Provisional member of the Drupal Security Team

In total we spent close to 1% of our total yearly spend on sponsorships, travel, and memberships related to the Drupal project.

Drupal.org contributions

Without a doubt, we actively contribute back to the Drupal.org source code. Some contributions also come from community events and other important milestones that matter for the community. All efforts matter, not just code.

We contributed our complete install profile, Dropsolid Rocketship, and all the paragraphs that come with it. Dropsolid has a dedicated R&D team that consists of 8 people. This team has a Drupal backend & frontend R&D engineer, a couple of DevOps engineers, 2 machine learning geniuses, and myself. Next to that, we supported one of our employees in his personal pet project, drupalcontributions.org. Let's take a look at that site to come up with some numbers.

We support over 30 projects and have over 207 credits, with 58 credits and 181 comments in the last 3 months alone. I dare say that we are ramping it up.

We currently rank no. 42 among the organisations that contributed most back to Drupal in the last 3 months. Compared to 1xINTERNET we have a bigger headcount, but nevertheless I am equally proud, just as they are, to see this happen and to make an impact.

Not only do we contribute with our own agenda, we also contract Swentel to spend 1 day a month on Drupal contributions without any agreed agenda. Just supporting 1 day a month accounted for 17 comments in the last 30 days alone. To make a point: just counting credits isn't fair. It's how you come together and find a common solution. Communication is key, and we support this with hard-earned money.

How much did we contribute since 2018?

So, from the 8 R&D persons, we contribute at least 1 FTE back to the Drupal ecosystem. Combined with the contributions flowing back from client projects by our development teams and the core contribution sponsorship for Swentel, we easily get to 2 FTEs contributed back on a total headcount of 65.

If we add up the efforts in community work, sponsorships, and memberships, next to the payroll, Dropsolid contributed an equivalent of roughly 4.5% of our annual spend in 2018. With Drupalcon Amsterdam 2019 happening with us as a Diamond Sponsor, this will be even bigger. We're aiming to send more than 20 payroll employees to Drupalcon and have 3 selected sessions! We're also not cutting back on our efforts to contribute back; on the contrary!

War on Talent

Being visible in the ecosystem with code contributions, event contributions, and sponsorships helps make us a company people want to work for. Obviously this is not the only motivation, but given that around a quarter of the employees in our technical teams are remote developers, a good company brand is important. We see that this works and helps us attract and retain highly skilled employees. We now have employees in countries such as Portugal, Slovenia, Poland, Ukraine, Romania, Bulgaria, ...

Lead, don't just follow

By actively participating, whether it is in code or events or anything that decides the future of Drupal, or for that matter Digital Experiences, you are ahead of the curve. This allows for more competitive offers towards clients and helps you win deals that might not be possible as a follower of the pack. Dropsolid believes in taking the lead when it comes to being the best Drupal partner in Greater Europe, and is proud to believe we are. Up to you to believe us or not ;-)

If you are a client or looking for a job and you are interested in hearing more, don't hesitate to contact us.

Aug 22 2019
Aug 22

The first part of this article described why and how the stakeholders of a project can contribute to Drupal. This developer-oriented article is a summary of the Drupal.org documentation for new code contributors. We will cover how to work on the issue queue, how to publish a project, and how to approach this process with Drupal 9 in mind.

Before we start, let’s just remember two things:

  • Contributing is easy and not only for senior developers. Even the smallest contribution matters; it is the sum of those contributions that makes the community and the codebase strong.
  • You are part of a community. A great way to get started with contribution is to find a mentor at a Drupal event, and there is always someone ready to help on the Drupal Slack #contribute channel.

The Drupal.org issue queue

While working on client projects, you might encounter core or contributed bugs or you may need a new feature.

Chances are that this issue has already been discussed in the issue queue, which covers Drupal core and all projects, including contributed ones. If your issue is not yet in the queue, use the issue summary template to file one. You might also check this page about creating good issues.

The main process for an issue is Active > Needs work > Needs review > Reviewed and tested by the community (RTBC) > Fixed > Closed.

When the issue is fixed or closed, the project maintainer will include the changes in the codebase (dev branch) and will most probably wrap it with several other closed issues into a release. Read about the semantic versioning model.
If you need the patch before the release, you can include it in your client project with Composer.
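
A common approach, sketched here on the assumption that you use the cweagans/composer-patches plugin (the module name, issue number, and patch URL below are hypothetical), looks like this:

composer require cweagans/composer-patches
# Then declare the patch in your project's composer.json, for example:
#   "extra": {
#     "patches": {
#       "drupal/example_module": {
#         "Issue #1234567: Fix the example bug": "https://www.drupal.org/files/issues/example-1234567-12.patch"
#       }
#     }
#   }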

If the issue has not been reviewed/tested yet, just make sure that you are using the right version of the Drupal core or contributed project.

Project Version

From there, the main tasks you can contribute to as a developer are:

  • When the issue needs review: manual or automated testing
  • When the issue needs work: re-roll or create a patch.

We will summarize the process of creating a patch below:

You can also find novice issues and continue your reading with the Novice code contribution guide and the New contributor tasks: programmers.

Create a patch

Now that the codebase is on GitLab, there has been a proof of concept for fixing a trivial issue on Drupal.org using only the issue queue and GitLab.

For most cases, you still need to use the following flow.

Create the patch from the issue branch

The branch is the one indicated in the Version field of the issue.

Drupal core: https://www.drupal.org/project/drupal/git-instructions
Contributed project: click on the Version control tab

Version control tab

Then follow the Git instructions on the page to create the patch. In short (a complete example session follows the list):

  • Create a new branch: git checkout -b [patch_branch]

  • Make your changes then review them with git diff

  • Add your changes with git add

  • Output them as a patch file
    git diff [base_branch] > [description]-[issue-number]-[comment-number].patch
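Putting it all together, a typical session might look like this (the branch name, issue number, and comment number are hypothetical):

git checkout -b 3012345-fix-block-title
# Edit the files, then review the changes.
git diff
git add .
git diff 8.x-1.x > fix-block-title-3012345-5.patch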

Submit the patch

Once your changes have been made, upload the patch to the issue, then include a comment that describes your solution and anything that will facilitate the review, like screenshots or an interdiff.

If automated testing is available, request a test of your patch, then set the issue status to “Needs review”.

Needs review

Contribute a full project

Workflow

There are several ways to go, but this procedure works well for us when working on a module (or theme) for a client project that could be contributed.

The first iteration is built straight in the client project repository with a generic name until testing of the related ticket is finished, so that:

  • reviewers can check all the code in one place and most likely they already have an environment running with exhaustive data for testing.
  • if the ticket is reopened, the process is simplified (there is no release to maintain or Drupal.org process involved).

Using a generic name still favours reusability among other client projects if we decide not to contribute it on Drupal.org yet. We can isolate it in a dedicated repository, then include it via Composer, as sketched below.
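For instance, the client project's composer.json could point to that dedicated repository. A minimal sketch, where the repository URL and package name are hypothetical and must match the name declared in the module's own composer.json:

"repositories": [
    {
        "type": "vcs",
        "url": "git@git.example.com:our-agency/generic_module.git"
    }
],
"require": {
    "our-agency/generic_module": "^1.0"
}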

Publish it on Drupal.org

1. Make sure that your project contains the following files

  • The composer.json file that contains the project metadata + system requirements (e.g. ext-json), third party libraries (e.g. solarium/solarium) and Drupal dependencies (e.g. drupal/search_api).
  • The [your_project].info.yml file with the Drupal dependencies, prefixed by their namespace (e.g. drupal:node or webform:webform_ui).
  • A readme with the use case, the dependencies, related projects (and how this one differs), a roadmap if the project is not feature complete.

2. Fix and check coding standards

  • Autofix with phpcbf: phpcbf --standard=Drupal --extensions=php,module,inc,install,test,profile,theme,css,info,txt,md /path/to/project
  • Check what needs to be fixed manually: phpcs --standard=Drupal --extensions=php,module,inc,install,test,profile,theme,css,info,txt,md /path/to/project
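The Drupal standard used by these commands is provided by the Coder package. If phpcs does not know about it yet, one way to register it is the following (the installation path may vary on your system):

composer global require drupal/coder
phpcs --config-set installed_paths ~/.composer/vendor/drupal/coder/coder_sniffer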

3. Create the project page

Most likely, it will be a Module or a Theme: https://www.drupal.org/project/add

Give it a name and a machine name.
Add the key points: project classification, description (the project readme is a good start) and organization. Also make sure to add other maintainers if you are working in a team.

4. Create a first, unversioned, dev release

  • Create a dev branch (e.g. 8.x-1.x) and push the code
  • Follow instructions on the Version control tab.
  • Not mandatory, but it is recommended to create a clean installation with the latest release of Drupal so you are sure about two things: all the Composer-based dependencies are resolved properly, and there are no side effects caused by the client project.
  • At this stage, other developers might include your code in the project with composer require drupal/your_project:1.x-dev
  • On your client project, as there is no semantic version release yet (e.g. 8.1.0), it is a good habit to add the commit hash so you keep control over the codebase that is actually used: composer require drupal/your_project:1.x-dev#commit

5. Create intermediate versioned releases

When the development is ready, you might create alpha and then beta releases.
Beta releases should not be subject to API changes.

  • Check out the latest dev branch commit: git checkout 8.x-1.x
  • Tag it: git tag 8.x-1.0-alpha1
  • Push it: git push origin tag 8.x-1.0-alpha1
  • Create the release and document it. This article from Matt Glaman provides great information about creating better release notes.

6. Create a stable versioned release

When there are no open bugs and the project is feature complete, a first stable release will tell site builders that the module is ready to be used (e.g. 8.1.0). Follow the same steps as in step 5.
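For example, mirroring the Git commands from step 5, tagging the stable release could look like this (version numbers are illustrative):

git checkout 8.x-1.x
git tag 8.x-1.0
git push origin tag 8.x-1.0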

7. Apply for permission to opt into security advisory coverage

Follow the security review process.

8. Create a new major release

If there are backward compatibility changes, creating a new major release (e.g. 8.2.x) will indicate to site builders and developers that there are BC breaks.

Read more with Best practices for creating and maintaining projects.

Be prepared for Drupal 9

TL;DR: Drupal 9 should be released in June 2020. The first version will be about removing deprecated APIs and getting new versions of third-party libraries (Symfony 4 or 5, Twig 2, …). So, if a Drupal 8 project does not use deprecated code, it will work in Drupal 9 :)

Drupal 8.7 started the deprecation work and 8.8 will define the full deprecation list.

Code contribution is needed there as well: on core, with issues like deprecating legacy include files, or by helping contributed modules become Drupal 9 ready and fixing contributed modules' Drupal 9 compatibility issues.

Codebase analysis tools

Several tools are here to help if you are contributing to issues or maintaining a project.

The Upgrade Status module scans the code of the contributed and custom projects you have installed, and reports any deprecated code that must be replaced before the next major version.

Drupal-check, a static analysis tool based on PHPStan, will check for correctness (e.g. using a class that doesn't exist), deprecation errors, and more.
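For example, assuming you installed the tool globally with composer global require mglaman/drupal-check, scanning a custom module for deprecated code could look like this:

# Report only deprecation errors for a custom module.
drupal-check -d modules/custom/my_module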

Read further about Drupal 9, how to prepare for it, and an analysis of the top uses of deprecated code in Drupal contributed projects in May 2019.

Developer toolbox

Here is a brief recap of the tools mentioned in this article that might help you while contributing:

  • Git and the Drupal.org issue queue for creating and reviewing patches
  • Composer for managing dependencies, dev releases and patches
  • phpcs and phpcbf with the Drupal coding standard
  • The Upgrade Status module and Drupal-check for Drupal 9 readiness

Other ways to contribute than code

There are more ways to contribute, even if you are not a developer: the Drupal community is also looking for help with documentation, design, translations, testing, event organization, and marketing.

Read more about contribution on the Getting Involved Guide.

Aug 21 2019
Aug 21

Today we will learn how to migrate content from LibreOffice Calc and Microsoft Excel files into Drupal using the Migrate Spreadsheet module. We will give instructions on getting the module and its dependencies. Then, we will present how to configure the module for spreadsheets with or without a header row. There are two example migrations: images and paragraphs. Let’s get started.

Example configuration for Microsoft Excel and LibreOffice Calc migration.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The image migration uses a Microsoft Excel file as source. The paragraph migration uses a LibreOffice Calc file as source. The CSV migration is a backup in case the Google Sheet is not available. To execute the last one you would need the Migrate Source CSV module.

You can get the Migrate Spreadsheet module using Composer: composer require drupal/migrate_spreadsheet:^1.0. This module depends on the PHPOffice/PhpSpreadsheet library and many PHP extensions, including ext-zip. Check this page for a full list of dependencies. If any required extension is missing, the installation will fail. If your Drupal site is not Composer-based, you will not be able to use Migrate Spreadsheet unless you jump through a lot of hoops.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Microsoft Excel and LibreOffice Calc migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Understanding the source document and plugin configuration

In any migration project, understanding the source is very important. For Microsoft Excel and LibreOffice Calc migrations, the primary thing to consider is whether or not the file contains a row of headers. Also, a workbook (file) might contain several worksheets (tabs). You can only migrate from one worksheet at a time. The example documents have two worksheets: UD Example Sheet and Do not peek in here. We are going to be working with the first one.

The spreadsheet source plugin exposes seven configuration options. The values to use might change depending on the presence of a header row, but all of them apply to both types of document. Here is a summary of the available configurations:

  • file is required. It stores the path to the document to process. You can use a relative path from the Drupal root, an absolute path, or stream wrappers.
  • worksheet is required. It contains the name of the one worksheet to process.
  • header_row is optional. This number indicates which row contains the headers. Contrary to CSV migrations, the row number is not zero-based. So, set this value to 1 if headers are on the first row, 2 if they are on the second, and so on.
  • origin is optional and defaults to A2. It indicates which non-header cell contains the first value you want to import. It assumes a grid layout and you only need to indicate the position of the top-left cell value.
  • columns is optional. It is the list of columns you want to make available for the migration. In case of files with a header row, use those header values in this list. Otherwise, use the default title for columns: A, B, C, etc. If this setting is missing, the plugin will return all columns. This is not ideal, especially for very large files containing more columns than needed for the migration.
  • row_index_column is optional. This is a special column that contains the row number for each record. It can be used as a unique identifier for the records in case your dataset does not provide a suitable value. Exposing this special column in the migration is up to you. If you do, you can come up with any name as long as it does not conflict with header row names set in the columns configuration. Important: this is an autogenerated column, not one of the columns that come with your dataset.
  • keys is optional and, if not set, defaults to the value of row_index_column. It contains an array of column names that uniquely identify each record. For files with a header row, you can use the values set in the columns configuration. Otherwise, use default column titles like A, B, C, etc. In both cases, you can use the row_index_column if it was set. Each value in the array will contain database storage details for the column.

Note that nowhere in the plugin configuration do you specify the file type. The same setup applies to both Microsoft Excel and LibreOffice Calc files. The library will take care of detecting and validating the proper type.

Migrating spreadsheet files with a header row

This example is for the paragraph migration and uses a LibreOffice Calc file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

book_id, book_title, Book author
B10, The definitive guide to Drupal 7, Benjamin Melançon et al.
B20, Understanding Drupal Views, Carlos Dinarte
B30, Understanding Drupal Migrations, Mauricio Dinarte
source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_book_paragraph.ods
  worksheet: 'UD Example Sheet'
  header_row: 1
  origin: A2
  columns:
    - book_id
    - book_title
    - 'Book author'
  row_index_column: 'Document Row Index'
  keys:
    book_id:
      type: string

The name of the plugin is spreadsheet. Then you use the file configuration to indicate the path to the file. In this case, it is relative to the Drupal root. The UD Example Sheet is set as the worksheet to process. Because the first row of the file contains the headers, header_row is set to 1 and origin to A2.

Then specify which columns to make available to the migration. In this case, we listed all of them, so this setting could have been left unassigned. Still, it is better to get into the habit of being explicit about what you import: if the file were to change and more columns were added, you would not have to update the migration to prevent unneeded data from being fetched. The row_index_column is not actually used in the migration; it is set only to show all the configuration options in the example. Its values will be 1, 2, 3, etc. Finally, keys is set to the column that serves as the unique identifier for the records.

The rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows the process and destination sections for the LibreOffice Calc paragraph migration.

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

Migrating spreadsheet files without a header row

Now let’s consider an example of a spreadsheet file that does not have a header row. This example is for the image migration and uses a Microsoft Excel file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

P01, https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02, https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03, https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg
source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_photos.xlsx
  worksheet: 'UD Example Sheet'
  header_row: null
  origin: A1
  columns:
    - A
    - B
  row_index_column: null
  keys:
    A:
      type: string

The plugin, file, and worksheet configurations follow the same pattern as the paragraph migration. The difference for files with no header row is reflected in the other parameters. header_row is set to null to indicate the lack of headers, and origin is set to A1. Because there are no column names to use, you have to use the ones provided by the spreadsheet. In this case, we want to use the first two columns: A and B. Contrary to CSV migrations, the spreadsheet plugin does not allow you to define aliases for unnamed columns. That means that you have to use A and B in the process section to refer to these columns.

row_index_column is set to null because it will not be used. And finally, in the keys section, we use the A column as the primary key. This might seem like an odd choice: why use that value if you could use the row_index_column as the unique identifier for each row? If this were an isolated migration, that would be a valid option. But this migration is referenced from the node migration explained in the previous example. The lookup is made based on the values stored in the A column. If we used the index of the row as the unique identifier, we would have to update the other migration or the lookup would fail. In many cases, that is neither feasible nor desirable.

Except for the names of the columns, the rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process and destination sections for the Microsoft Excel image migration.

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: B # This is the photo URL column.
destination:
  plugin: 'entity:file'

Refer to this entry to learn how to run migrations that depend on others. In this case, you can execute them all by running: drush migrate:import --tag='UD Sheets Source'. And that is how you can use Microsoft Excel and LibreOffice Calc files as the source of your migrations. This example is very interesting because each of the migrations uses a different source type. The node migration explained in the previous post uses a Google Sheet. This is a great example of how powerful and flexible the Migrate API is.

What did you learn in today’s blog post? Have you migrated from Microsoft Excel and LibreOffice Calc files before? If so, what challenges have you found? Did you know the source plugin configuration is not dependent on the file type? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 21 2019
Aug 21

Today we will learn how to migrate content from Google Sheets into Drupal using the Migrate Google Sheets module. We will give instructions on how to publish them in JSON format to be consumed by the migration. Then, we will talk about some assumptions made by the module to allow easier plugin configurations. Finally, we will present the source plugin configuration for Google Sheets migrations. Let’s get started.

Example configuration for Google Sheets migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The last one is a backup in case the Google Sheet is not available. To execute it you would need the Migrate Source CSV module.

You can get the Migrate Google Sheets module and its dependency using Composer: composer require drupal/migrate_google_sheets:^1.0. It depends on Migrate Plus. Installing via Composer will get you both modules. If your Drupal site is not Composer-based, you can download them manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Google Sheets migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources. In the next article, two of the migrations will be explained. They read from Microsoft Excel and LibreOffice Calc files.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from Google Sheets

In any migration project, understanding the source is very important. For Google Sheets, there are many details that need your attention. First, the module works on top of Migrate Plus and extends its JSON data parser. In fact, you have to publish your Google Sheet and consume it in JSON format. Second, you need to make the JSON export publicly available. Third, you must understand the JSON format provided by Google Sheets and the assumptions made by the module to configure your fields properly. Specific instructions for Google Sheets migrations will be provided. That being said, everything explained in the JSON migration example is applicable in this case too.

Publishing a Google Sheet in JSON format

Before starting the migration, you need the source from where you will extract the data. For this, create a Google Sheet document. The example will use this one:

https://docs.google.com/spreadsheets/d/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/edit#gid=0

The 1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk value is the workbook ID, which will be used later. Once you are done creating the document, you need to publish it so it can be consumed by the Migrate API. To do this, go to the File menu and click on Publish to the web. A modal window will appear where you can configure the export. Note that it is possible to publish the Entire document or only some of the worksheets (tabs). The example document has two: UD Example Sheet and Do not peek in here. Make sure that all the worksheets that you need are published, or export the entire document. Unless multiple URLs are configured, a migration can only import from one worksheet at a time. If you fetch from multiple URLs, they need to have homogeneous structures. When you click the Publish button, a new URL will be presented. In the example it is:

https://docs.google.com/spreadsheets/d/e/2PACX-1vTy2-CGzsoTBkmvYbolFh0UDWenwd9OCdel55j9Qa37g_earT1vA6y-6phC31Xkj8sTWF0o6mZTM90H/pubhtml

The previous URL will not be used. Publishing a document is a required step, but the URL that you get should be ignored. Note that you do not have to share the document. It is fine for the document to remain private to you as long as it is published. It is up to you whether you want to make it available to Anyone with the link or Public on the web, and potentially grant edit or comment access. The Share setting does not affect the migration. The final step is getting the JSON representation of the document. You need to assemble a URL with the following pattern:

http://spreadsheets.google.com/feeds/list/[workbook-id]/[worksheet-index]/public/values?alt=json

Replace [workbook-id] with the workbook ID mentioned at the beginning of this section, the one that is part of the regular document URL, not the published URL. The [worksheet-index] is an integer, starting at 1, that represents the order in which the worksheets appear in the document. Use 1 for the first, 2 for the second, and so on. This means that changing the order of the worksheets will affect your migration. At the very least, you will have to update the path to reflect the new index. In the example migration, the UD Example Sheet worksheet will be used. It appears first in the document, so the worksheet index is 1. Therefore, the exported JSON will be available at the following URL:

http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json

Understanding the published Google Sheet JSON export

Take a moment to read the JSON export and try to understand its structure. It contains much more data than what you need. The records to be imported can be retrieved using this XPath expression: /feed/entry. You would normally have to assign this value to the item_selector configuration of the Migrate Plus’ JSON data parser. But, because the value is the same for all Google Sheets, the module takes care of this automatically. You do not have to set that configuration in the source section. As for the data cells, have a look at the following code snippet to see how they appear on the export:

{
  "feed": {
    "entry": [
      {
        "gsx$uniqueid": {
          "$t": "1"
        },
        "gsx$name": {
          "$t": "One Uno Un"
        },
        "gsx$photo-file": {
          "$t": "P01"
        },
        "gsx$bookref": {
          "$t": "B10"
        }
      }
    ]
  }
}

Tip: Firefox includes a built-in JSON document viewer which helps a lot in understanding the structure of the document. If your browser does not include a similar tool out of the box, look for one in its extensions repository. You can also use a file formatter to pretty-print the JSON output.

The following is a list of headers as they appear in the Google Sheet compared to how they appear in the JSON export:

  • unique_id appears like gsx$uniqueid.
  • name appears like gsx$name.
  • photo-file appears like gsx$photo-file.
  • Book Ref appears like gsx$bookref.

So, the header names from the Google Sheet get transformed in the JSON export. They get a prefix of gsx$, and the header name is transformed to all lowercase letters with spaces and most special characters removed. On top of this, the actual cell value, the one that you will eventually import, is in a $t property one level under the header name. Now, you could create a list of fields to migrate using XPath expressions as selectors. For example, for the Book Ref header, the selector would be gsx$bookref/$t. But that is not the way to configure the Google Sheets data parser. The module makes some assumptions to make the selectors clearer: the gsx$ prefix and /$t hierarchy are assumed. For the selector, you only need to use the transformed name. In this case: uniqueid, name, photo-file, and bookref.

Configuring the Migrate Google Sheets source plugin

With the JSON export of the Google Sheet and the list of transformed header names, you can proceed to configure the plugin. It will be very similar to configuring a remote JSON migration. The following code snippet shows source configuration for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: google_sheets
  urls: 'http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json'
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: uniqueid
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo-file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: bookref
  ids:
    src_unique_id:
      type: integer

You use the url plugin, the http fetcher, and the google_sheets parser. The latter is provided by the module. The urls configuration is set to the exported JSON link. The item_selector is not configured because the /feed/entry value is assumed. The fields are configured as in the JSON migration with the caveat of using the transformed header values for the selector. Finally, you need to set the ids key to a combination of fields that uniquely identify each record.

The rest of the migration is almost identical to the JSON example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process, destination, and dependencies section for the Google Sheets migration.

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_microsoft_excel_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_microsoft_excel_source_image
    - udm_libreoffice_calc_source_paragraph
  optional: []

Note that the node migration depends on an image and paragraph migration. They are already available in the example. One uses a Microsoft Excel file as the source while the other a LibreOffice Calc document. Both of these migrations will be explained in the next article. Refer to this entry to know how to run migrations that depend on others. For example, you can run: drush migrate:import --tag='UD Sheets Source'.

What did you learn in today’s blog post? Have you migrated from Google Sheets before? If so, what challenges have you found? Did you know the procedure to export a sheet in JSON format? Did you know that the Migrate Google Sheets module is an extension of Migrate Plus? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Aug 21 2019
Aug 21

Do you like people who are warm and friendly, or cold and hostile? You’ve got it right! I’m comparing interactive to non-interactive (static) websites here. In this increasingly digital generation, it isn’t sufficient to place some content on your website and wait for it to work its magic. Providing a web user experience without interactivity is like opening a store filled with inventory without a salesperson to interact with.
When you create an interactive website, you are forming a connection with your audience. It propels two-way communication on a medium where you cannot directly interact with a user. Studies have shown that people are more likely to convert on, return to, or recommend websites that are interactive. The Drupal CMS offers a wide variety of interactive themes and modules that can be easily adapted to your website and further customized.

What is an interactive website?

Put simply, an interactive website is a website that communicates and allows for interaction with users. And by interaction, we don’t just mean allowing users to “click” and “scroll”. Offering users content that is amusing, collaborative and engaging is the essential objective of an interactive website. An interactive website design will not just display attractive content; it will exhibit interactive content: content that compels users to communicate and deeply engage with the website.
 

Interactive Website Designs Communicate & Engage with users

Why do you need one?

Today, all businesses in the digital market are racing to expand their audience. Most of them, however, forget that increasing traffic is simply not enough. Retaining and engaging users is what converts. Engaging your users should be your prime motive, and for this you will first need an interactive business website.

  • Drives more engagement. Interactive elements make your website less boring, thus garnering more action.
  • Users will spend more time on a website that interacts with them. This increases your conversion rate, decreases bounce rate and can boost the SEO of your website.
  • Develops a more personalized user experience that can result in happy users. 
  • Engaged users are more likely to maintain a long-term relationship with websites.
  • Interactive website designs can create lasting effects in user’s minds. This improves your brand awareness and reach. 
  • Interactive websites encourage users to recommend your website and link back to it.
  • More conversions mean you have a better chance of making a sale!

How to make interactive websites? 

Creating an interactive website from scratch is easier and more effective, as you envision and plan the customer journey from day one. Nevertheless, if you already have a website that you think is static or needs more interactive features, it is never too late. The first step is to define your business objectives and then identify the various touchpoints from which you can interact with your customers.
If budgets and timelines are constraints, you could also look at HTML5 interactive website templates (not recommended if you need customizations).
There are various interactive website features that can increase user engagement, but you should pick the ones that suit your business goals. For example, if you sell financial services, having an interest calculator on your website can prove very useful. Nonetheless, the most essential interactive feature that you just cannot ignore is responsiveness. Users will respond to your website on various devices only when it looks and feels presentable.
So what kind of interactive website features or elements can you utilize for your benefit?

  • Social Media Applications

There is no denying that social media marketing can give you visibility like no other marketing program if done right. Provide your users with an option to like and share your content on social media platforms like LinkedIn, Twitter or Facebook, or simply to follow your page. You can also display a live feed from your social media page to keep users updated.

  • Simple Interactive Tools

Offer your users simple interactive tools like quizzes, short games, math tools, tax calculators, etc. connected to your business objectives. Integrating simple software tools that give your users instant results has proven to boost user engagement.

  • Interactive Page Elements

You can enhance your page elements by adding something interesting and attractive to them. For example, colourful and dynamic hover states on links or images, on-scroll or on-click loading and animation, navigation with clicks on image stories, and much more. Add videos or animations to say more about your business in an interactive way.

  • Forms and Feedback

Allowing users to get in touch with you via a contact form is a great way to connect with them. Not only does it let you increase your database of leads, it is a nice way of saying “We care”. Feedback forms let you identify your strengths and weaknesses via the best source: your audience!

  • Chat Widgets

What’s better than a live person chatting with you, answering all your questions about the products or services being offered?! That’s probably the highest level of interactivity you can offer in an interactive business website. If live chat sounds like too much commitment, you could also opt for chatbots that can be configured to answer predictable questions.

  • User-generated Content

Letting users add their content to your website is a great way to improve interactivity. This can be done in the form of comments (in your blogs/articles section), inviting them to write guest posts, letting them submit images, or even creating a small discussion forum.

  • Other interactive website Features

You can get creative with the interactive features you want for your audience, but here’s a short list of commonly used interactive elements:

  • Google Maps makes your brand more trustworthy and provides a great way to improve interaction, especially when the maps are clickable.
  • Newsletters can keep your users coming back to your website for more updates.
  • Voting, and showing users the results of previous polls, helps increase engagement.
  • Search functionality spares users the pain of navigating through your whole website.
  • Ratings can be a quick and interactive method of getting instant feedback that can improve your products/services/work.
  • Slideshows offer a great way to engage users and can make them want to keep going to the next image.
Interactive Website Features and Elements

Drupal for Interactive Websites

When you build your website with Drupal, you will come across multiple options in the form of modules and features that can instantly turn your static website into an interactive one. With Drupal 8, responsiveness comes out of the box, which means that you don’t need any additional modules to make your Drupal website look great irrespective of the device. In addition, there are a variety of modules that encourage interactivity, like Search API, the Contact forms module, social media modules, slideshow modules, Simplenews (for newsletters) and much more!

Aug 21 2019
Aug 21

JS frameworks have changed quite a lot in Drupal, especially with the API-first concept adding to the scenario. It is only expected that developers are inclined towards learning more about JS and the related possibilities.

Recently, I was tasked with rendering blocks on a page while keeping the individual components encapsulated on their own. JavaScript has the potential to make web pages dynamic and interactive without hindering page speed, so I decided to opt for progressively decoupled blocks.

I came across the Progressive Decoupled Blocks Drupal module, which allows us to render blocks in a decoupled way and seemed a perfect fit for the situation.

Anatomy of Progressive Decoupled Blocks

The module is a JavaScript-framework-agnostic progressive decoupling tool that allows custom blocks to be written in the JavaScript framework of one’s choice. What makes it unique is that one can do all this without needing to know any Drupal API.

It keeps individual components compact in their own directories, each containing all the CSS, JS, and template assets necessary for them to work, and uses an info.yml file to declare these components and their framework dependencies to Drupal.

Did it work?

I decided to test the module in the new Umami demo profile, which comes out of the box with Drupal, using React and Angular blocks.

I used the Layout module and picked a page to place these blocks. This is one of the parts I liked best about the module: every piece of block content that needs to be placed on a page can be placed without even visiting the block page and setting visibility.

Module Architecture

The module has a custom block derivative, which searches for all components that have been added and exposes them as blocks. This architecture makes it super easy to add a new block.
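Based on that convention, adding a component boils down to creating a directory with the assets and an info.yml file. A hypothetical layout, with illustrative file names, might look like this:

components/
  my_banner/
    my_banner.info.yml   # Declares the component and its framework dependency to Drupal.
    my_banner.js         # The JavaScript that renders the block.
    my_banner.css        # Optional styling for the component.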

You can refer to this file for the block derivative: https://git.drupalcode.org/project/pdb/blob/8.x-1.x/src/Plugin/Derivative/PdbBlockDeriver.php

And to this one for component discovery: https://git.drupalcode.org/project/pdb/blob/8.x-1.x/src/ComponentDiscovery.php

Possibilities

For React, there are two examples in this module.

The first uses a simple React createElement call, without JSX. It renders simple text inside a React component. This is a very good starter example for anyone who wants to understand how to integrate a React component with Drupal in a progressive way.

The second example is a to-do app, which allows you to add and remove to-do items in a list. It makes use of local storage to store any item that has been added. This app creates React components, composes them into other components, and renders a fully functional DOM inside the Drupal DOM in a progressive way.

This example comes with a package.json whose dependencies need to be installed before it is functional. However, the component did not render perfectly on the Umami theme, so I made certain changes to make sure it renders correctly. The patch is attached at the end.

progressive decoupled block

I decided to extend this module and added a new component to render a banner block (which comes out of the box with Umami), exposing its content as an API via JSON:API. As this was a very simple block, I decided to create it without JSX. Also, I decided to generate the URL with a static ID in the path. This can, however, be made dynamic, which I plan to do later in my implementation. I also decided to use the same class names to save on styling. The classes can be found in the Umami theme profile.

(function ($, Drupal, drupalSettings) {
  Drupal.behaviors.pddemo = {
    attach: function (context, settings) {
      // Fetch the banner content from the JSON:API URL exposed in drupalSettings.
      $.ajax(drupalSettings.pdb.reactbanner.url, {
        type: 'get',
        success: function (result) {
          var image = result.included[0].attributes.uri.url;
          var title = result.data.attributes.field_title;
          // Build the banner markup with React.createElement (no JSX) and
          // mount it on the component's container element.
          ReactDOM.render(React.createElement(
            'div',
            {
              className: 'block-type-banner-block',
              style: {backgroundImage: 'url(' + image + ')'}
            },
            React.createElement(
              'div',
              {className: 'block-inner'},
              React.createElement(
                'div',
                {className: 'summary'},
                React.createElement(
                  'div',
                  {className: 'field--name-field-title'},
                  title
                )
              )
            )
          ), document.getElementById('reactbanner'));
        },
        error: function (XMLHttpRequest, textStatus, errorThrown) {
          console.log(textStatus || errorThrown);
        },
      });
    }
  };
})(jQuery, Drupal, drupalSettings);


Angular JS

Angular JS comes with many more examples, both easy and complex. Things didn’t work as smoothly as they did with React; there were a couple of changes required to make it work.

The patch is attached at the end; you can refer to this gist for it.

You will have to install the Node modules first to make it work. Also, all the JS is written in TypeScript, which needs to be compiled to JS before you can use it.

Conclusion

The module gives you a great kickstart to move any block, rendered by Drupal, to a progressively decoupled block, using React, Angular or Ember as JS framework. However, you may want to extend the module or create your own to render the blocks.

Aug 21 2019
Aug 21

Today we will learn how to migrate content from an XML file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and from remote locations. We will also talk about the difference between the two data parsers provided by the module. The example includes node, image, and paragraph migrations. Let’s get started.

Example configuration of XML source migration

Note: Migrate Plus has many more features. For example, it contains source plugins to import from JSON files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing XML files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD XML source migration whose machine name is ud_migrations_xml_source. It comes with four migrations: udm_xml_source_paragraph, udm_xml_source_image, udm_xml_source_node_local, and udm_xml_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain XML migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from XML. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The Definitive Guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
</data>

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from an XML file

In any migration project, understanding the source is very important. For XML migrations, there are two major considerations. First, where in the XML tree hierarchy the data that you want to import lies. It can be at the root of the file or several levels deep in the hierarchy. You use an XPath expression to select a set of nodes from the XML document. In this article, we use the term element when referring to an XML document node, to distinguish it from a Drupal node. Second, when you get to the set of elements that you want to import, what child elements are going to be made available to the migration. It is possible that each element contains more data than needed. In XML imports, you have to manually include the child elements that will be required for the migration. The following code snippet shows the part of the local XML file relevant to the node migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_people>
    ...
  </udm_people>
</data>

The set of elements containing node data lies two levels deep in the hierarchy, starting with data at the root and then descending one level to udm_people. Each of these elements contains four children:

  • unique_id is the unique identifier for each element returned by the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local XML file for the node migration:


source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to xml. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

Technical note: Migrate Plus provides two data parser plugins for XML files. xml uses XMLReader while simple_xml uses SimpleXML. The parser to use is configured in the data_parser_plugin configuration. Also note that when you use the xml parser, the data_fetcher_plugin setting is ignored. More details below.

The item_selector configuration indicates where in the XML file the set of elements to be migrated lies. Its value is an XPath expression used to traverse the file hierarchy. In this case, the value is /data/udm_people. Verify that your expression is valid and selects the elements you intend to import. It is common for it to start with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the subtree specified by the item_selector configuration. In the example, the fields are direct children of the elements to migrate. Therefore, the XPath expression only includes the element name (e.g., unique_id). If you had nested elements, you could use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_xml_source_image
    - udm_xml_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two XML migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from an XML file

Let’s consider an example where the elements to migrate have many levels of nesting. The following snippets show part of the local XML file and source plugin configuration for the paragraph migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The Definitive Guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
</data>
source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. In particular, the book_details element has two children: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy to find the value that should be assigned to the field. Each level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the XML file is to give each record two children: paragraph_id would contain the unique identifier for the record, and paragraph_data would contain a child element to specify the paragraph type, plus an arbitrary number of extra child elements with the data to be migrated. In the process section, you would iterate over the children to map the paragraph fields.
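A sketch of that structure, with hypothetical element names, could look like this:

<udm_paragraphs>
  <paragraph_id>PG01</paragraph_id>
  <paragraph_data>
    <paragraph_type>book</paragraph_type>
    <title>The Definitive Guide to Drupal 7</title>
    <author>Benjamin Melançon et al.</author>
  </paragraph_data>
</udm_paragraphs>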

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from an XML file

Let’s consider an example where the elements to migrate have more data than needed. The following snippets show part of the local XML file and source plugin configuration for the image migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
</data>
source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the elements with image data have extra children that are not used in the migration. Particularly, the photo_dimensions element has two children representing the width and height of the image. To ignore this subtree, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/width and photo_dimensions/height, respectively.

XML file location

Important: What is described in this section only applies when you use either (1) the xml data parser or (2) the simple_xml parser with the file data fetcher.

When using the file data fetcher plugin, you have several options to indicate the location of the XML files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/xml_files/example.xml.
  • Use an absolute path pointing to the XML location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/xml_files/example.xml.
  • Use a fully-qualified URL to any built-in wrapper like http, https, ftp, ftps, etc. For example, https://understanddrupal.com/xml-files/example.xml.
  • Use a custom stream wrapper.

Being able to use stream wrappers gives you many more options. For instance, you can read files from Drupal's public (public://) or private (private://) file system, or from any other stream wrapper registered on the site, as sketched below.
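
A minimal sketch, assuming the XML file had been uploaded to Drupal's public file system at the hypothetical path migrations/udm_data.xml:

source:
  plugin: url
  # Ignored by the 'xml' data parser; it reads the URLs directly.
  data_fetcher_plugin: file
  data_parser_plugin: xml
  urls:
    - public://migrations/udm_data.xml
  item_selector: /data/udm_photos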

Migrating remote XML files

Important: What is described in this section only applies when you use the http data fetcher plugin.

Migrate Plus provides another data fetcher plugin named http. Under the hood, it uses the Guzzle HTTP Client library. You can use it to fetch files using any protocol supported by curl like http, https, ftp, ftps, sftp, etc. In a future blog post we will explain this data fetcher in more detail. For now, the udm_xml_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin, data_parser_plugin, and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote XML file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  # 'simple_xml' is configured to be able to use the 'http' fetcher.
  data_parser_plugin: simple_xml
  urls:
    - https://sendeyo.com/up/d/478f835718
  item_selector: /data/udm_people
  fields: ...
  ids: ...

And that is how you can use XML files as the source of your migrations. Many more configurations are possible when you use the simple_xml parser with the http fetcher. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

XMLReader vs SimpleXML in Drupal migrations

As noted in the module’s README file, the xml parser plugin uses the XMLReader interface to incrementally parse XML files. The reader acts as a cursor going forward on the document stream and stopping at each node on the way. This should be used for XML sources which are potentially very large. On the other hand, the simple_xml parser plugin uses the SimpleXML interface to fully parse XML files. This should be used for XML sources where you need to be able to use complex XPath expressions for your item selectors, or have to access elements outside of the current item element via XPath.
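
As a sketch of the kind of selector that requires simple_xml, consider an XPath expression with a predicate. The file path and element names here are hypothetical:

source:
  plugin: url
  data_fetcher_plugin: file
  # Only 'simple_xml' supports complex XPath expressions like this one.
  data_parser_plugin: simple_xml
  urls:
    - modules/custom/my_module/sources/books.xml
  # Select only book elements whose year child is greater than 2000.
  item_selector: '/data/book[year > 2000]'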

What did you learn in today’s blog post? Have you migrated from XML files before? If so, what challenges have you found? Did you know that you can read local and remote files? Did you know that the data_fetcher_plugin configuration is ignored when using the xml data parser? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.


Next: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 20 2019
Aug 20

Welcome to Mediacurrent’s Open Waters, a podcast about open source solutions. In this episode, we catch up with Cristina Chumillas. Cristina comes from the design world and is passionate about front-end development. She works at Lullabot (though when we recorded this, she worked at Ymbra) and has been involved in the Drupal community for years, contributing with code, design, and organizing events. Her contributions to Drupal Core are mainly focused on front-end, design and UX. Nowadays, she's a co-organizer of the Drupal Admin UI & JS Modernization Initiative and a Drupal core UX maintainer.

Audio Download Link

Project Pick

 Claro

Interview with Cristina Chumillas

  1. Tell us about yourself: What is your role, who do you work for, and where are you from?
  2. You are a busy woman, what events have you recently attended and/or are scheduled to attend in the near future?
  3. Which Drupal core initiatives are you currently contributing to?
  4. How does a better admin theme UI help site owners?  
  5. What are the main goals?
  6. Is this initiative sponsored by anyone? 
  7. Who is the target for the initiative? 
  8. How is the initiative organized? 
  9. What improvements will it bring in a short/mid/long term?
  10. How can people get involved in helping with these initiatives?

Quick-takes

  •  Cristina contributed to the Out Of The Box initiative for a while, together with podcast co-host Mario
  • 3 reasons why Drupal needs a better admin theme UI: Content Productivity, savings, less frustration
  • Main goals: We have 2 separate paths: the super-fancy JS app that will land at an undefined point in the future, and Claro as the new realistic & releasable short-term work that will introduce improvements on each release.
  • Why focus on admin UI?  We’re focusing on the content author's experience because that’s one of the main pain points mentioned in an early survey we did last year.
  • How is the initiative organized? JS, UX&User studies, New design system (UI), Claro (new theme)
  • What improvements will it bring in a short/mid/long term? Short: New theme/UI, Mid: editor role with specific features, autosave, Long: JS app. 

That’s it for today’s show, thanks for joining us!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast for more episodes and to subscribe to our newsletter.

Aug 20 2019
Aug 20

Firstly - what does End-of-Life mean?

When your version of Drupal reaches end of life, it doesn’t mean that your website will stop working immediately. It does, however, mean that you will be running a version that is no longer supported. Practically speaking, it means that there will cease to be any security updates provided for Drupal 7 core by the Drupal security team. Without security updates your website and hosting infrastructure could be vulnerable to new exploits/hackers. 

What are the options for your Drupal 7 website?

By far the best option is to upgrade your Drupal 7 site to the most current version of Drupal prior to the end-of-life date. At the time of writing, we are building all our new projects on Drupal 8 and that’s really the only option at this point. Fortunately we’ve gone through the upgrade process with a number of clients already and have some tools at our disposal to make the process as streamlined as possible. 

Now you may be asking: shouldn’t I just wait until Drupal 9 is released, so I don’t have to upgrade again so soon after upgrading to 8? The short answer is: no.

Due to the modern architecture employed by Drupal 8, the upgrade path to Drupal 9 (and beyond) is vastly improved and will be a much smoother process. In Dries Buytaert's (Drupal's Founder) words, “Drupal 9 will be released in 2020, and it will be an easy upgrade.”

If the cost of an upgrade from Drupal 7 to Drupal 8 or 9 is prohibitive for your organization, another option would be to leave your site on Drupal 7 and secure a commercial Long-Term Support agreement (while saving your shekels for an upgrade). There will be approved third-party commercial vendors offering LTS security support for Drupal 7 core (and the major contributed modules), just as there are now for Drupal 6, but there are no guarantees as to how long Drupal 7 will be supported by LTS vendors. By going this route you would also be missing out on some of the great benefits of Drupal 8, which include:
 

  • Modern architecture that will allow for much easier future upgrades
  • Enhanced authoring experience with inline editing & layout builder
  • Core support for responsive images
  • Core support for multilingual websites
     

What are the options for your Drupal 8 website?

If you’ve already invested in an upgrade or a fresh build on Drupal 8, then you’re in a great position! Because Drupal 9 is being built on the same architecture as Drupal 8, the upgrade path is straightforward. The most frequently used contributed modules will be usable in both Drupal 8 and 9 which is not how it worked with major version releases in the past where we were waiting months or sometimes years for modules to be ported over to the latest version of Drupal. 

You will not be looking at a full rebuild of your site or waiting for contributed modules to be updated for compatibility with Drupal 9. Drupal 9 is really just Drupal 8 with updates for third-party dependencies and with deprecated code removed.

So if your site is already on Drupal 8, the best thing you can do to make your site easy to upgrade is to keep your Drupal 8 site up-to-date. That means staying on top of the minor version releases and security updates. That way when the time comes, an upgrade to Drupal 9 will be akin to a Drupal 8 minor version upgrade. This is great news! 
 

How can Fuse help?

We’d love to hear from you in any of the following scenarios:

  • You have questions about your Drupal site that this post hasn’t answered for you 
  • You have a Drupal 7 site that needs to be upgraded and want to discuss the options
  • You have a Drupal site that needs ongoing security monitoring and updates
  • You are starting on a new Drupal project and require migration of content from another Drupal site or CMS
Aug 20 2019
Aug 20

Open Source is increasingly popular and many companies actively contribute to Open Source projects. A lot of research is ongoing on why contributing is valuable for companies, and Dries also wrote a blog post about "The investment case for employing a Drupal core contributor".

In this blog post I want to describe why 1xINTERNET contributes and the rationale behind it.

Then I will put this in perspective of contributions to the Drupal project. 

Finally I’m interested in hearing from others how they are contributing and how they calculate the value of contribution. 

What’s your math?

Community work

In 2018 we spent a lot of time organising Drupal Europe together with other community members. We also actively participated in Drupal camps as well as in various boards such as the Drupal Association board, the board of the German Drupal Business Association, and the board of the Icelandic Drupal Association. We help promote Drupal in Germany and Austria by organising the Splash Awards and we encourage our employees to actively participate in Drupal events.
In 2018 we had on average 20 employees. When we calculate the time spent on community activities, it adds up to ~5% of our workforce which is equivalent to ~1 full-time-equivalent (FTE).

Sponsorships and memberships

In 2018 we invested in the following events and memberships:

  • Sub sponsor at Frontend United in Utrecht, Netherlands
  • Gold sponsor at Drupal Dev Days in Lisbon, Portugal
  • Bronze sponsor at DrupalCamp Essen, Germany
  • Co-organisation of Drupal Europe in Darmstadt, Germany
  • Organisation member of the German Drupal Association
  • Founding member of the German Drupal Business Association
  • Premium Supporting Partner of the Drupal Association
  • Donation to the Promote Drupal fund

In total we spent ~1% of our total budget in sponsorships and memberships related to the Drupal project.

Source code contribution

We also actively contribute source code to the Drupal Project. Some projects we strongly support. For others we contribute patches.
We support over 13 projects and have over 406 credit counts. We are currently ranked no. 42 among the organisations that contribute most back to Drupal. We are really proud of this considering the size of our company is currently “only” 28 employees.

How much did we contribute in 2018?

If we add up our efforts in community work, sponsorships and memberships, and source code contribution, 1xINTERNET contributes the equivalent of ~7.5% of our annual budget to Drupal.

Why do we contribute?

Visibility and partnerships

1xINTERNET has a lot of visibility in the Drupal ecosystem. We can easily transport our marketing messages and position ourselves as experts delivering ambitious Drupal and React projects. Because we are recognised as experts, we cooperate with some of the largest organisations using Drupal and get to work on challenging projects.

Recruitment

Team event in Conil, Spain

For us it is easy to recruit Drupal talent because other members of the community notice our contribution and want to participate in shaping some of the important parts of the Drupal project. Our employees see benefits for personal growth as well as increasing their skill sets by working on the Drupal project.

Also our team is diverse with 30% female staff, over 15 nationalities, and 5 different religious views. Our company language is ("broken") English and we have offices in Germany (Frankfurt and Berlin), South of Spain, and Iceland.

Because we can recruit top talent, we can grow our team the way we want to have it. We have a very strong collaborative spirit within our company and friendships between team members. This all helps us to build the best team and would not be possible without the visibility through our contribution.

Project delivery and development expertise

Contributing to Drupal has direct impact on the quality of our workforce.

Through source code contribution, our developers constantly increase their development skills, as their code is being reviewed by other developers. From this our clients benefit directly, because we can deliver higher quality software solutions.

The same is true for community work, because giving public talks, mentoring others, or organising events helps our employees increase their personal, organisational, and management skills.

Sales

Lastly we directly benefit with sales. We actively get invited to participate in tenders. We also get direct client requests for Drupal development, digital consulting, Drupal trainings, and Drupal audits. 

Do we contribute enough? What is our math?

Given that we spend 7.5% of our budget, which gives us strong advantages in visibility, recruitment, sales, and collaboration, we feel that this is sensible spending.

However, we don’t know how our advantages would change, if we contributed differently.

Questions we ask ourselves are:

  • Does it make sense to spend 7.5% of the total budget on Drupal?
  • Would we get the same kinds of benefits if we spent less on contribution and more on marketing activities?
  • Would we benefit more if we contributed more code and did less sponsoring?
  • Or should we maybe spend even more of our total budget to further increase our competitive advantages?
  • What about our competitors? We often ask ourselves why they should benefit from our contribution while contributing much less to the Drupal project.

When trying to answer these questions we have also looked at research. There is a lot of research going on at Harvard and other universities. Most of the findings confirm our observations of how we as a company benefit from contribution.

In “Learning by Contributing: Gaining Competitive Advantage Through Contribution to Crowdsourced Public Goods” the author Frank Nagle from Harvard writes:

Quote from Frank Nagle

Is this model scalable? If we spend 15% of our budget in 2019, will we see even more increase in 2020? If this works so well for us, why are other companies not doing the same?

We can start by looking at other companies in the current Drupal contribution ecosystem to see if we can find a pattern or a solution to make this scalable to both grow the ecosystem as well as the contribution to the project.
 

What's your math?

Call to action - what's your math?

In this blogpost we have laid down how much our company contributed to Drupal in 2018 and how we calculate the value of our contribution. We have analysed source code contributions to Drupal, community work and sponsorships.

We see a lot of great companies contributing to Drupal, so many people are thinking about how much to contribute.

We don’t know if we should contribute more, or if we should contribute differently.

In order to make better decisions, we need your input. We want to learn how you value contribution. Please share what your math is.

I think this is a valuable discussion to have in our community. It is important for all members to understand the value of contributing, so we can grow as companies and together let Drupal prosper into the future.

Please respond with a blog post and let me know, either via Twitter at @baddysonja, by email to [email protected], or with a comment below. I will share your posts and link them in this post.

Aug 20 2019
Aug 20

It is no mystery why Drupal has been the chosen one for over a million diverse organizations all across the globe. Unsurprisingly, the reason behind the success of this open-source software is the devoted Drupal community: a diverse group of individuals who relentlessly work towards making Drupal stronger and more powerful every single day! To them, Drupal isn’t just a web CMS platform - Drupal is a religion. A religion that unites everyone who believes that giving back is the only way to move forward, and where contributing to the Drupal project gives them meaning and purpose.

Recently, I had the privilege of interacting with a few of the most decorated and remarkable members of the Drupal community - who also happen to be Drupal’s top contributors. I asked them about the reason(s) behind their contributing to Drupal and what they do to make a difference. Their responses were incredible, honest and unfeigned.


Adrian Cid Almaguer

Senior Drupal Developer. Acquia Certified Grand Master - Drupal 8

I use Drupal every day and my career in recent years has been focused on it, so I want to work with something that I feel comfortable with and that meets my needs. If I find errors or something that can be done in a better way in projects I'm using or in Drupal core, I open an issue in the project queue, and if I have the knowledge and the time, I create a patch for it. This is a way I can say THANKS to the Drupal community.

The strength of Drupal is the community and the contributed modules you can use to create your project. One person can’t create and maintain all the modules you will need, but if several of us take on the task of doing it, everything becomes easier. And it is not just code: we need documentation, we need examples, translations and many other things in the community. The only way to do this is if each Drupal user gives at least a small contribution to the community. So, when I contribute to Drupal, I’m helping you to have time to contribute to something that I may need in the future.

I maintain many Drupal modules, so the main contributions are creating, updating and migrating Drupal modules, but I contribute in other areas too. I contribute by translating Drupal into Spanish and moderating the user translations, I create patches for some projects I do not maintain, sometimes I review patches in the issue queue, I write and update module documentation, I make some contributions creating tests for Drupal modules, I give support to the community in the Slack channels and on the Drupal Stack Exchange site, and I help new contributors learn how to contribute projects to Drupal in the correct way. And as I’m a former teacher, I participate in regional Drupal events promoting how and why it is important to contribute to Drupal projects and how to do it.

I would love to maintain a Drupal core module but I don’t know if I will have the time to do it, so for the moment I will continue migrating to Drupal 8, evolving, and keeping up to date the modules I maintain.


Alex Moreno

Technical Architect at Acquia

Contributing to open source is not just a good and healthy habit for the communities. It is also a healthy habit for your own projects and your self-improvement. Contributing validates your knowledge by opening it to everyone else, so you can get feedback that helps you improve, and it also ensures that your project is taking the right direction, for example when patching other contributed modules with fixes or improvements.

I enjoy writing code. My main contributions have always been in that direction, although more recently I have also been helping with other tasks, like Spanish translations in Drupal 8 Umami.


Baddy Sonja Breidert

Co-Founder of 1xINTERNET

One of the reasons why I contribute to Drupal is to make Drupal more known in my area, get more people involved, attract new users, etc. I do my bit in contributing to the Drupal project by organising events like Drupal Europe and Drupal Camps in Germany and Iceland.

It is extremely gratifying to see new people from all over the world join the Drupal community - be it as developers, designers, volunteers, event organisers, testers or for example writing documentation. There are so many different ways to contribute!

And what happens over and over again is that people originally come for a very specific purpose, say a project they want to launch, and then stay in the community just because it is such a friendly, diverse and welcoming place! My work in the board of the Drupal Association confirms the old slogan over and over again: Come for the code, stay for the community!


Daniel Wehner

Senior Drupal Engineer at Times Higher Education

Unlike many other projects, the Drupal community tries to create a sustainable environment, both from the technical side and, probably more important in the long run, from the community side. Initiatives like Drupal Diversity & Inclusion lay the foundation for a project which won't just go away like many others.


Jacob Rockowitz

Drupal developer. Built and maintains the Webform module for Drupal 8

Contributing to open source software provides me with an endless collaborative challenge. My professional livelihood is tied to the success of Drupal, which inspires me to give something back to the Drupal community. Contributing to Drupal also provides me with an intellectual and social hobby where I get to interact with new people every day.

Everyone has a personal groove/style for building software. After 20 years of writing software, I have come to accept that I like working towards a single goal/project, which is the Webform module for Drupal 8. At the same time, I have also learned that building open source software is more than just contributing code; it is about supporting and creating a community around the code. Supporting the Drupal community has led me to also write documentation, blog about Drupal, Webform, and sustainability, present at conferences, and address the bigger picture around building and maintaining software.


Joel Pittet

Web Coder. Drupal 8 Theme System Co-maintainer

I feel that I should give back to ensure the tools I use keep working, monetarily or with my time. And with Drupal it’s a bit of both:

I started submitting patches for the Twig initiative for Drupal core, then mentoring and talks at DrupalCons and camps, followed by some contrib patches, then offered to co-maintain some commerce modules, which snowballed into more and more contrib module co-maintaining, mostly for ones I use at work.

I pay the Drupal Association individual membership to help the teams for all the Drupal.org work and event work they do.


Joachim Noreiko

Freelance Drupal developer. Built and Maintains Drupal Code Builder

I guess I like fixing stuff, I like to code a bit in my spare time, I like to contribute to Drupal, and as a freelancer, it’s good to be visible in the community.

Lately I’ve actually been feeling a bit demotivated. I’ve been contributing to core a bit, but it’s always an uphill struggle getting beyond an initial patch. I maintain a few contrib modules, and my Drupal Code Builder tool as well.


Joris Vercammen (borisson)

Drupal developer, Search API + Facets

Being able to pull so many awesome modules for free really makes the work we all do in building good solutions for our customers a lot easier. This system doesn’t work without some of us putting things (code/time/blogposts/…) back into it. The Drupal community has given me a lot of things unrelated to just the software as well (really awesome friends, a better job, the ability to travel all over Europe, etc.). To enable others that come after me to have a similar experience, I think that it is important to give back, as long as it fits in the schedule.

Most of my contributions are under the form of code. I try to do some mentoring but while that is a lot more effective, it is really hard and I’m not that great at it, yet. I’m mostly interested in the Search API ecosystem because that’s what I got roped in to when I started contributing. A lot of my core contributions are for blockers (of blockers of blockers) for things that we need. I try to focus a little bit on the Facets module, since that is what I’m responsible for, but it’s not always easy or the most fun to do. Especially since I’ve still not built a Drupal 8 site with facets on it.


Malabya

Open-source evangelist. Drupal Practice Head at Specbee

Community. That’s what motivates me to contribute. The feeling I get when someone uses my code, module, or theme is great, which is a good drive to motivate more contributions. Drupal being open-source software, it is where it is only because of the contributions of thousands of contributors. So, when we use Drupal it is our responsibility to contribute back to the software to make it even better for a wider reach.

Apart from contributing modules, themes & distributions, I help organise local meetups in Bangalore and mentor new developers to contribute and begin their contribution journey from the root level. It gives me immense pleasure when I can introduce someone to the world of Drupal and make them understand the importance of contributions and community. Going forward, I would definitely strive towards introducing Drupal to students, giving them a career choice and bringing more members into the Drupal community.


Nick Wilde

Drupal developer at Taoti Creative

My main motivation has always been improving what I use - my first OS contribution, before my Drupal days, was a bug fix for a then-abandoned project that was impairing my modding of TES-III Morrowind ;). I like the challenges and benefits of working in a community. Code reviews, both those I've done and those done on my code, have been incredibly important to my growth as a developer. I have also used contribution as a portfolio/career advancement method, although that is only of tertiary importance to me. Seeing a test go green or getting confirmation that a bug is fixed is incredibly satisfying to me personally. Also, I believe that if you use an open source project, especially professionally, contributing back is the right thing.

My level of contribution varies a fair bit depending on how busy I am personally and professionally, but it is mostly contrib module maintenance and patch submissions. Also, in the last year or so, I've been getting into a lot more mentorship roles, both in my new company and within the broader community. I restarted my local Drupal meetup and am doing presentations there regularly.


Rachel Norfolk

Community Liaison at Drupal Association

Contribution for me is, at least partly, a selfish act. I have learned so much from some of the best people in the industry, simply by following along and helping where I can. I have also built up an amazing network of people who, because they know I help others, are more prepared to help me when I need it. I contribute both code and in other ways: I’m occasionally in the Drupal core issue queues, I help mentor others and I get involved in community issues.


Renato Goncalves

Software Engineer at CI&T's Drupal Competence Office

My first motivation to contribute to the Drupal community is helping others that have the same requirements as mine. To be honest, I get very happy when someone uses my community code in their projects; I'm glad to know that I'm helping people. When I'm developing a new feature, I check if my solution can be useful to other projects, and in that way I write my code in a generic way. Usually, I'm the first to reuse the code several times. I think this is important to make Drupal a powerful and collaborative framework. I liked my first experience using the framework because for each requirement of my project, Drupal had a solution. I think contributing to the community is important for that: more and more new people are going to use the framework, and consequently become new contributors, and in that way it becomes increasingly powerful and efficient. An example of this is the Drupal Security Team, who work hard to ensure that Drupal is a secure framework. I make contributions at the same time as I deliver projects. Today I write my code in a generic way, that is, code that can be reused at other times. A good example of this model is the Janrain Connect project. This project is official in the community (a contrib project), and my team and I worked hard using 100% generic code, so we can reuse this code in other cases.

When we need to make some improvement in the code, the first point is checking a way to make this improvement using a generic solution. Using this approach we can help our project and help the community. In this way, we are contributing to making an organized and agile framework. The goal is that other people don't need to re-write code. It is a way of transforming the framework into a collaborative model.


Thomas Seidl

Drupal developer, “The Search API Guy”

My motivation comes from several sources: First off, I just like programming, and while fixing bugs, writing tests or giving support isn’t always fun, a lot of the time working on my modules is. It’s just one of my hobbies in that regard. Then, with my modules running on more than 100,000 sites (based on the report), there’s both a sense of accomplishment and responsibility – I feel proud in providing functionality for so many sites, and while, as a volunteer, I don’t feel directly responsible for them, I still want to help improve them where I can, take away pain points and ensure they keep running. And lastly, having a popular, well-maintained module is also the base of my business as a freelancer: it not only provides marketing for my abilities, but also the very market of users who want customizations. So, maintaining and improving my modules is also, indirectly, important for my income, even though the vast majority of my contributed work is unpaid.

Apart from participating in coding standards discussions, I almost exclusively contribute by maintaining my modules (and, increasingly rarely, adding new ones) – fixing bugs, adding features, answering support requests, etc. I sometimes also provide patches for other modules, but generally only when I’m paid to do so. (“My modules” being Search API and its add-on modules Database Search, Autocomplete, Saved Searches and, for D7 only, Solr, Pages, Location and Multi-Index Searches.)

And Lastly....

It’s not just brands that have adopted Drupal as their CMS – they are the cream of brands. From NASA to the Emmy Awards. From Harvard University to eBay. From Twitter to the New York State. These brands have various reasons to choose Drupal as their Content Management System. Drupal’s adaptability to any business process, advanced UX and UI capabilities for an interactive and personalized experience, load-time optimization functionalities, easy content authoring and management, high-security standards, the API-first architecture and so much more!

The major reason why Drupal is being accepted and endorsed by more than a million websites today is that Drupal is always ahead of the curve, especially since Drupal adopted a continuous innovation model wherein updated versions are released every 6 months with seamless upgrade paths. All of this is possible because of the proactive and ever-evolving Drupal community. The goals for their contributions may vary - from optimizing projects for personal/professional success to creating an impact on others or simply gaining more experience. Either way, they are making a difference and taking Drupal to the next level every time they contribute. Thanks to all the contributors who are making Drupal a better place.

I’d like to end with an excerpt from Dries - “It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base.”

Warmly thanking all the mentioned contributors for helping me put this article together.

Aug 19 2019
Aug 19

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Low-code and no-code tools for the web are on a decades-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

Low code no code

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web.

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019: "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Aug 19 2019
Aug 19

Experience

My experience with healthcare, Drupal, and webforms

For the past 20 years, I have worked in healthcare helping Memorial Sloan Kettering Cancer Center (MSKCC) evolve their digital platform and patient experience. About ten years ago, I persuaded MSKCC to switch to Drupal 6, which was followed by a migration to Drupal 8. More recently, I have become the maintainer of the Webform module for Drupal 8. Now, I want to leverage my experience and expertise in healthcare, webforms, and Drupal, to start exploring how we can improve patient and caregiver’s digital experience related to online appointment requests.

It’s important that we understand the problem/challenge of requesting an appointment online, examine how hospitals are currently solving this problem, and then offer some recommendations and ways to improve existing approaches. Instead of writing one very long blog post, I’m going to break up this discussion into a series of three blog posts. This initial post is going to address the patient journey and experience around an appointment request form.

These blog posts are not Drupal-specific, but my goal is to create and share an exemplary "Request an appointment" form template for the Webform module for Drupal 8.

Improving patient and caregiver’s digital experience

Improving the patient and caregiver digital experience is a very broad, massive, and challenging topic. Personally, my goal when working with doctors, researchers, and caregivers is…

Make it easier for patients to find the care they need as well as make it easier for caregivers to give the care that patients need.

Making things "easy" for patients and caregivers in healthcare is easier said than done. Healthcare systems are complex machines with multiple players dealing with different siloed technologies. Complexity in healthcare leads to a lot of frustration, which has become expected and accepted when entering into a doctor's office or hospital, or going online to file an insurance claim. The best way to make something easier is to simplify it.

Patients and caregivers want a simpler digital experience

In technology, the biggest disruptors of different industries share a commonality of making things simpler. Ride-sharing services, like Uber and Lyft, have disrupted the taxi and limousine industry by making it simple and easy to book a ride using a phone. In healthcare, services like ZocDoc empower patients to book doctor appointments on a computer or phone. Healthcare systems and hospitals need to recognize that a simpler digital and in-person experience is what consumers want. Simpler digital experiences need to be thought about and provided by all aspects of a healthcare organization. Stepping back, we need to remember that digital is one part of a patient’s journey through a healthcare system.

Waiting room

Where does a patient’s journey begin?

There are dozens of doors that a patient may enter to get care; it could be a referral from a friend, an ambulance pulling into an ER, a banner advertisement, or a comprehensive search on the internet. Online searches inevitably lead to a hospital's website. A hospital's website is where many patients' digital journeys begin.

A hospital's website is its digital entrance to the hospital

The notion that a hospital should envision their website as another entrance to the building gives people a tangible comparison for a patient's digital experience. The hospital should view the digital and the physical experiences as equally important. A hospital's lobby and website both need to be clean, organized, and easy to navigate. When a patient walks into a hospital or clicks on a link, we want them to know where they need to go as well as how to get the care they need. Both the website and the lobby need to have clear, straightforward instructions lest they find they have a frustrated and annoyed patient in their midst.

To get care, patients need to provide information about themselves: Who are they? What is wrong? Do they have insurance? These questions are answered using a paper or digital form, which begins a patient's journey.

Forms are important to healthcare

Healthcare is driven by information. This information needs to be collected and stored. The more information that can be collected about a patient, the easier it is to provide them with the care they need. At the same time, it is equally important to collect the "right" information about a patient at the "appropriate time." Obtaining the "right" information requires asking the "right" questions and collecting the most complete and accurate answers. Knowing when is the "appropriate time" to ask a question is even more challenging. For example, collecting a patient's entire medical history is not required to book a patient's first appointment. Many patients' digital experiences begin with an appointment form.

An appointment form begins or ends a patient's journey

Many years ago, while discussing the usability of some minor aspect of a request an appointment form, I noted that:

If someone wants to request an appointment, they are going to fill out this form no matter what.

There is some truth to this statement because patients have lots of patience (pun intended) when dealing with healthcare. Would an unusable request an appointment form ultimately end a patient's journey? Probably not.

The hospital that provides the cleanest, simplest, and most accessible digital experience will distinguish themselves.

Appointment forms set the tone for a patient's experience with a hospital.

Types of appointment form

Booking an appointment online is not an easy task. Most hospital websites offer a "request an appointment" form that collects a potential patient's contact information, which leads to someone calling the patient back to "schedule an appointment." Services like ZocDoc allow patients to "schedule an appointment" without having to interact with a physical person. Even though we are primarily talking about a patient's digital journey, we can't forget that people still pick up the phone and call to schedule an appointment. We should never underestimate the power of a phone call. For example, parents of pediatric patients need the caring voice of a live person telling them everything is going to be okay.

We are going to focus on a hospital's request an appointment form, which is used to collect the information needed for a patient to schedule an appointment.

Recommendations

Recommendations for all forms

Before diving into healthcare-specific forms, it helps to step back and understand what makes a form easy to understand and use. Drupal and the Webform module follow industry-standard best practices around ensuring that forms are accessible to people with disabilities, available to mobile and desktop users, and provide a clean and predictable layout with standard error validation. For example, in the Webform module for Drupal, forms are laid out vertically with top-aligned labels, which have been shown to make it easier for users to complete forms.

My top three general recommendations for building webforms are:

Keep it simple. This will reduce frustration and ensure that users will complete the webform.

Group and organize labels and inputs. The visual and information layout of a form matters. Grouping related inputs with proper labeling help users to understand what information is needed to complete a form.

Communicate what information is expected, required, and optional. Users need visual indicators that show how to complete a form and with clear error messages when data is missing.
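
As a rough illustration of the second and third recommendations, here is a minimal sketch using the Webform module's YAML element syntax. All element names and labels are illustrative: a fieldset groups related inputs, and required flags communicate what is expected:

contact_information:
  '#type': fieldset
  '#title': 'Contact information'
  first_name:
    '#type': textfield
    '#title': 'First name'
    '#required': true
  last_name:
    '#type': textfield
    '#title': 'Last name'
    '#required': true
  phone:
    '#type': tel
    '#title': 'Phone number'
    '#description': 'Optional. We will only call to confirm your appointment.'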

Using the Webform module, or any other enterprise form builder, can significantly help when building accessible and usable forms. The challenge for building a request an appointment form is asking the right questions using the best approach.

The best approach for a request an appointment form

Figuring out who is filling out the form should be the first question on any request appointment form. Knowing this single piece of information makes it possible to present the user with a form that asks the right questions. Once we know who someone is, a form can be customized, or we can direct the user to the appropriate form. For example, international appointment requests tend to require additional contact information, including callback times, which can be better addressed in a dedicated form.
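
For example, the form could open with a single routing question. Below is a minimal sketch in Webform-style YAML; the element name and options are hypothetical:

requester_type:
  '#type': radios
  '#title': 'Who are you requesting an appointment for?'
  '#options':
    self: 'Myself'
    family_member: 'A family member'
    physician: 'I am a referring physician'
  '#required': true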

Request appointment forms ask questions that lead to conversations.

Request an appointment forms are conversations

Appointment request forms result in a call back from a person, which results in a conversation. Forms are essentially a digital conversation with an end user. Appointment request forms should start with general questions and lead to more specific information. The three questions that need to be answered are…

  • Who are you?
  • What brought you here today?
  • Do you have insurance?

Not surprisingly, these questions are identical to what a nurse or doctor asks when in-taking a new patient. These questions lead to the collection of different types of information which can be organized into groupings. For example, the answer to "Who are you?" is mainly contact information and the answer to "What brought you here today?" is healthcare information. Appointment request forms should guide patients through the discussion about their specific health-related issue.

In-person, when a nurse or doctor asks a question, based on the patient's answer, they will choose which question should or should not be asked next. Online forms can provide a similar user experience by using conditional logic to hide and show questions based on previous answers. Conditional logic can also be used to determine which questions should be required vs optional.
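
In the Webform module, this kind of conditional logic is expressed with #states. A minimal sketch, reusing the hypothetical requester_type element from the earlier example:

insurance_provider:
  '#type': textfield
  '#title': 'Insurance provider'
  # Only show and require this question when patients book for themselves.
  '#states':
    visible:
      ':input[name="requester_type"]':
        value: self
    required:
      ':input[name="requester_type"]':
        value: self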

If a request an appointment form is a digital conversation, nothing about this experience should be frustrating. We want to ensure that a patient can book an appointment. With this goal in mind, every appointment form should include a phone number that a patient or caregiver can call to schedule an appointment.

Once the conversation has ended via the patient submitting the form, make sure to confirm their appointment request. Then, tell the patient what to expect next.

How do top US hospitals approach their online appointment request forms?

My next blog post is going to examine how top US hospitals (ranked by US News) are handling online appointment requests. Before reading my evaluations and opinions, you might want to visit the Mayo Clinic, Cleveland Clinic, Johns Hopkins Hospital, and Massachusetts General's appointment request forms and ask yourself: if I were a new patient…

  • How would I feel when filling out these request an appointment forms?

  • Are these appointment request forms providing a good digital experience?

  • What type of patient journey would one expect at this institution?


Aug 19 2019
Aug 19

In the previous two blog posts, we learned to migrate data from JSON and XML files. We also presented how to configure the migrations to fetch remote files. In today's blog post, we will learn how to add HTTP request headers and authentication to the request. For HTTP authentication, you need to choose among three options: Basic, Digest, and OAuth2. To provide this functionality, the Migrate API leverages the Guzzle HTTP Client library. Usage requirements and limitations will be presented. Let's begin.

Example config to add HTTP headers and authentication

Migrate Plus architecture for remote data fetching

The Migrate Plus module provides an extensible architecture for importing remote files. It makes use of different plugin types to fetch files, add HTTP authentication to the request, and parse the response. The following is an overview of the different plugins and how they work together to allow code and configuration reuse.

Source plugin

The url source plugin is at the core of the implementation. Its purpose is to retrieve data from a list of URLs. Ingrained in the system is the goal to separate the file fetching from the file parsing. The url plugin will delegate both tasks to other plugin types provided by Migrate Plus.

Data fetcher plugins

For file fetching, you have two options. The first is the file plugin, a general-purpose fetcher for getting files from the local file system or via stream wrappers. This plugin has been explained in detail in the posts about JSON and XML migrations. Because it supports stream wrappers, this plugin is very useful for fetching files from different locations and over different protocols. But it has two major downsides. First, it does not allow setting custom HTTP headers or authentication parameters. Second, this fetcher is completely ignored if used with the xml or soap data parser (see below).

The second fetcher plugin is http. Under the hood, it uses the Guzzle HTTP Client library. This plugin allows you to define a headers configuration. You can set it to a list of HTTP headers to send along with the request. It also allows you to use authentication plugins (see below). The downside is that you cannot use stream wrappers. Only protocols supported by curl can be used: http, https, ftp, ftps, sftp, etc.

Data parser plugins

Data parsers are responsible for processing the files considering their type: JSON, XML, or SOAP. These plugins let you select a subtree within the file hierarchy that contains the elements to be imported. Each record might contain more data than what you need for the migration. So, you make a second selection to manually indicate which elements will be made available to the migration. Migrate Plus provides four data parsers, but only two use the data fetcher plugins. Here is a summary:

  • json can use any of the data fetchers. It offers an extra configuration option called include_raw_data. When set to true, in addition to all the fields manually defined, a new one is attached to the source with the name raw. This contains a copy of the full object currently being processed (see the sketch after this list).
  • simple_xml can use any data fetcher. It uses the SimpleXML class.
  • xml does not use any of the data fetchers. It uses the XMLReader class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
  • soap does not use any data fetcher. It uses the SoapClient class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.

The difference between xml and simple_xml was presented in the previous article.
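
As a sketch of the include_raw_data option mentioned above, the following source configuration (with a hypothetical URL and selectors) attaches the full object being processed to each record as a field named raw:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  # Adds a 'raw' field containing a copy of the full object being processed.
  include_raw_data: true
  urls:
    - https://example.com/data.json
  item_selector: /data/items
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
  ids:
    src_unique_id:
      type: integer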

Authentication plugins

These plugins add authentication headers to the request. If the credentials are correct, you can fetch data from protected resources. They work exclusively with the http data fetcher. Therefore, you can use them only with the json and simple_xml data parsers. To do that, you set an authentication configuration whose plugin can be one of the following:

  • basic for HTTP Basic authentication.
  • digest for HTTP Digest authentication.
  • oauth2 for OAuth2 authentication over HTTP.

Below are examples for JSON and XML imports with HTTP headers and authentication configured. The code snippets do not contain real migrations. You can also find them in the ud_migrations_http_headers_authentication directory of the demo repository https://github.com/dinarcon/ud_migrations.

Important: The examples are shown for reference only. Do not store any sensitive data in plain text or commit it to the repository.

JSON and XML Drupal migrations with HTTP request headers and Basic authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the basic authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: basic
    username: totally
    password: insecure

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and Digest authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the digest authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: digest
    username: totally
    password: insecure

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and OAuth2 authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the oauth2 authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: oauth2
    grant_type: client_credentials
    base_uri: https://understanddrupal.com
    token_url: /oauth2/token
    client_id: some_client_id
    client_secret: totally_insecure_secret

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

What did you learn in today’s blog post? Did you know the configuration names for adding HTTP request headers and authentication to your JSON and XML requests? Did you know that this was limited to the parsers that make use of the http fetcher? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 19 2019
Aug 19

Making sure your website is accessible is becoming a necessity - and with all the right reasons. The web is for everyone and, as such, everyone should be able to use it effectively, no matter their physical ability. Sites that are inaccessible automatically prevent a large number of people from using them.

We’ve already written a series of blog posts on Drupal and accessibility - you can check them out here: part 1 & part 2. As you can probably glean from these two posts, Drupal offers a lot of accessibility features out-of-the-box, e.g. the requirement of alt text for images in Drupal 8 (another strong case, by the way, for migrating to Drupal 8 ASAP). 

The second part of the series also takes a look at a few contributed modules with which you can further improve the accessibility of a Drupal website. During the time since the blog post’s publication, however, there have been many more accessibility-focused modules contributed to the Drupal project - and these are what we’ll take a closer look at in this post. 

Accessibility toolkit (& Accessibility)

While only available for Drupal 7, the Accessibility toolkit (the a11y module) is an invaluable resource for Drupal developers who are tasked with building user-friendly and accessible sites. It allows for: dyslexic font support, high contrast mode, inverted colors mode and text scaling. 

On top of that, it also provides support for simulating specific disabilities. Since it’s quite difficult for an able-bodied person to put themselves in the shoes of a disabled person, these simulations greatly help developers to feel empathy by reproducing the symptoms of certain disabilities such as dyslexia or colorblindness. 

If you’re looking for a module with similar capabilities that can also be used in Drupal 8, the Accessibility module is the one closest to the a11y module - it’s geared more towards content editors and site maintainers, though. It provides a set of available accessibility tests that check the content published by your editors and other users for any accessibility errors, such as a missing alt text (granted, with Drupal 8 this is already automatic). 

So, for a Drupal 7 site, these two modules can be employed in tandem: one is used for ensuring accessibility in development, while the other is used in the live environment to make sure that the content and design meet accessibility standards. Just a disclaimer, though: the Accessibility module is not covered by Drupal’s security advisory policy, since it uses the QUAIL jQuery plugin which is no longer supported.

Accessibility Scanner

Accessibility Scanner is a relatively new module; the first development version was released in March, while the latest alpha version was released just about two months ago (June 20). With this module, you can use Drupal together with achecker to perform web accessibility scans directly in the Drupal admin interface. 

Style Switcher

The Style Switcher module provides incredibly useful functionality for visitors that suffer from color blindness. It allows themers to create themes with alternate stylesheets, and site builders to add other alternate stylesheets right in the admin section. 

A site visitor is then presented with all those styles as links in a block, and they can choose the one that they prefer, e.g. one with the optimal contrast for their specific type of color blindness.

The module is available for both Drupal 7 and 8, but the Drupal 8 version is still only in alpha.

Block ARIA Landmark Roles

This module was already mentioned in part 2 of our series on web accessibility in Drupal; it’s available for Drupal 7 and 8. It allows you to assign ARIA landmark roles and/or ARIA labels to a block, which makes it easier for screen readers and other assistive technologies to identify the type and purpose of a certain piece of content. This greatly simplifies site navigation for visitors using such technologies. 

Text Resize

While it’s quite easy to resize the text of a page using the keyboard (‘ctrl’ and either ‘+’ or ‘-’), not everyone browsing the web is aware of that. The Text Resize module, available for both Drupal 7 and Drupal 8, allows visitors to change the font size of a text through a special block. It also comes with a ‘reset’ option which has to be enabled from the admin page.

Automatic Alternative Text

With this Drupal 8 module, you can automatically generate an alt text for an image for which the user hasn’t provided any. This is done using the Microsoft Azure Cognitive Services API.

It provides one or more descriptions of an image which are ordered according to their confidence. The default descriptions are in English, but it is also possible to translate them into other languages. 

Providing an alternative text is crucial for blind or visually impaired visitors using screen readers, as it is pretty much the only means for them to take in the full content of a page. On top of that, images with the provided alt text are more SEO-friendly and thus help with your site's search engine ranking.

Even though Drupal 8 demands alt text by default for content creators, content submitted by users should also include it, and this module enables just that.

Fluidproject UI Options

The UI Options module by Fluid enables users to modify a page’s font size, line height, font style, contrast and link style according to their preferences. All changes made are retained thanks to cookies. 

The module does have some limitations, however. Bootstrap themes, for example, need some additional CSS for font-sizing and line heights to work as they should, and elements that use CSS gradients can’t have their contrast settings changed. 

htmLawed

This is a very useful module not just in the context of accessibility, but also security. It restricts and purifies HTML code so that it complies with the site administrator's policy, web standards and security best practices. 

Using this module, you’re able to autocorrect and beautify HTML markup as well as restrict HTML elements, attributes and URL protocols in the input. Moreover, it also balances tags and ensures that HTML elements are properly nested, transforms deprecated tags and attributes, etc. 

HTML Purifier

A very similar module to the just-mentioned htmLawed, HTML Purifier integrates the HTML Purifier filter library and is again perfect for meeting both security and accessibility requirements. It removes malicious code from your website while also ensuring W3C standards compliance. 

HTML Purifier is a great fit for Drupal as it works really well with WYSIWYG editors. With it, you get a lot of options, such as custom fonts, tables, inline styling, and many more. It’s available both for Drupal 7 and 8.

Conclusion

This was our list of modules for Drupal 7 and 8 that take care of different aspects of web accessibility. Depending on what accessibility measures you've already implemented and what your team's best practices are, you likely won't need to employ every single module on this list.

Still, we wanted to give an overview of different options so that you can pick and choose the one that best fits your needs. These modules provide accessibility resources for both developers and content editors, as well as visitors using the site, so you’re sure to find a combination that works for you.

If you're still experiencing accessibility issues or are in need of a complete accessibility overhaul, give us a shout out and let our experienced and proven developers help you make your site accessible to everyone.
 

Aug 19 2019
Aug 19

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year-long rise, starting with the first web content management systems in the early 1990s. Since then, no-code and low-code solutions have had an increasing impact on the web.

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the "Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019" report: "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Aug 19 2019
Aug 19

This is a short story on an interesting problem we were having with the Feeds module and Feeds directory fetcher module in Drupal 7.

Background on the use of Feeds

Feeds for this site is being used to ingest XML from a third party source (Reuters). The feed perhaps ingests a couple of hundred articles per day. There can be updates to the existing imported articles as well, but typically they are only updated the day the article is ingested.

Feeds had been working well for a number of years, and then all of a sudden the ingests started to fail. The failure occurred only on production, whilst in the lower environments the ingestion worked as expected.

The bizarre error

On production we were experiencing the error during import:

PDOStatement::execute(): MySQL server has gone away database.inc:2227 [warning]
PDOStatement::execute(): Error reading result set's header database.inc:2227 [warning]
PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away [error]

The error is not so much that the database server is not alive; rather, PHP's connection to the database has been severed due to exceeding MySQL's wait_timeout value.
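
You can verify the configured timeout directly in MySQL; the output below shows the stock MySQL default of 28800 seconds (hosting providers often set a much lower value):

mysql> SHOW VARIABLES LIKE 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout  | 28800 |
+---------------+-------+
Checking the current wait_timeout of the session.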

The reason this would occur only on production is that, on Acquia, this typically happens when you need to read from and write to the shared filesystem a lot. In the lower environments the filesystem is local disk (as those environments are not clustered), so access is a lot faster. On production, the public filesystem is a network file share (which is slower).

Going down the rabbit hole

Working out why Feeds wanted to read and/or write so many files from the filesystem was the next question, and immediately one thing stood out: the sheer size of the config column in the feeds_source table:

mysql> SELECT id,SUM(char_length(config))/1048576 AS size FROM feeds_source GROUP BY id;
+-------------------------------------+---------+
| id                                  | size    |
+-------------------------------------+---------+
| apworldcup_article                  |  0.0001 |
| blogs_photo_import                  |  0.0003 |
| csv_infographics                    |  0.0002 |
| photo_feed                          |  0.0002 |
| po_feeds_prestige_article           |  1.5412 |
| po_feeds_prestige_gallery           |  1.5410 |
| po_feeds_prestige_photo             |  0.2279 |
| po_feeds_reuters_article            | 21.5086 |
| po_feeds_reuters_composite          | 41.9530 |
| po_feeds_reuters_photo              | 52.6076 |
| example_line_feed_article           |  0.0002 |
| example_line_feed_associate_article |  0.0001 |
| example_line_feed_blogs             |  0.0003 |
| example_line_feed_gallery           |  0.0002 |
| example_line_feed_photo             |  0.0001 |
| example_line_feed_video             |  0.0002 |
| example_line_youtube_feed           |  0.0003 |
+-------------------------------------+---------+
What 52 MB of ASCII looks like in a single cell.

Having to deserialize 52 MB of ASCII in PHP is bad enough.

The next step was dumping the value of the config column for a single row:

drush --uri=www.example.com sqlq 'SELECT config FROM feeds_source WHERE id = "po_feeds_reuters_photo"' > /tmp/po_feeds_reuters_photo.txt
Get the 55 MB of ASCII in a file for analysis

Then open the resulting file in vim:

"/tmp/po_feeds_reuters_photo.txt" 1L, 55163105C
Vim struggles to open any file that has 55 million characters on a single line

And sure enough, inside this config column was a reference to every single XML file ever imported, a cool ~450,000 files.

a:2:{s:31:"feeds_fetcher_directory_fetcher";a:3:{s:6:"source";s:23:"private://reuters/pass1";s:5:"reset";i:0;
s:18:"feed_files_fetched";a:457065:{
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WR_RTROPTT_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:96:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_LYNXMPEE630KJ_RTROPTP_0_USA-TRADE-CHINA.XML";i:1530693632;
s:97:"private://reuters/pass1/topnews/2018-07-04T083617Z_147681_KBE99T04E_RTROPTT-LNK_0_OUSBSM-LINK.XML";i:1530693632;
s:102:"private://reuters/pass1/topnews/2018-07-04T083658Z_1_KBN1JU0X2_RTROPTT_0_JAPAN-RETAIL-ZOZOTOWN-INT.XML";i:1530693632
457,065 is the array size in feed_files_fetched

So this is the root cause of the problem: Drupal is attempting to stat() ~450,000 files that do not exist, and these files are mounted on a network file share. This process took longer than MySQL's wait_timeout and MySQL closed the connection. When Drupal finally wanted to talk to the database again, the connection was gone.

Interestingly enough, the problem of the config column running out of space came up in 2012, and "the solution" was just to change the type of the column. Now you can store 4GB of content in this one column. In hindsight, perhaps this was not the smartest solution.

Also in 2012, you see the comment from @valderama:

However, as feed_files_fetched saves all items which were already imported, it grows endless if you have a periodic import.

Great to see we are not the only people having this pain.

The solution

The simple solution to limp by is to increase the wait_timeout value of your Database connection. This gives Drupal more time to scan for the previously imported files prior to importing the new ones.

$databases['default']['default']['init_commands'] = [
  'wait_timeout' => "SET SESSION wait_timeout=1500",
];
Increasing MySQL's wait_timeout in Drupal's settings.php.

As you might guess, this is not a good long term solution for sites with a lot of imported content, or content that is continually being imported.

Instead we opted to write a fairly quick update hook that would loop through all of the items in the feed_files_fetched key and unset the older items.

<?php

/**
 * @file
 * Install file.
 */

/**
 * Finds the position of the first needle that occurs in the haystack.
 *
 * @see https://www.sitepoint.com/community/t/strpos-with-multiple-characters/2004/2
 *
 * @param string $haystack
 *   The string to search in.
 * @param array $needles
 *   The strings to search for.
 * @param int $offset
 *   The position to start searching from.
 *
 * @return bool|int
 *   The position of the first match, or FALSE if no needle is found.
 */
function multi_strpos($haystack, array $needles, $offset = 0) {
  foreach ($needles as $needle) {
    // Call strpos() once per needle instead of twice.
    $position = strpos($haystack, $needle, $offset);
    if ($position !== FALSE) {
      return $position;
    }
  }
  return FALSE;
}

/**
 * Trims old entries from the photo importer's list of fetched files.
 */
function example_reuters_update_7001() {
  $feedsSource = db_select("feeds_source", "fs")
    ->fields('fs', ['config'])
    ->condition('fs.id', 'po_feeds_reuters_photo')
    ->execute()
    ->fetchObject();

  $config = unserialize($feedsSource->config);

  // We only want to keep the last week's worth of imported articles in the
  // database for content updates.
  $cutoff_date = [];
  for ($i = 0; $i < 7; $i++) {
    $cutoff_date[] = date('Y-m-d', strtotime("-$i days"));
  }

  // watchdog() expects the log type first and the message second.
  watchdog('example_reuters', 'FeedSource records before trim: @count', [
    '@count' => count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched']),
  ]);

  // We attempt to match based on the filename of the imported file. This works
  // as the files have a date in their filename.
  // e.g. '2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML'
  foreach ($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'] as $key => $source) {
    if (multi_strpos($key, $cutoff_date) === FALSE) {
      unset($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'][$key]);
    }
  }

  watchdog('example_reuters', 'FeedSource records after trim: @count', [
    '@count' => count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched']),
  ]);

  // Save back to the database.
  db_update('feeds_source')
    ->fields([
      'config' => serialize($config),
    ])
    ->condition('id', 'po_feeds_reuters_photo', '=')
    ->execute();
}

Before the code ran, there were > 450,000 items in the array; after, we were below 100. So a massive decrease in database size.

More importantly, the importer now runs a lot quicker (as it is not scanning the shared filesystem for non-existent files).

Aug 18 2019
Aug 18

Learn about On-page SEO and which elements of your web page are important to increase your online visibility. Let's start!

On-page SEO is much more than title tags, meta descriptions and valuable content. Here is my actionable guide for digital marketers. I am an SEO Specialist and teamed up with one of my colleagues – a Content Marketing Specialist – for this article. Have fun reading it.

On-page SEO is about creating relevant signals to let search engines know what your page is about, which improves the website’s ranking in search results.

There are no IT skills needed to implement on-page recommendations as most CMS have an extension for it. For example, if you use WordPress, download the Yoast SEO plugin, or add the Metatag module to Drupal.

On-Page SEO: Hypothetical case study

How to create those relevant signals? Let’s take the example of a florist. StarFlo is located in Lausanne and Zurich, Switzerland. StarFlo has a website in three languages (French, German and English). The flower shop decided to create a specific product page for weddings, in English. A product page is designed to provide information to users about a product and/or a service.

Find relevant keywords with the right search intent

The first step is to define keywords with the highest potential. The goal is to select words that help to increase the ranking of the wedding product page.
Here are some examples of keywords (non-exhaustive list):

  • “wedding flowers lausanne”
  • “wedding flowers zurich”
  • “wedding table decorations”
  • “wedding bouquet”
  • “rose bouquet bridal”
  • “winter wedding flowers”
  • “wedding floral packages”
  • “orchid wedding bouquet”
  • “wedding flowers shop”

We will take the monthly volume of English keywords in Switzerland into consideration, because we are focusing on a flower shop located in Lausanne and Zurich whose product page is in English.

According to the image below, “wedding table decorations” and “wedding bouquet” have a higher volume (column Search) and a low difficulty score (column KD). Therefore, it could probably make sense to use those keywords. However, you need to investigate further.

If you check Google search results for the keyword “wedding table decorations”, you see a lot of images coming from Pinterest. People who are looking for “wedding table decorations” are looking for ideas and inspiration. As a result, “Wedding table decorations” might be a great blog post topic. As StarFlo wants to create a product page, we suggest using “wedding flowers shop” as a primary keyword, even if this keyword has a lower volume than “wedding table decorations”. The intent of the people searching “wedding flowers shop” is to buy wedding flowers. The intent of the new product page of StarFlo is to sell wedding flowers. Therefore the goal is to align both the intent of the target public and the intent of the product page with this keyword.

Once you have the keywords, optimize the content of the page.

On-page SEO structural elements

Title tags, H1, H2, and images are part of the on-page structural elements that communicate with search engines.

Title tag best practices: clear and easy to understand

The title tag is the page title and must contain the keyword in less than 60 characters (600 pixels). Ideally, the title tag is unambiguous and easy to understand. You define the title tag individually for each page.

For example:

Wedding flowers shop in Zurich & Lausanne | StarFlo

You do not need to end your title tag with your brand name. However, it helps to build awareness, even without raising the volume of clicks.

Meta description best practices: a short description with a call to action

The meta description describes the content of a page and appears in the search results. The purpose of the meta description is to help the user choose the right page among the results in Google Search. It must be clear, short and engaging. You have 160 characters at your disposal.

We recommend finishing your meta description with a clear call-to-action. Use a verb to describe what you want your target audience to do.

For example:

StarFlo is a flower shop located in Lausanne & Zurich which designs traditional & modern wedding flower arrangements. See our unique wedding creations.
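
Putting the two together, the title tag and meta description above would sit in the page’s <head> roughly like this (an illustrative, hand-written sketch):

<head>
  <title>Wedding flowers shop in Zurich &amp; Lausanne | StarFlo</title>
  <meta name="description" content="StarFlo is a flower shop located in Lausanne &amp; Zurich which designs traditional &amp; modern wedding flower arrangements. See our unique wedding creations.">
</head>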

SEO URL best practices

The URL is the address of your website. Its name describes both the content of the page and encompasses the page in the overall site map. The URL should contain the keyword and be short.
The structure of the URL is usually governed by rules in the CMS you are using.
Examples for StarFlo’s landing page about wedding flowers:
✔︎ https://starflo.ch/wedding-flowers
✘ https://starflo.ch/node/357

Use secondary keywords to reinforce the semantics of your page

StarFlo wants to be listed top for “wedding flowers shop” and “Lausanne”. You can help this page improve its ranking by also using secondary keywords. Secondary keywords are keywords that relate to your primary keyword.

Ask yourself: what questions are your target audience looking to answer by searching for these keywords? What valuable information can you provide to help them?
Your text content must offer added value for your target audience. To ensure this, create a list of topics. In the case of StarFlo, you can include secondary keywords such as “wedding bouquet” and “wedding table decorations”. It may seem odd that the primary keyword has a lower volume than the secondary keywords, but it makes sense in this context, because these secondary keywords reinforce the semantics of the page.

In the “wedding bouquet” section, you can give some examples of “Bridesmaid bouquets”, “Bridal bouquets” and “Maid of Honor bouquets”, as well as other services or products related to the proposed bouquets.

SEO H1 & H2 tags best practices: structure the text with several titles

A structured text with titles and subtitles is easier to read. Furthermore, titles support your organic referencing as they are considered strong signals by search engines. Start by defining your titles H1 and H2. Use only one H1. Your titles should be clear and descriptive. Avoid generic or thematic titles.

Here is an example:

  • H1: StarFlo, wedding flower shop specialized in nuptial floral design in Lausanne, Zurich & the surrounding area
  • H2: Outstanding wedding table decorations created by our wedding flower specialist in Lausanne & Zurich
  • H2: Wedding bouquet for the bride in Lausanne & Zurich
  • H2: Best seasonal flowers for your wedding

On-page content best practices: Write a text longer than 300 words

Keep in mind these three key points when you write your text:

  • Anything under 300 words is considered thin content.
  • Make sure that your primary keyword is part of the first 100 words in your text.
  • Structure your text with titles and subtitles to help your readers. Moreover, as said above, H1 & H2 are strong signals.

Images & videos best practices: Define file names, alt-texts and captions

Search engines don’t scan the content of a video or an image (yet). Search engines scan the content of file names, alt-texts and captions only.
Define a meaningful alt-text for each image and video, and include your keyword in both the alt-text and the file name. Google can then grasp what the image shows. Remember that you wish the website to load fast, so you may compress images.
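
For example, an image on the wedding page could be marked up along these lines (the file name and alt-text are illustrative):

<img src="winter-wedding-flowers-starflo.jpg" alt="Winter wedding flowers arranged by StarFlo in Lausanne">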

SEO Internal linking best practices: create a thematic universe within your website using internal links

When writing your text, try to create links to other pages on your website. You can add links in the text or in teasers to draw attention to more (or related) topics.

From a content point of view, when you link pages of your own website, you add value to your target audience as their attention is drawn to other pages of interest. Furthermore, the audience may stay longer on your website. Moreover, creating links gives the search engine a better understanding of the website and creates a thematic universe. Topics within such a universe will be preferred by search engines. Thematic universes help Google determine the importance of a page.

From an SEO point of view, internal linking is very important, because it implies a transfer of authority between pages. A website with high domain authority will appear higher in the search engine results. Usually, homepages have the highest authority. In the case of StarFlo, you could add a hyperlink that connects the homepage to the wedding page. We also recommend adding hyperlinks between pages. For instance, you are writing about winter wedding flowers on your wedding page, and you have a dedicated page about seasonal bouquets. You could add a hyperlink from the wedding page to the seasonal flower page.

The result: the homepage will transfer authority to the wedding page and the wedding page to the seasonal flower page. For each transfer of authority, there will be a slight damping factor. This means that if a page has an authority of 10 when it links to another page, the authority transferred will be, for example, 8.5.

Outbound links Best practices: add relevant content

Link your content to external sources, when it makes sense. For example, StarFlo provided the floral decorations for a wedding in the Lausanne Cathedral. You can add a link to the website of Lausanne’s Cathedral when mentioning the event.

Bonus: write SEO-optimized blog posts with strong keywords

After publishing your product page, create more entry points to your website. For example, you can write blog posts about your main subject using powerful keywords.

Answer the needs of your readers

When we did the keyword research for StarFlo, we identified a list of topics connected to the main topic. As a reminder, when we were looking at wedding flowers, we discovered that people were very interested in wedding table decorations. We also noticed that people looked for different kinds of bouquets (types of flowers, etc.). You could, for instance, create a page about winter wedding flowers and use these related keywords on it. This strategy helps to define blog post topics.

On the winter wedding flowers page, you could describe the local flowers available in the winter months, the flowers that go best together, etc.

In this case, each of your pages should focus on a different keyword. If two pages are optimized for the same keyword, they compete with each other.

Prioritize your writing according to your business

Once you have a list of topics, it’s good practice not to start writing all at once. We recommend creating an editorial plan. Be honest with yourself: how many hours per week can you dedicate to writing? How long do you need to write a 500-word article? How long do you need to find or create suitable images?

Start with the strongest keywords and the topic with the highest priority for your business.

Here is an example of prioritization:

  • “Wedding table decoration”
  • “Wedding bouquet”
  • “Winter wedding flowers”
  • “Winter wedding floral packages”

Suppose you start writing in September and the branding guidelines of your shop include ‘local’, ‘sustainable’ and ‘proximity’. You will, therefore, write about “Winter wedding flowers” first.

You decide to focus on:

  • “Winter wedding flowers”
  • “Winter wedding floral packages”

As a wrap-up, we prepared the checklist below for you.

Checklist

  • Main keyword is defined
  • Topic brings value to the target public
  • Meta Description and Title Page are written and contain the keyword
  • URL contains the keyword
  • H1 contains the keyword, at the beginning, if possible
  • Text contains a keyword density of 3%
  • Introduction and last paragraph have a particularly high keyword density
  • File names of photos and videos contain the keyword
  • Alt-Text of photos and videos contain the keyword
  • Photo captions contain the keyword
  • Page contains links to other pages on the site
  • Page contains links to valuable external resources

What’s next

On-page SEO is an important part of SEO. However, it’s not the only aspect. Technical SEO also has a tremendous impact. We are working on a hands-on blog post about technical SEO. Reach out to us if you wish to be notified when our guide is ready! Moreover, don’t miss our next SEO/content meet-up taking place on the 26th of September. We are going to explain how to perform keyword research. Contact our content expert if you want to be part of the meet-up.

If you want to have a personalized workshop about on-page SEO or just want to increase your ranking on Google, contact our SEO team for English, German and French.

Aug 18 2019
Aug 18

Building intelligent applications is not just about the technical challenge. How does AI become reality? We share here one of our tools for crafting intelligent, data-intensive software solutions.

While creating applications that heavily rely on data, I’ve gotten excited about many projects which soon ended up in the graveyard of good ideas.

Have you experienced this chicken-and-egg situation: no one wants to give you money to develop something until they know how good it will be, but you can’t promise anything before experimenting with the data first?

Smart Application Canvas

To get out of this impasse fast, Liip's Data team has developed a pragmatic "recipe" for data-intensive applications: the Smart Application Canvas.

It's simple and takes around 5 minutes: whenever we get started on a project idea, we ask the four questions below. And then we ask them again at every iteration to see if the answers have changed in the meantime.

It doesn't matter which of the four you start with - the trick is to have at least a rough answer to all four questions before investing more time and energy into an idea (for example, building a prototype, pitching a project proposal, etc). As long as one of these "ingredients" remains entirely blank, the project has a slim chance of succeeding.

>> Download this template as a PDF: template_data-application-idea_v1-0.pdf

Released under a "Creative Commons" license (Attribution / ShareAlike). This means that you're free to use and change it, provided you mention www.liip.ch and share any modified version under the same terms.

USE - Who are the users? What job does it do well for them?

Are you looking for a way to attract more new users, keep the current ones happy, or solve an annoying issue with an existing service? Do you want to make employees' life easier or to provide better public services for citizens?

Who to ask: Designers, product managers, line managers, frontline staff who are always in contact with users

Example: Customers can get immediate help 24/7 through a chatbot on our Facebook page.

Why: Avoid building something that nobody wants

BUSINESS CASE - Who might want to pay for this? How do they benefit in return?

Should the application help a person or an organization to save money, save time, or to generate more income? Should it help them to achieve something that simply isn't possible with the current alternatives? Even with intangible benefits like happiness or social status, it's worth thinking of how much the current alternatives cost: this will give you an idea of how much you can afford to invest in the development of the application.

Who to ask: Businesspeople, salespeople, executives who have a bird's eye view of the most pressing concerns in their organization

Example: Online retailer XYZ can win more return customers thanks to a better after-sales service.

Why: Get someone to invest or give you a budget. The business benefits of innovative technology aren’t straightforward at first. Do you remember the time when smartphones were new and everyone thought they needed a native app just “because it’s the future”? By now it’s much easier to say whether an app is worth the cost, but what about a chatbot?

TECHNOLOGY - What main hardware / software are necessary to get the job done?

What is possible with current technology and with how much effort? Or how can a new technology be applied in a "real-world" application? The key part of the question here is "to get the job done". When innovative technologies come along, it's tempting to try to solve too many problems at once: if there are several possible use cases, pick one. You can always change it later if you see that it doesn't work.

Who to ask: Software engineers, data scientists, hardware engineers if applicable

Example: Facebook page, chatbot framework, natural language processing (NLP), search

Why: Avoid a very costly, never-ending implementation

DATA - What numbers, text, images are necessary for it to work?

Does your organization have a lot of great data but doesn't know what to do with it? What kind of data should you start collecting? Which external data sources can you use (e.g. open data)? Where is the data saved and in which format (e.g. in a database, in PDF files, in emails, ...)?

Who to ask: People who work with the data on a daily basis (e.g. Customer Care team), database specialist, data scientist

Example: Collection of past customer questions and matching support staff answers (e.g. in emails or in an issue tracking system)

Why: What makes software "smart" is data, rather than “only” predefined rules. There might be no / not enough data, its quality might not be sufficient, or it might be difficult to access because of security or technical issues. This makes the outcome of development unpredictable. Whether our data is good enough, we can only learn by trial-and-error.

Examples

Here are a few examples of early-stage ideas by participants of the Open Data Forum 2019. The goal was to come up with applications that make use of open data.

Some canvases already carry notes on the biggest risks the idea faces, so that this can be worked on as a next step.

Acknowledgements

The following were sources of inspiration:

  • Lean Canvas by Ash Maurya
  • Business Model Canvas by Strategyzer AG
  • Machine Learning Canvas by Louis Dorard
  • Migros's "Bananita" roulade cake enjoyed on a particularly creative coffee break
  • Our clients and colleagues in the past year with whom we've tested and refined our way of working

We're constantly working on improving our methods and would love to hear about your experience with the Smart Application Canvas. Drop us a line at [email protected].

Aug 18 2019
Aug 18

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD JSON source migration whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ],
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ],
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, what fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ]
  }
}

The array of records containing node data lies two levels deep in the hierarchy: starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contained spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that would uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{
  "data": {
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ]
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy to find the value that should be assigned to the field. Every level in the hierarchy would be separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the JSON file is to have two properties. paragraph_id would contain the unique identifier for the record. paragraph_data would be an object with a property to set the paragraph type. This would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields. A hypothetical file structure following this idea is sketched below.
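
Such a file could look like the following (all names here are illustrative and not part of the demo module):

{
  "data": {
    "udm_mixed_paragraphs": [
      {
        "paragraph_id": "B10",
        "paragraph_data": {
          "type": "book_paragraph",
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ]
  }
}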

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{
  "data": {
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location to the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON file location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.

Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 17 2019
Aug 17

Today we will learn how to migrate content from a Comma-Separated Values (CSV) file into Drupal. We are going to use the latest version of the Migrate Source CSV module which depends on the third-party library league/csv. We will show how to configure the source plugin to read files with or without a header row. We will also talk about a new feature that allows you to use stream wrappers to set the file location. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD CSV source migration whose machine name is ud_migrations_csv_source. It comes with three migrations: udm_csv_source_paragraph, udm_csv_source_image, and udm_csv_source_node.

You can get the Migrate Source CSV module using composer: composer require drupal/migrate_source_csv. This will also download its dependency: the league/csv library. The example assumes you are using the 8.x-3.x branch of the module, which requires composer to be installed. If your Drupal site is not composer-based, you can use the 8.x-2.x branch. Continue reading to learn the difference between the two branches.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain CSV migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from CSV files.

Note that you can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating CSV files with a header row

In any migration project, understanding the source is very important. For CSV migrations, the primary thing to consider is whether or not the file contains a row of headers. Other things to consider are what characters to use as delimiter, enclosure, and escape character; a configuration sketch for those options follows the first example below. For now, let’s consider the following CSV file whose first row serves as column headers:

unique_id,name,photo_file,book_ref
1,Michele Metts,P01,B10
2,Benjamin Melançon,P02,B20
3,Stefan Freudenberg,P03,B30

This file will be used in the node migration. The four columns are used as follows:

  • unique_id is the unique identifier for each record in this CSV file.
  • name is the name of a person. This will be used as the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration of the CSV source plugin for the node migration:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]

The name of the plugin is csv. Then you define the path pointing to the file itself. In this case, the path is relative to the Drupal root. Finally, you specify an ids array of column names that would uniquely identify each record. As already stated, the unique_id column serves that purpose. Note that there is no need to specify all the column names from the CSV file. The plugin will automatically make them available. That is the simplest configuration of the CSV source plugin.
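
If your file deviates from the defaults, the plugin also lets you set the delimiter, enclosure, and escape characters mentioned earlier. The following sketch spells out the usual defaults explicitly; check the module's documentation for the exact option names in your branch:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]
  # Optional settings shown with their default values.
  delimiter: ','
  enclosure: '"'
  escape: '\'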

The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_csv_source_image
    source: photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_csv_source_image
    - udm_csv_source_paragraph
  optional: []

Note that the source for setting the image reference is photo_file. In the process pipeline, you can directly use any column name that exists in the CSV file. The configuration of the migration_lookup plugin and the migration dependencies point to two CSV migrations that come with this example: one for migrating images and the other for migrating paragraphs.

Migrating CSV files without a header row

Now let’s consider two examples of CSV files that do not have a header row. The following snippets show the example CSV file and source plugin configuration for the paragraph migration:

B10,The definite guide to Drupal 7,Benjamin Melançon et al.
B20,Understanding Drupal Views,Carlos Dinarte
B30,Understanding Drupal Migrations,Mauricio Dinarte

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'

When you do not have a header row, you need to specify two more configuration options. header_offset has to be set to null. fields has to be set to an array where each element represents a column in the CSV file. You include a name for each column following the order in which they appear in the file. The name itself can be arbitrary. If it contains spaces, you need to put quotes (') around it. After that, you set the ids configuration to one or more columns using the names you defined.

In the process section you refer to source columns as usual, writing their names and adding quotes if they contain spaces. The following snippet shows how the process section is configured for the paragraph migration:

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'

The final example will show a slight variation of the previous configuration. The following two snippets show the example CSV file and source plugin configuration for the image migration:

P01,https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02,https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03,https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'

For each column defined in the fields configuration, you can optionally set a label. This is a description used when presenting details about the migration, for example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to source columns; you keep using the column name. You can see this in the value of the ids configuration.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: photo_url

CSV file location

When setting the path configuration you have three options to indicate the CSV file location:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/csv_files/example.csv.
  • Use an absolute path pointing to the CSV location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/csv_files/example.csv.
  • Use a stream wrapper. This feature was introduced in the 8.x-3.x branch of the module. Previous versions cannot make use of them.

Being able to use stream wrappers gives you many options for setting the location of the CSV file (a combined snippet follows this list). For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://csv_files/example.csv.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/csv_files/example.csv.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/csv-files/example.csv.
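
Putting it all together, the following sketch shows how each location style from the list above would look as a path value; only one can be active at a time, and the stream wrapper variants require the 8.x-3.x branch:

source:
  plugin: csv
  ids: [unique_id]
  # Relative to the Drupal root.
  path: modules/custom/my_module/csv_files/example.csv
  # Absolute path in the file system.
  # path: /var/www/drupal/modules/custom/my_module/csv_files/example.csv
  # Public file system managed by Drupal core.
  # path: public://csv_files/example.csv
  # Module-relative, via the System stream wrapper module or the core patch.
  # path: module://my_module/csv_files/example.csv
  # Remote file, via the Remote stream wrapper module.
  # path: https://understanddrupal.com/csv-files/example.csv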

CSV source plugin configuration

The configuration options for the CSV source plugin are very well documented in the source code. They are included here for quick reference:

  • path is required. It contains the path to the CSV file. Starting with the 8.x-3.x branch, stream wrappers are supported.
  • ids is required. It contains an array of column names that uniquely identify each record.
  • header_offset is optional. It is the index of the record to be used as the CSV header, which thereby defines each record's field names. It defaults to zero (0) because the index is zero-based. For CSV files with no header row, the value should be set to null.
  • fields is optional. It contains a nested array of names and labels to use instead of a header row. If set, it will overwrite the column names obtained from header_offset.
  • delimiter is optional. It contains the one-character column delimiter. It defaults to a comma (,). For example, if your file uses tabs as delimiter, you set this configuration to \t (see the snippet after this list).
  • enclosure is optional. It contains the one character used to enclose the column values. It defaults to double quotation marks (").
  • escape is optional. It contains the one character used for character escaping in the column values. It defaults to a backslash (\).
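
As an illustration of these three options, a hypothetical tab-separated file that encloses values in single quotes could be read with a sketch like the following; the file name and columns are made up for the example:

source:
  plugin: csv
  path: modules/custom/my_module/csv_files/example.tsv
  ids: [unique_id]
  # A double-quoted "\t" in YAML produces a literal tab character.
  delimiter: "\t"
  enclosure: "'"
  escape: "\\"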

Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference on how to configure the plugin for the 8.x-2.x branch.

And that is how you can use CSV files as the source of your migrations. Because this is such a common need, there was an effort to move the CSV source plugin into Drupal core. That effort is currently on hold, and it is unclear whether it will materialize during Drupal 8’s lifecycle. The maintainers of the Migrate API are focusing on other priorities at the moment. You can read this issue to learn about the motivation and context for offering this functionality in Drupal core.

Note: The Migrate Spreadsheet module can also be used to migrate data from CSV files. It also supports Microsoft Office Excel and LibreOffice Calc (OpenDocument) files. The module leverages the PhpOffice/PhpSpreadsheet library.

What did you learn in today’s blog post? Have you migrated from CSV files before? Did you know that it is now possible to read files using stream wrappers? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 16 2019
Aug 16

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.


Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules

  1. It has to actually work for D8 (so modern PHP version, working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).

Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Aug 16 2019
Aug 16


In this video tutorial, I show you how to get up and running with Xdebug and PHPStorm using Docksal. Xdebug is a great way to debug Drupal sites. Using this tool, you can examine render arrays, evaluate expressions, determine variable paths, and more. By using Docksal with Xdebug, it takes some of the complexity out of setting this up natively on your machine.

Before I learned how to use Xdebug, I was using Kint to debug in Drupal 8. However, from my experience, Kint is really slow performance-wise, and depending on what you are trying to debug, it can take a while for a page to load. By contrast, Xdebug is instantaneous and has sped up my workflow considerably.
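
For reference, Docksal controls Xdebug through the XDEBUG_ENABLED environment variable. The following commands are a sketch of the documented approach; double-check the Docksal documentation for your version:

# Enable Xdebug in the CLI container, then restart the project stack.
$ fin config set XDEBUG_ENABLED=1
$ fin project restart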


Aug 15 2019
Aug 15

Today we will present an introduction to paragraphs migrations in Drupal. The example consists of migrating paragraphs of one type, then connecting the migrated paragraphs to nodes. A separate image migration is included to demonstrate how they are different. At the end, we will talk about behavior that deletes paragraphs when the host entity is deleted. Let’s get started.

Example mapping for paragraph reference field.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD paragraphs migration introduction, whose machine name is ud_migrations_paragraph_intro. It comes with three migrations: ud_migrations_paragraph_intro_paragraph, ud_migrations_paragraph_intro_image, and ud_migrations_paragraph_intro_node. One content type, one paragraph type, and four fields will be created when the module is installed.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type, the paragraph type, and the fields are automatically created and deleted.

You can get the Paragraphs module using Composer: composer require drupal/paragraphs. This will also download its dependency: the Entity Reference Revisions module. If your Drupal site is not Composer-based, you can get the code for both modules manually.

Understanding the example set up

The example code creates one paragraph type named UD book paragraph (ud_book_paragraph). It has two “Text (plain)” fields: Title (field_ud_book_paragraph_title) and Author (field_ud_book_paragraph_author). A new UD Paragraphs (ud_paragraphs) content type is also created. This has two fields: Image (field_ud_image) and Favorite book (field_ud_favorite_book) containing references to images and book paragraphs imported in separate migrations. The words in parentheses represent the machine names of the different elements.

The paragraph migration

Migrating into a paragraph type is very similar to migrating into a content type. You specify the source, process the fields making any required transformation, and set the destination entity and bundle. The following code snippet shows the source, process, and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - book_id: 'B10'
      book_title: 'The definite guide to Drupal 7'
      book_author: 'Benjamin Melançon et al.'
    - book_id: 'B20'
      book_title: 'Understanding Drupal Views'
      book_author: 'Carlos Dinarte'
    - book_id: 'B30'
      book_title: 'Understanding Drupal Migrations'
      book_author: 'Mauricio Dinarte'
  ids:
    book_id:
      type: string
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

The most important part of a paragraph migration is setting the destination plugin to entity_reference_revisions:paragraph. This plugin is actually provided by the Entity Reference Revisions module. It is very important to note that paragraph entities are revisioned. This means that when you want to create a reference to them, you need to provide two IDs: target_id and target_revision_id. Regular entity reference fields like files, images, and taxonomy terms only require the target_id. This will be further explained with the node migration.

The other configuration that you can optionally set in the destination section is default_bundle. The value will be the machine name of the paragraph type you are migrating into. You can do this when all the paragraphs for a particular migration definition file will be of the same type. If that is not the case, you can leave out the default_bundle configuration and add a mapping for the type entity property in the process section.
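
For instance, if one definition file migrated paragraphs of several types, a sketch of that alternative could look like this, assuming a hypothetical source column named paragraph_bundle that holds the machine name of each paragraph type:

process:
  # Hypothetical column containing values like 'ud_book_paragraph'.
  type: paragraph_bundle
destination:
  plugin: 'entity_reference_revisions:paragraph'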

You can execute the paragraph migration with this command: drush migrate:import ud_migrations_paragraph_intro_paragraph. After running the migration, there is not much you can do to verify that it worked. Contrary to other entities, there is no user interface available out of the box that lists all paragraphs in the system. One way to verify if the migration worked is to manually create a View that shows paragraphs. Another way is to query the database directly. You can inspect the tables that store the paragraph fields’ data. In this example, the tables would be:

  • paragraph__field_ud_book_paragraph_author for the current author.
  • paragraph__field_ud_book_paragraph_title for the current title.
  • paragraph_r__8c3a9563ac for all the author revisions.
  • paragraph_r__3fa7e9863a for all the title revisions.

Each of those tables contains information about the bundle (paragraph type), the entity id, the revision id, and the migrated field value. Table names are derived from the machine names of the fields. If they are too long, the field name will be hashed to produce a shorter table name. Having to query the database is not ideal. Unfortunately, the options available to check if a paragraph migration worked are limited at the moment.
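
For example, assuming Drupal's default field table layout (columns for bundle, entity_id, revision_id, langcode, delta, and a value column named after the field), the migrated titles could be checked with a query like this:

$ drush sql:query "SELECT entity_id, revision_id, field_ud_book_paragraph_title_value FROM paragraph__field_ud_book_paragraph_title"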

The node migration

The node migration will serve as the host for both referenced entities: images and paragraphs. The image migration is very similar to the one explained in a previous article. This time, the focus will be the paragraph migration. Both of them are set as dependencies of the node migration, so they need to be executed in advance. The following snippet shows how the source, destinations, and dependencies are set:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
      book_ref: 'B10'
    - unique_id: 2
      name: 'Benjamin Melançon'
      photo_file: 'P02'
      book_ref: 'B20'
    - unique_id: 3
      name: 'Stefan Freudenberg'
      photo_file: 'P03'
      book_ref: 'B30'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - ud_migrations_paragraph_intro_image
    - ud_migrations_paragraph_intro_paragraph
  optional: []

Note that photo_file and book_ref both contain the unique identifier of records in the image and paragraph migrations, respectively. These can be used with the migration_lookup plugin to map the reference fields in the nodes to be migrated. ud_paragraphs is the machine name of the target content type.

The mapping of the image reference field follows the same pattern as the one explained in the article on migration dependencies. Using the migration_lookup plugin, you indicate which migration should be searched for the images. You also specify which source column contains the unique identifiers that match those in the image migration. This operation will return a single value: the file ID (fid) of the image. This value can be assigned to the target_id subfield of field_ud_image to establish the relationship. The following code snippet shows how to do it:

field_ud_image/target_id:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_image
  source: photo_file

Paragraph field mappings

Before diving into the paragraph field mapping, let’s think about what needs to be done. Paragraphs are revisioned entities. To make a reference to them, you need two IDs: their entity id and their entity revision id. These two values need to be assigned to two subfields of the paragraph reference field: target_id and target_revision_id respectively. You have to come up with a process pipeline that complies with this requirement. There are many ways to do it, and the specifics will depend on your field configuration. In this example, the paragraph reference field allows an unlimited number of paragraphs to be associated, but only of one type: ud_book_paragraph. Another thing to note is that even though the field allows you to add as many paragraphs as you want, the example migrates exactly one paragraph.

With those considerations in mind, the mapping of the paragraph field will be a two-step process. First, use the migration_lookup plugin to get a reference to the paragraph. Second, use the fetched values to set the paragraph reference subfields. The following code snippet shows how to do it:

pseudo_mbe_book_paragraph:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_paragraph
  source: book_ref
field_ud_favorite_book:
  plugin: sub_process
  source:
    - '@pseudo_mbe_book_paragraph'
  process:
    target_id: '0'
    target_revision_id: '1'

The first step is a normal migration_lookup procedure. The important difference is that instead of getting a single value, like with images, the paragraph lookup operation will return an array of two values. The format is like [3, 7] where the 3 represents the entity id and the 7 represents the entity revision id of the paragraph. Note that the array keys are not named. To access those values, you would use the index of the elements starting with zero (0). This will be important later. The returned array is stored in the pseudo_mbe_book_paragraph pseudofield.

The second step is to set the target_id and target_revision_id subfields. In this example, field_ud_favorite_book is the machine name of the paragraph reference field. Remember that it is configured to accept an arbitrary number of paragraphs, and each will require passing an array of two elements. This means you need to process an array of arrays. To do that, you use the sub_process plugin to iterate over an array of paragraph references. In this example, the structure to iterate over would be like this:

[
  [3, 7]
]

Let’s dissect how to do the mapping of the paragraph reference field. The source configuration of the sub_process plugin contains an array of paragraph references. In the example, that array has a single element: the '@pseudo_mbe_book_paragraph' pseudofield. The quotes (') and at sign (@) are required to reuse an element that appears earlier in the process pipeline. Then, in the process configuration, you set the subfields for the paragraph reference field. It is worth noting that at this point you are iterating over a list of paragraph references, even if that list contains only one element. If you had more than one paragraph to migrate, whatever you defined in process would apply to all of them.

The process configuration is an array of subfield mappings. The left side of the assignment is the name of the subfield you want to set. The right side of the assignment is an array index of the paragraph reference being processed. Remember that this array does not have named keys, so you use their numerical index to refer to them. The example sets the target_id subfield to the element in the 0 index and the target_revision_id subfield to the element in the 1 index. Using the example data, this would be target_id: 3 and target_revision_id: 7. The quotes around the numerical indexes are important. If not used, the migration will not find the indexes and the paragraphs will not be associated. The end result of this operation will be something like this:

'field_ud_favorite_book' => array (1) [
  array (2) [
    'target_id' => string (1) "3"
    'target_revision_id' => string (1) "7"
  ]
]

There are three ways to run the migrations: manually, executing dependencies, and using tags. The following code snippet shows the three options:

# 1) Manually.
$ drush migrate:import ud_migrations_paragraph_intro_image
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

# 2) Executing dependencies.
$ drush migrate:import ud_migrations_paragraph_intro_node --execute-dependencies

# 3) Using tags.
$ drush migrate:import --tag='UD Paragraphs Intro'

And that is one way to map paragraph reference fields. In the end, all you have to do is set the target_id and target_revision_id subfields. The process pipeline that gets you to that point can vary depending on how your paragraphs are configured. The following is a non-exhaustive list of things to consider when migrating paragraphs:

  • How many paragraph types can be referenced?
  • How many paragraph instances are being migrated? Is this a multivalue field?
  • Do paragraphs have translations?
  • Do paragraphs have revisions?

Do migrated paragraphs disappear upon node rollback?

Paragraphs migrations are affected by a particular behavior of revisioned entities. If the host entity is deleted and the paragraphs do not have translations, the paragraphs are deleted as well. That means that deleting a node will cause the referenced paragraphs’ data to be removed. How does this affect your migration workflow? If the migration of the host entity is rolled back, the paragraphs will be removed, but the Migrate API will not know about it. In this example, if you run a migrate status command after rolling back the node migration, you will see that the paragraph migration indicates that there are no pending elements to process. The file migration for the images will report the same, but in that case, the images will remain on the system.

In any migration project, it is common to do rollback operations to test new field mappings or fix errors. Thus, chances are very high that you will stumble upon this behavior. Thanks to Damien McKenna for helping me understand this behavior and tracking it to the rollback() method of the EntityReferenceRevisions destination plugin. So, what do you do to recover the deleted paragraphs? You have to roll back both migrations: node and paragraph. And then, you have to import the two again. The following snippet shows how to do it:

# 1) Rollback both migrations.
$ drush migrate:rollback ud_migrations_paragraph_intro_node
$ drush migrate:rollback ud_migrations_paragraph_intro_paragraph

# 2) Import both migrations again.

$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

What did you learn in today’s blog post? Have you migrated paragraphs before? If so, what challenges have you found? Did you know paragraph reference fields require two subfields to be set? Did you know that deleting the host entity also deletes referenced paragraphs? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 15 2019
Aug 15

Drupal Camp Colorado was located a mile high at the Aurora King Center, but the topics reached deep beyond the development of technology. Sessions and hallway-track conversations explored the human side of our lovely technology-centric project and I enjoyed the extra depth of those talks.

Connecting With People

Normally camps and cons are crazy for me - packed with marketing, sales, meetings, organizing stuff, herding cats, and volunteering. Experiencing Colorado Camp primarily as an attendee allowed me the space to revisit why I was originally drawn to the Drupal community. 

I met old friends and got to meet people in person that I only knew by online reputation. In a good way, of course! Shout out to the Lullabot podcasters Chris Albrecht and Matt Kleve for providing such excellent content over the years! It was glorious meeting folks from the Denver area - the whole vibe of the camp and the Colorado Drupalers is genuinely happy and inviting. The organizing team did a great job and put on a high-quality and information-packed event. If you have a chance, check it out next year!

A few attendees at the camp were brand new to Drupal and Drupal Camp Colorado was their first event! Sharing the newcomers’ experience sparked memories from other long-term community members about their first event.

Do you remember your first Drupal event?

I hope that when you read this question, memories of your first Drupal event may surface and you take a moment to revisit the feelings, location, people, and experiences.  

A few of us rekindled our memories of our first Drupal event. Matt Kleve from Lullabot pulled out a picture from 2007 of a bunch of the Colorado Camp folks hacking away at a table at a restaurant - some of whom are still active in the Drupal community. Wow! I got to see the beginnings of the Denver Drupal community. Thank you for sharing, Matt!

I remember going to a small San Francisco Drupal Users Group (SFDug) in John Faber’s co-working space. It was 2008? 2007? The date is blurry. Kiernan Lal from Acquia and John Faber had a couple of six packs of beer and snacks for the ten or so people who attended. At that time I was an “active lurker” trying to wrap my head around the Drupal Community. Coming from the Big Enterprise and Proprietary Software Giants environment, I was in disbelief that people could be so open and sharing. What was wrong with the Drupal community? Why are they so genuinely nice? After attending multiple events with the same inclusive group of people, the answer was clear to me: the community is full of giving individuals and genuine good will...with fun technology!

Why do we still have Drupal camps?

Conversations then explored why we still come to Drupal events - user groups, regional camps, or the big conventions. Everyone’s personal stories are different and each provides color to why volunteers and organizations continue to create a thriving social community.  

At the surface, we come to learn, teach, and collaborate on the Drupal software platform. We also explore Drupal-adjacent technologies and the human aspects of being within a community and in a technology profession.

Our events are also a place of human connection. They provide a venue for a reunion of old friends, an opportunity to see remote coworkers in person, a time to collaborate with diverse people, and sometimes those star-struck moments when you realize you are standing next to Drupal core contributors in the coffee line or you are meeting the person who wrote the module you love and use so often.

Meeting kind, giving, silly, smart, passionate people is why I personally love the Drupal ecosystem and one of the main reasons Drupal was the platform of choice for my professional career path.

Technology Development

Beyond the connections, I was able to sit in on sessions and listen to other experts share their stories of struggle and success. I enjoyed being a listener again, taking in all the knowledge that my community has to offer.

Feeds UI in Drupal 8 

The extended Feeds module ecosystem from Drupal 7 provided a user interface for content editors to import data without having to program migrations. The non-developer import features have been a big gap in the Drupal 8 system. 

Perhaps the true business value of a non-developer-based import system isn’t fully realized by the community or supporting organizations yet. Having a non-developer way to import content will be a huge help to those transitioning from the end-of-life Drupal 7 to a Drupal 8 platform - especially for smaller and midsize implementations. The session “Feeds UI + Migrate Engine = Dream Migrations and Imports” by Irina Zacks and Martin Keerman from Tableau talks about just that.

I have to admit, the Feeds UI / Migrate UI movement would not exist without the pure passion and boundless energy of Irina. She has been championing this effort for years and is finally gaining traction and support, but the effort still needs help!

Layout Builder and Site Building in Drupal

Kris Vanderwater (EclipseGc) gave us an update on the progress of Layout Builder in “Site Building 2.0: How Layout Builder is Changing Everything”. Layout Builder in core is a collaboration of the layout-centric module maintainers of times past (e.g., Panels, Display Suite) to provide a unified way to deliver the features of the previous layout systems. It is still a work in progress but is really ready for use by the greater community. My biggest takeaway from this session is “Layout Everything!” - content types, nodes, blocks, and entities. You can even lay out a single taxonomy term page!

Human Development

Challenges Turn into Strengths

Matthew Saunders shared his personal stories of life’s challenges that were turned into opportunity. Matthew’s path is unique but his experiences represent situations and challenges we may have had in the past or come up against in the future. “Pivot-Points - Recognizing Opportunity, Turning Challenges to Strengths” was incredibly touching and personal. Thank you, Matthew, for sharing your story with the community. The session provided an important moment to reflect upon the fact that everyone may be experiencing challenges that other people don’t see.

Exploring & Designing Your Career Path

A job is something one does for money; a career is a set of connected opportunities that fuels one's future. Humans on a career path experience moments when they feel like their direction needs to alter. Money, work-life happiness, interests, and work environments are all catalysts and reasons to revisit and map your career path. My session, “What happens next? Explore and Design Your Professional Path”, is relevant for all levels and roles within the workforce. I’ve personally used the industry-standard topics, tools, and methods presented in the session, even as recently as June when Hook 42 had its seven-year birthday. As an organization we continuously incorporate the tools and techniques to mentor our employees’ growth paths.

Dad and Daughter Cover Drupal.org Profiles

“Extreme Makeover: Drupal.org Profile Edition” was presented by the father-daughter team of Greg Marshall and Amanda Marshall. The session provides reasons why you should represent your online resume and contribution profile on Drupal.org. A how-to of the profile edit pages was demonstrated, including the new changes from the diversity and inclusion team.

I could instantly see the impact this session has on our community. While there, a new community member made his Drupal.org profile with the great instruction provided by Greg and Amanda. This session would be a great addition to a new Drupaler’s orientation to Drupal.org.

There was a loving and supportive banter between the two, punctuated by the occasional Dad Joke, which I am no stranger to. The combination provided good will, good humor, and caring fatherly advice that now extends beyond the Marshall family to the Drupal community.

Great Experiences

I'm thankful to have been able to attend this camp as an attendee and really touch base with my inner Drupal community member again. Connecting with those I have known for ages, and learning how new people are being introduced to the community, provided a closeness to the community that I was losing sight of. Not in a way where I wasn't part of the community, but in a way where being in the weeds with everyone was a refreshing point of view to regain. It is important to take a step back and remember our roots as we grow within a community that has so many moving pieces. Thank you, again, Drupal Camp Colorado for a wonderful experience.

Aimee Hannaford holding a quilt of old Drupal Camp Colorado t-shirts

Image Credit: Chris Albrecht Twitter

Aug 15 2019
Aug 15

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop is the same; community members only have to attend one monthly workshop of their choice to complete the training.  We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. From the official description of the workshop, it will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those who need assistance. The standard cost of the workshop is $350; Otter Tech has worked with us to allow us to provide it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 21 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those that successfully complete the training will be (at their discretion) listed on Drupal.org (in the Drupal Community Workgroup section) as a means to prove that they have completed the training. We feel that moving forward, the Drupal community now has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again, leading the way in the open source community.

We are fully aware that presenting the workshops in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.
 

Aug 15 2019
Aug 15

Drupal is moving into the future and adopting more and more innovative trends. No wonder high tech engineering leaders trust Drupal and build their sites with it.

Drupal in high-tech: innovative companies + innovative CMS

They have found each other! Thinking about Drupal’s innovative spirit, we want to mention plenty of its capabilities, so here are at least a few:

Great examples of high tech company websites built on Drupal

So let’s learn more about Drupal for high tech company websites by looking at the following examples.

Tesla

Electric cars, solar panels, and renewable energy are the three pillars of expertise of the incredible innovator — Tesla. The high tech giant has also chosen the right CMS — Tesla’s site is built with Drupal.

Tesla website built with Drupal

Amazing web design with background videos and zooming effects allows customers to see the products almost as in real life and get inspired. Users can select among 30+ regions to see the website version in their native language.

Tesla website built with Drupal

While choosing the products, you can specify all parameters and see the changed picture without a page reload. There is also an online payment feature.

Tesla website built with Drupal

General Electric

Discussing Drupal for high tech industry leaders, we are glad to mention General Electric Company (GE). Its innovation builds, powers, moves, and cures the world. And Drupal powers their website, where we can learn this and more about their activities.

The site’s users can select among such GE businesses as aviation, power, renewable energy, healthcare, lighting, and so on. The large and handy search bar on the main page also quickly takes them where they wish to go. The stylish design of many sections features background videos.

GE website built with Drupal

As 130 countries are currently home to GE operations, users can select and visit their specific site version.

GE website built with Drupal

Iteris

Iteris Inc. produces innovative sensors and other solutions that predict the state of traffic, weather, soil, etc., to boost the agriculture and transportation industries. While Iteris products win high tech innovation awards, Drupal has won their trust as a CMS.

Iteris website built with Drupal

They chose Drupal 8 to revamp their website as part of the rebranding campaign. The site is great from frontend to backend — from the stylish design with the main page’s slider to complex permissions for particular user types.

Iteris website built with Drupal

Pfizer

The Pfizer multinational pharmaceutical corporation uses technology and innovative science for advanced patient care.

Pfizer website built with Drupal

Their website has 50+ country/region and language options. The content is presented in five key categories: “Your Health,” “Our Science,” “Our People,” “Our Purpose,” and “Our Products.”

Pfizer website built with Drupal

There is a strong search feature for Pfizer’s products, the option to find clinical trial results, and much more.

What about your future website?

Hopefully, these examples of high tech company websites built on Drupal have inspired you. They are just a few of the million+ Drupal sites worldwide in various industries — e-commerce, education, business & finance, and so on. In addition to being innovative and powerful, the CMS is very versatile and flexible.

So contact our development team to discuss how Drupal can be helpful in your case or with your website idea!
