Aug 12 2019
Aug 12

Sean is a strong believer in the open source community at large and in working collaboratively to create awesome projects. His community work extends into maintaining and building the BADCamp website, as well as helping to maintain Docksal, a tool used for managing development environments.

Aug 11 2019
Aug 11

Today we complete the user migration example. In the previous post, we covered how to migrate email, timezone, username, password, and status. This time, we cover creation date, roles, and profile pictures. The source, destination, and dependencies configurations were explained already. Therefore, we are jumping straight to the process transformations in this entry.

Example field mapping for user migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD users, whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.
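If you are using the Drush commands shown later in this series, importing both migrations in order might look like this (a sketch; adjust to your own setup):

$ drush migrate:import udm_user_pictures
$ drush migrate:import udm_users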

The example assumes Drupal was installed using the standard installation profile. Particularly, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Migrating user creation date

Have a look at the previous post for details on the source values. For reference, the user creation time is provided by the member_since column, and one of the values is April 4, 2014. The following snippet shows how the various user date-related properties are set:

created:
  plugin: format_date
  source: member_since
  from_format: 'F j, Y'
  to_format: 'U'
changed: '@created'
access: '@created'
login: '@created'

The created entity property stores a UNIX timestamp of when the user was added to Drupal. The value itself is an integer representing the number of seconds since the epoch. For example, 280299600 represents Sun, 19 Nov 1978 05:00:00 GMT. Kudos to the readers who knew this is Drupal's default expire HTTP header. Bonus points if you knew it was chosen in honor of someone’s birthdate. ;-)

Back to the migration, you need to transform the provided date from 'Month day, year' format to a UNIX timestamp. To do this, you use the format_date plugin. The from_format is set to F j, Y, which means your source date consists of:

  • The full textual representation of a month: April.
  • Followed by a space character.
  • Followed by the day of the month without leading zeros: 4.
  • Followed by a comma and another space character.
  • Followed by the full numeric representation of a year using four digits: 2014.

If the value of from_format does not make sense, you are not alone. It is actually assembled from format characters of PHP's date function. When you need to specify the from and to formats, you basically need to look at the documentation and assemble a string that matches the desired date format. You need to pay close attention because upper and lowercase letters represent different things, like Y and y for the year with four digits versus two digits, respectively. Some date components have subtle variations, like d and j for the day with or without leading zeros, respectively. Also, take into account white spaces and date component separators. To finish the plugin configuration, you need to set the to_format configuration to something that produces a UNIX timestamp. If you look again at the documentation, you will see that U does the job.
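For instance, if the source had provided dates like 04/04/2014 instead, a sketch of the equivalent mapping would use m/d/Y as the input format:

created:
  plugin: format_date
  source: member_since
  # m/d/Y matches dates like 04/04/2014 (hypothetical source format).
  from_format: 'm/d/Y'
  to_format: 'U'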

The changed, access, and login entity properties are also dates in UNIX timestamp format. changed indicates when the user account was last updated. access indicates when the user last accessed the site. login indicates when the user last logged in. For brevity, the same value assigned to created is also assigned to these three entity properties. The at sign (@) means copy the value of a previous mapping in the process pipeline. If needed, each property can be set to a different value or left unassigned. None is actually required.

Migrating user roles

For reference, the roles are provided by the user_roles column, and one of the values is forum moderator, forum admin. It is a comma-separated list of roles from the legacy system which need to be mapped to Drupal roles. It is possible that the user_roles column is not provided at all in the source. The following snippet shows how the roles are set:

roles:
  - plugin: skip_on_empty
    method: process
    source: user_roles
  - plugin: explode
    delimiter: ','
  - plugin: callback
    callable: trim
  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    default_value: null

First, the skip_on_empty plugin is used to skip the processing of the roles if the source column is missing. Then, the explode plugin is used to break the list into an array of strings representing the roles. Next, the callback plugin invokes the trim PHP function to remove any leading or trailing whitespace from the role names. Finally, the static_map plugin is used to manually map values from the legacy system to Drupal roles. All of these plugins have been explained previously. Refer to other articles in the series or the plugin documentation for details on how to use and configure them.

There are some things worth mentioning about migrating roles using this particular process pipeline. If the comma-separated list includes spaces before or after the role name, you need to trim the value because the static map will perform an equality check. Having extraneous space characters will produce a mismatch.

Also, you do not need to map the anonymous or authenticated roles. Drupal users are assumed to be authenticated and cannot be anonymous. Any other role needs to be mapped manually to its machine name. You can find the machine name of any role in its edit page. In the example, only two out of four roles are mapped. Any role that is not found in the static map will be assigned the value null, as indicated in the default_value configuration. After processing, the null value will be ignored, and no role will be assigned. But you could use this feature to assign a default role in case the static map does not produce a match.
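As a sketch of that last idea, assuming a hypothetical editor role exists on the destination site, the last step of the chain could become:

  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    # Unmatched roles fall back to this hypothetical default role.
    default_value: editor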

Migrating profile pictures

For reference, the profile picture is provided by the user_photo column, and one of the values is P01. This value corresponds to the unique identifier of one record in the udm_user_pictures file migration, which is part of the same demo module.  It is important to note that the user_picture field is not a user entity property. The field is created by the standard installation profile and attached to the user entity. You can find its configuration in the “Manage fields” tab of the “Account settings” configuration page at /admin/config/people/accounts. The following snippet shows how profile pictures are set:

user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo

Image fields are entity references. Their target_id property needs to be an integer containing the file ID (fid) of the image. This can be obtained using the migration_lookup plugin. Details on how to configure it can be found in this article. You could simply use user_picture as your field mapping because target_id is the default subfield and could be omitted. Also note that the alt subfield is not mapped. If present, its value will be used for the alternative text of the image. But if it is not specified, like in this example, Drupal will automatically generate an alternative text out of the username. An example value would be: Profile picture for user michele.
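If your source did provide alternative texts, a sketch of mapping both subfields could look like this, where photo_description is a hypothetical source column:

user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo
# Hypothetical source column holding the image's alternative text.
user_picture/alt: photo_description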

Technical note: The user entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the User class defining the entity. Note that more properties can be available further up the class hierarchy.

And with that, we wrap up the user migration example. We covered how to migrate a user’s mail, timezone, username, password, status, creation date, roles, and profile picture. Along the way, we presented various process plugins that had not been used previously in the series. We showed a couple of examples of process plugin chaining to make sure the migrated data is valid and in the format expected by Drupal.

What did you learn in today’s blog post? Did you know how to process dates for user entity properties? Have you migrated user roles before? Did you know how to import profile pictures? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series is made possible thanks to these generous sponsors. Contact us if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 10 2019
Aug 10

Today we are going to learn how to migrate users into Drupal. The example code will be explained in two blog posts. In this one, we cover the migration of email, timezone, username, password, and status. In the next one, we will cover creation date, roles, and profile pictures. Several techniques will be implemented to ensure that the migrated data is valid. For example, making sure that usernames are not duplicated.

Although the example is standalone, we will build on many of the concepts that have already been covered in the series. For instance, a file migration is included to import images used as profile pictures. This topic has been explained in detail in a previous post, and the example code is pretty similar. Therefore, no explanation is provided about the file migration to keep the focus on the user migration. Feel free to read other posts in the series if you need a refresher.

Example field mapping for user migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD users, whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. Particularly, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Understanding the source

It is very important to understand the format of your source data. This will guide the transformation process required to produce the expected destination format. For this example, it is assumed that the legacy system from which users are being imported did not have unique usernames. Emails were used to uniquely identify users, but that is not desired in the new Drupal site. Instead, a username will be created from a public_name source column. Special measures will be taken to prevent duplication, as Drupal usernames must be unique. Two more things to consider: first, source passwords are provided in plain text (never do this!); second, some elements might be missing in the source, like roles and profile pictures. The following snippet shows a sample record for the source section:

source:
  plugin: embedded_data
  data_rows:
    - legacy_id: 101
      public_name: 'Michele'
      user_email: '[email protected]'
      timezone: 'America/New_York'
      user_password: 'totally insecure password 1'
      user_status: 'active'
      member_since: 'January 1, 2011'
      user_roles: 'forum moderator, forum admin'
      user_photo: 'P01'
  ids:
    legacy_id:
      type: integer

Configuring the destination and dependencies

The destination section specifies that user is the target entity. When that is the case, you can set an optional md5_passwords configuration. If it is set to true, the system will take an MD5 hashed password and convert it to the encryption algorithm that Drupal uses. For more information on password migrations, refer to these articles for basic and advanced use cases. To migrate the profile pictures, a separate migration is created. The dependency of user on file is added explicitly. Refer to these articles for more information on migrating images and files and on setting dependencies. The following code snippet shows how the destination and dependencies are set:

destination:
  plugin: 'entity:user'
  md5_passwords: true
migration_dependencies:
  required:
    - udm_user_pictures
  optional: []

Processing the fields

The interesting part of a user migration is the field mapping. The specific transformation will depend on your source, but some arguably complex cases will be addressed in the example. Let’s start with the basics: verbatim copies from source to destination. The following snippet shows three mappings:

mail: user_email
init: user_email
timezone: user_timezone

The mail, init, and timezone entity properties are copied directly from the source. Both mail and init are email addresses. The difference is that mail stores the current email, while init stores the one used when the account was first created. The former might change if the user updates their profile, while the latter will never change. The timezone needs to be a string taken from a specific set of values. Refer to this page for a list of supported timezones.

name:
  - plugin: machine_name
    source: public_name
  - plugin: make_unique_entity_field
    entity_type: user
    field: name
    postfix: _

The name entity property stores the username. This has to be unique in the system. If the source data contained a unique value for each record, it could be used to set the username. None of the unique source columns (e.g., legacy_id) is suitable to be used as a username. Therefore, extra processing is needed. The machine_name plugin converts the public_name source column into a transliterated string with some restrictions: any character that is not a number or letter will be converted to an underscore. The transformed value is sent to the make_unique_entity_field plugin. This plugin makes sure its input value is not repeated in the whole system for a particular entity field. In this example, the username will be unique. The plugin is configured indicating which entity type and field (property) you want to check. If an equal value already exists, a new one is created by appending what you define as postfix plus a number. In this example, there are two records with public_name set to Benjamin. Eventually, the usernames produced by running the process plugin chain will be: benjamin and benjamin_1.

process:
  pass:
    plugin: callback
    callable: md5
    source: user_password
destination:
  plugin: 'entity:user'
  md5_passwords: true

The pass entity property stores the user’s password. In this example, the source provides the passwords in plain text. Needless to say, that is a terrible idea. But let’s work with it for now. Drupal uses portable PHP password hashes implemented by PhpassHashedPassword. Understanding the details of how Drupal converts one algorithm to another will be left as an exercise for the curious reader. In this example, we are going to take advantage of a feature provided by the migrate API to automatically convert MD5 hashes to the algorithm used by Drupal. The callback plugin is configured to use the md5 PHP function to convert the plain text password into a hashed version. The last part of the puzzle is to set the md5_passwords configuration to true in the destination section. This will take care of converting the already md5-hashed password to the value expected by Drupal.

Note: MD5-hashed passwords are insecure. In the example, the password is hashed with MD5 as an intermediate step only. Drupal uses other algorithms to store passwords securely.

status:
  plugin: static_map
  source: user_status
  map:
    inactive: 0
    active: 1

The status entity property stores whether a user is active or blocked from the system. The source user_status values are strings, but Drupal stores this data as a boolean. A value of zero (0) indicates that the user is blocked while a value of one (1) indicates that it is active. The static_map plugin is used to manually map the values from source to destination. This plugin expects a map configuration containing an array of key-value mappings. The value from the source is on the left. The value expected by Drupal is on the right.

Technical note: Booleans are true or false values. Even though Drupal treats the status property as a boolean, it is internally stored as a tinyint in the database. That is why the numbers zero or one are used in the example. For this particular case, using a number or a boolean value on the right side of the mapping produces the same result.
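For illustration, this equivalent sketch of the mapping uses boolean values on the right side and would produce the same result:

status:
  plugin: static_map
  source: user_status
  map:
    # Booleans instead of 0/1; Drupal stores either as a tinyint.
    inactive: false
    active: true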

In the next blog post, we will continue with the user migration. Particularly, we will explain how to migrate the user creation time, roles, and profile pictures.

What did you learn in today’s blog post? Have you migrated user passwords before, either in plain text or hashed? Did you know how to prevent duplicates for values that need to be unique in the system? Were you aware of the plugin that allows you to manually map values from source to destination? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Migrating users into Drupal - Part 2

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 09 2019
Aug 09

At Promet Source, conversations with clients and among co-workers tend to revolve around various aspects of compliance, user experience, site navigation, and design clarity. We need a common nomenclature for referring to interface elements, but that leads to the question of who makes this stuff up and what makes these terms stick?
 
I asked that recently, during an afternoon of back-to-back meetings. In separate contexts, “cookies,” “breadcrumbs,” and “hamburgers” were all mentioned as they pertain to the sites we are building for clients. But I got to wondering: what is it about the evolving Web lexicon that seems inordinately slanted towards tasty snacks?
 

One Theory

As we all know, devs and designers work very hard, with incredible focus for long hours at a stretch. Are we trying to inject some fun language that evokes touch, taste, and smell to a web that can feel rather flat sometimes when we are in the trenches?


I couldn’t help but wonder about a potentially unifying theme to cookies, breadcrumbs and the buns that provide the top and bottom horizontal lines of the increasingly ubiquitous hamburger icon. That sparked my curiosity and a bit of research.

Data/Cookie Jar

Let’s start with cookies -- a term that refers to the extraction and storage of user data such as logins, previous searches, activity on a site, and items in a shopping cart.  Almost all Websites use and store cookies on Web browsers.

a stack of chocolate chip cookies

Generally speaking, cookies are designed to inform better and more personalized Web experiences, but they do, of course, give rise to all sorts of privacy and security concerns. 
 
Potential cookie constraints for Websites developed in the United States for a U.S. audience are moving in an uncertain direction. Up to this point, it’s essentially been the Wild West, with few restrictions governing their usage. 
 
In the European Union, it’s a different story. Assorted rules and regulations, collectively known as the “Cookie Law,” have been in place for nearly a decade -- forbidding the tracking of users’ Web activity without their consent. 
 
As is the case with U.S.-based Websites that need to ensure accessibility, compliance with the Cookie Law can be complicated -- requiring rewriting and reconfiguration of code, followed by careful testing to ensure that the site’s code, server, and the user’s browser are aligned to prevent cookies from tracking user behavior and collecting information. And another issue that accessibility and cookies have in common: there’s more at stake than compliance. To an increasing degree, users avoid engaging with Websites when they believe that their activity is being tracked by the use of cookies, and there’s no question that overall levels of trust appear to be on the decline as privacy concerns increase. This is among the reasons why many websites are starting to give users the option of just saying no to cookies while still allowing them access to the site.

Connecting the Crumbs

Considerably newer to the Web lexicon than cookies, a breadcrumb or breadcrumb trail is a navigational aid in user interfaces designed to help users track their own activity within programs or websites, providing them with a sense of place within the bigger picture of the site. 
 
Breadcrumbs can take different forms. Generally speaking, a breadcrumb trail tracks the order of each page viewed, as a horizontal list below the top headers. This provides a guide for the user to navigate back to any point where they’ve previously been on the site. Think about Grimms’ story of Hansel and Gretel.
 
Breadcrumbs can be very helpful on complex, content-heavy sites. Who among us hasn’t found themselves frustrated in an attempt to navigate back to a page that seems to have temporarily disappeared?

On the Table

Unlike cookies, which for better or for worse, are stored behind the scenes and consumed in a manner that’s usually not known to the user, a breadcrumb trail is out in the open -- right upfront for the user to see and follow. Breadcrumbs are designed solely to enhance the user experience, functioning as a reverse GPS on complex Websites.  
 
As more and more users come to count on breadcrumbs as a navigational aid, we can expect that the demand for them will increase. At the same time, we can expect that usage of cookies will come under increased scrutiny along with a trend toward escalation of privacy concerns and a growing skittishness about how personal information is being shared. At Promet, we consider breadcrumbs to be a must-have on any site.

Time for Some Protein

As for the third item in our list of tasty Web terms, the hamburger is essentially all good. This three-line icon that’s started to appear at the top of screens serves as a mini-portal to additional options or pages.

Actual hamburger on the left. A web hamburger icon on the right.

What’s not to love about this feature that takes up so little space on the screen, but opens the door to a trove of additional navigation or features for apps and Websites? Fact is, UX/UI trends are constantly evolving, and users vary widely in the pace at which they pick up what’s new and next. The hamburger icon has a lot going for it and it’s not going away.

 

Meet the Search Sandwich

There’s a new item on the table, and we were just introduced to it by one of our UX-savvy clients. As far as I know, it doesn’t have an official name yet, so we affectionately refer to it as the “search sandwich.” It’s an evolved hamburger combined with a search icon to indicate to users that both the navigation menu and the search bar can be accessed from this icon. It looks a bit like a ham sandwich with an olive on top and might make an appearance on a website soon. Stay tuned.
 
So there you have it: key factors in our Web design world -- possibly a reflection of a desire to take our high-tech conversations down a notch, with these playful metaphors for elements that we must all learn to identify, whether we are designers, developers, or just web users. They remind us that the Web is a rapidly evolving environment of UI/UX trends -- created and consumed by humans.
 
Interested in serving up a tasty web experience? Contact us today

Aug 09 2019
Aug 09

Today we continue the conversation about migration dependencies with a hierarchical taxonomy terms example. Along the way, we will present the process and syntax for migrating into multivalue fields. The example consists of two separate migrations: one to import taxonomy terms accounting for term hierarchy, and another to import into a multivalue taxonomy term field. Following this approach, any node and taxonomy term created by the migration process will be removed from the system upon rollback.

Syntax for multivalue field migration.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD multivalue taxonomy terms, whose machine name is ud_migrations_multivalue_terms. The two migrations to execute are udm_dependencies_multivalue_term and udm_dependencies_multivalue_node. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. Particularly, it depends on a Tags (tags) taxonomy vocabulary, an Article (article) content type, and a Tags (field_tags) field that accepts multiple values. The words in parentheses represent the machine name of each element.

Migrating taxonomy terms and their hierarchy

The example data for the taxonomy terms migration is fruits and fruit varieties. Each row will contain the name and description of the fruit. Additionally, it is possible to define a parent term to establish hierarchy. For example, “Red grape” is a child of “Grape”. Note that no numerical identifier is provided. Instead, the value of the name is used as a string identifier for the migration. If term names could change over time, it is recommended to have another column that does not change (e.g., an autoincrementing number). The following snippet shows how the source section is configured:

source:
  plugin: embedded_data
  data_rows:
    - fruit_name: 'Grape'
      fruit_description: 'Eat fresh or prepare some jelly.'
    - fruit_name: 'Red grape'
      fruit_description: 'Sweet grape'
      fruit_parent: 'Grape'
    - fruit_name: 'Pear'
      fruit_description: 'Eat fresh or prepare a jam.'
  ids:
    fruit_name:
      type: string

The destination is quite short. The target entity is set to taxonomy terms. Additionally, you indicate which vocabulary to migrate into. If you have terms that would be stored in different vocabularies, you can use the vid property in the process section to assign the target vocabulary. If you write to a single one, the default_bundle key in the destination can be used instead. The following snippet shows how the destination section is configured:

destination:
  plugin: 'entity:taxonomy_term'
  default_bundle: tags
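For illustration, a sketch of the vid alternative mentioned above, assuming a hypothetical source_vocabulary column holding vocabulary machine names for each row, could look like this:

destination:
  plugin: 'entity:taxonomy_term'
process:
  # Hypothetical source column providing the vocabulary machine name per row.
  vid: source_vocabulary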

For the process section, three entity properties are set: name, description, and parent. The first two are strings copied directly from the source. In the case of parent, it is an entity reference to another taxonomy term. It stores the taxonomy term id (tid) of the parent term. To assign its value, the migration_lookup plugin is configured similarly to the previous example. The difference is that, in this case, the migration to reference is the same one being defined. This raises an important consideration: parent terms should be migrated before their children. This way, they can be found by the lookup operation. Also note that the lookup value is the term name itself, because that is what this migration set as the unique identifier in the source section. The following snippet shows how the process section is configured:

process:
  name: fruit_name
  description: fruit_description
  parent:
    plugin: migration_lookup
    migration: udm_dependencies_multivalue_term
    source: fruit_parent

Technical note: The taxonomy term entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the Term class defining the entity. Note that more properties can be available further up the class hierarchy.

Migrating multivalue taxonomy terms fields

The next step is to create a node migration that can write to a multivalue taxonomy term field. To stay on point, only one more field will be set: the title, which is required by the node entity. Read this change record for more information on how the Migrate API processes Entity API validation. The following snippet shows how the source section is configured for the node migration:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      thoughtful_title: 'Amazing recipe'
      fruit_list: 'Green apple, Banana, Pear'
    - unique_id: 2
      thoughtful_title: 'Fruit-less recipe'
  ids:
    unique_id:
      type: integer

The fruit_list column contains a comma-separated list of taxonomy terms to apply. Note that the values match the unique identifiers of the taxonomy term migration. If you had used numbers as migration identifiers there, you would have to use those numbers in this migration to refer to the terms. An example of that was presented in the previous post. Also note that there is one record that has no terms associated. This will be considered during the field mapping. The following snippet shows how the process section is configured for the node migration:

process:
  title: thoughtful_title
  field_tags:
    - plugin: skip_on_empty
      source: fruit_list
      method: process
      message: 'No fruit_list listed.'
    - plugin: explode
      delimiter: ','
    - plugin: migration_lookup
      migration: udm_dependencies_multivalue_term

The title of the node is a verbatim copy of the thoughtful_title column. The Tags field, mapped using its machine name field_tags, uses three chained process plugins. The skip_on_empty plugin reads the value of the fruit_list column and skips the processing of this field if no value is provided. This is done to accommodate the fact that some records in the source do not specify tags. Note that the method configuration key is set to process. This indicates that only this field should be skipped and not the entire record. Ultimately, tags are optional in this context and nodes should still be imported even if no tag is associated.

The explode plugin allows you to break a string into an array, using a delimiter to determine where to make the cut. Later, this array is passed to the migration_lookup plugin specifying the term migration as the one to use for the lookup operation. Again, the taxonomy term names are used here because they are the unique identifiers of the term migration. Note that neither of these plugins has a source configuration. This is because when process plugins are chained, the result of one plugin is sent as source to be transformed by the next one in line. The end result is an array of taxonomy term ids that will be assigned to field_tags. The migration_lookup plugin is able to process single values and arrays.

The last part of the migration is specifying the destination section and any dependencies. Refer to this article for more details on setting migration dependencies. The following snippet shows how both are configured for the node migration:

destination:
  plugin: 'entity:node'
  default_bundle: article
migration_dependencies:
  required:
    - udm_dependencies_multivalue_term
  optional: []

More syntactic sugar

One way to set multivalue fields in Drupal migrations is to assign an array as their value. Another option is to set each value manually using field deltas. Deltas are integer numbers starting with zero (0) and incrementing by one (1) for each element of a multivalue field. Although you could set any delta in the Migrate API, consider the field definition in Drupal. It is possible that a limit has been set on the number of values the field can hold. You can specify deltas and subfields at the same time. The full syntax is field_name/field_delta/subfield. The following example shows the syntax for a multivalue image field:

process:
  field_photos/0/target_id: source_fid_first
  field_photos/0/alt: source_alt_first
  field_photos/1/target_id: source_fid_second
  field_photos/1/alt: source_alt_second
  field_photos/2/target_id: source_fid_third
  field_photos/2/alt: source_alt_third

Manually setting a multivalue field is less flexible and more error-prone. In today’s example, we showed how to accommodate the list of terms not being provided. Imagine having to do that for each delta and subfield combination. Still, the functionality is there in case you need it. In the end, Drupal offers more syntactic sugar so you can write shorter field mappings. Additionally, there are various process plugins that can handle arrays for setting multivalue fields.
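As a sketch of one such plugin, the sub_process plugin provided by Drupal core runs a process pipeline on each element of an array. The photos source column and its fid and alt_text keys below are hypothetical:

process:
  field_photos:
    plugin: sub_process
    # 'photos' is assumed to be an array of associative arrays.
    source: photos
    process:
      # Map keys of each array element to the image field subfields.
      target_id: fid
      alt: alt_text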

Note: There are other ways to migrate multivalue fields. For example, when using the entity_generate plugin provided by Migrate Plus, there is no need to create a separate taxonomy term migration. This plugin is able to create the terms on the fly while running the import process. The caveat is that terms created this way are not deleted upon rollback.

What did you learn in today’s blog post? Have you ever done a taxonomy term migration before? Were you aware of how to migrate hierarchical entities? Did you know you can manually import multivalue fields using deltas? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 09 2019
Aug 09

One of Drupal’s biggest strengths is its data modeling capabilities. You can break the information that you need to store into individual fields and group them in content types. You can also take advantage of default behavior provided by entities like nodes, users, taxonomy terms, files, etc. Once the data has been modeled and saved into the system, Drupal will keep track of the relationship between them. Today we will learn about migration dependencies in Drupal.

As we have seen throughout the series, the Migrate API can be used to write to different entities. One restriction, though, is that each migration definition can only target one type of entity at a time. Sometimes, a piece of content has references to other elements. For example, a node that includes entity reference fields to users, taxonomy terms, and images. The recommended way to get them into Drupal is writing one migration definition for each. Then, you specify the relationships that exist among them.

Snippet of migration dependency definition

Breaking up migrations

When you break up your migration project into multiple smaller migrations, they are easier to manage and you have more control of the process pipeline. Depending on how you write them, you can rest assured that imported data is properly deleted if you ever have to roll back the migration. You can also enforce that certain elements exist in the system before others that depend on them can be created. In today’s example, we are going to leverage the example from the previous post to demonstrate this. The portraits imported in the file migration will be used in the image field of nodes of type article.

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD migration dependencies introduction, whose machine name is ud_migrations_dependencies_intro. Last time, the udm_dependencies_intro_image migration was imported. This time, udm_dependencies_intro_node will be executed. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

Writing the source and destination definition

To keep things simple, the example will only write the node title and assign the image field. A constant will be provided to create the alternative text for the images. The following snippet shows how the source section is configured:

source:
  constants:
    PHOTO_DESCRIPTION_PREFIX: 'Photo of'
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
    - unique_id: 2
      name: 'David Valdez'
      photo_file: 'P03'
    - unique_id: 3
      name: 'Clayton Dewey'
      photo_file: 'P04'
  ids:
    unique_id:
      type: integer

Remember that in this migration you want to use files that have already been imported. Therefore, no URLs to the image files are provided. Instead, you need a reference to the other migration. Particularly, you need a reference to the unique identifiers for each element of the file migration. In the process section, this value will be used to look up the portrait that will be assigned to the image field.

The destination section is quite short. You only specify that the target is a node entity and the content type is article. Remember that you need to use the machine name of the content type. If you need a refresher on how this is set up, have a look at the articles in the series. It is recommended to read them in order as some examples expand on topics that had been previously covered. The following snippet shows how the destination section is configured:

destination:
  plugin: 'entity:node'
  default_bundle: article

Using previously imported files in image fields

To be able to reuse the previously imported files, the migration_lookup plugin is used. Additionally, an alternative text for the image is created using the concat plugin. The following snippet shows how the process section is configured:

process:
  title: name
  field_image/target_id:
    plugin: migration_lookup
    migration: udm_dependencies_intro_image
    source: photo_file
  field_image/alt:
    plugin: concat
    source:
      - constants/PHOTO_DESCRIPTION_PREFIX
      - name
    delimiter: ' '

In Drupal, files and images are entity reference fields. That means they only store a pointer to the file, not the file itself. The pointer is an integer representing the file ID (fid) inside Drupal. The migration_lookup plugin allows you to query the file migration so imported elements can be reused in the node migration.

The migration option indicates which migration to query by specifying its migration id. Additionally, you indicate which columns in your source match the unique identifiers of the migration to query. In this case, the values of the photo_file column in udm_dependencies_intro_node match those of the photo_url column in udm_dependencies_intro_image. If a match is found, this plugin will return the file ID which can be directly assigned to the target_id of the image field. That is how the relationship between the two migrations is established.

Note: The migration_lookup plugin allows you to query more than one migration at a time. Refer to the documentation for details on how to set that up and why you would do it. It also offers additional configuration options.

As a good accessibility practice, an alternative text is set for the image using the alt subfield. Other than that, only the node title is set. And with that, you have two migrations connected to each other. If you were to roll back both of them, no file or node would remain in the system.

Being explicit about migration dependencies

The node migration depends on the file migration. That is, the files need to be migrated first before they can be used as images for the nodes. In fact, in the provided example, if you were to import the nodes before the files, the migration would fail and no node would be created. You can be explicit about migration dependencies. To do it, add a new configuration option to the node migration that lists which migrations it depends on. The following snippet shows how this is configured:

migration_dependencies:
  required:
    - udm_dependencies_intro_image
  optional: []

The migration_dependencies key goes at the root level of the YAML definition file. It accepts two configuration options: required and optional. Both accept an array of migration ids. The required migrations are hard prerequisites. They need to be executed in advance or the system will refuse to import the current one. The optional migrations do not have to be executed in advance. But if you were to execute multiple migrations at a time, the system will run them in the order suggested by the dependency hierarchy. Learn more about migration dependencies in this article. Also, check this comment on Drupal.org in case you have problems where the system reports that certain dependencies are not met.

Now that the dependency among migrations has been explicitly established, you have two options: either import each migration manually in the expected order, or import the parent migration using the --execute-dependencies flag. When you do that, the system will take care of determining the order in which all migrations need to be imported. The following two snippets will produce the same result for the demo module:

$ drush migrate:import udm_dependencies_intro_image
$ drush migrate:import udm_dependencies_intro_node
$ drush migrate:import udm_dependencies_intro_node --execute-dependencies

In this example, there are only two migrations, but you can have as many as needed. For example, a node with references to users, taxonomy terms, paragraphs, etc. Also note that the parent entity does not have to be a node. Users, taxonomy terms, and paragraphs are all fieldable entities. They can contain references the same way nodes do. In future entries, we will talk again about migration dependencies and provide more examples.

Tagging migrations

The core Migrate API offers another mechanism to execute multiple migrations at a time. You can tag them. To do that, you add a migration_tags key at the root level of the YAML definition file. Its value is an array of arbitrary tag names to assign to the migration. Once set, you run them using the migrate import command with the --tag flag. You can also roll back migrations per tag. The first snippet shows how to set the tags and the second how to execute them:

migration_tags:
  - UD Articles
  - UD Example
$ drush migrate:import --tag='UD Articles,UD Example'
$ drush migrate:rollback --tag='UD Articles,UD Example'

It is important to note that tags and dependencies are different concepts. They both allow you to run multiple migrations at a time. It is possible that a migration definition file contains both, either, or neither. The tag system is used extensively in Drupal core for migrations related to upgrading to Drupal 8 from previous versions. For example, you might want to run all migrations tagged ‘Drupal 7’ if you are coming from that version. It is possible to specify more than one tag when running the migrate import command, separating each with a comma (,).

Note: The Migrate Plus module offers migration groups to organize migrations similarly to how tags work. This will be covered in a future entry. Just keep in mind that tags are provided out of the box by the Migrate API. On the other hand, migration groups depend on a contributed module.

What did you learn in today’s blog post? Have you used the migration_lookup plugin to query imported elements from a separate migration? Did you know you can set required and optional dependencies? Have you used tags to organize your migrations? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 08 2019
Aug 08

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past week. You'll find that the meetings, while providing updates on completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide insights on, we encourage you to get involved.

Drupal 9 Readiness (08/05/19)

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. Site developers who want to stay up-to-date to ensure the easiest Drupal 9 upgrade of their sites are also welcome.

  • Usually happens every other Monday at 18:00 UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public Drupal 9 Readiness Agenda anyone can add to.
  • Transcript will be exported and posted to the agenda issue.

Symfony 4/5 compatibility

The issue, Allow Symfony 4 to be installed in Drupal 8, has had a lot of work put into it by Michael Lutz.

Contrib deprecation testing on drupal.org

Gábor Hojtsy grabbed the data, did some analysis, and posted his findings on preparing for Drupal 9. The main topic of the post is to stop using drupal_set_message() now.

Examples module Drupal 9 compatibility

Andrey Postnikov posted about the Kharkov code sprint, where 23 issues were addressed. Follow the Examples for Developers project for more information. There are still a bunch of issues to review there if anyone is interested!

Module Upgrader Drupal 9 compatibility

Deprecation cleanup status - blockers to Drupal 9 branch opening

According to Drupal core's own deprecation testing results, there are currently 13 child issues open; most of them need reviews.

Twig 2 upgrade guide and automation and other frontend deprecations tooling

Semantic versioning for contrib projects

Now that we've come to a path forward for how we plan on supporting semver in core (core key using semver + composer.json for advanced features), the Drupal Association is planning on auditing our infrastructure to start implementing semver.

New Drupal 8.8 deprecations that may need to be backported

  • Ryan Aslett and Gábor Hojtsy did some analysis on deprecations in contrib that are new in Drupal 8.8. They found that 17% of all deprecated API use in contrib now comes from code deprecated in either 8.7 or 8.8. Ryan Aslett looked at the toplist of those deprecations and categorized them based on whether the replacements are also introduced in 8.8 or earlier.
  • We're not backporting APIs; that carries far too much semver-breaking risk. If a contrib maintainer has usages of code that was deprecated in 8.8.x, and they want their module to be 9.x compatible on the day that 9.0 comes out, they can:
    • do internal version checking,
    • open another branch,
    • wait until 9.1 to be 100% compatible with all supported versions, or
    • drop support for 8.7.x.

Renaming core modules, e.g. actions

  • There's a meta about renames.
  • Renaming modules can have impacts on other modules that declare a dependency on those modules. 
  • We need some way to prove that a rename doesn't break contrib modules.

Migration Initiative Meeting (08/08/19)

This meeting:

  • Usually happens every Thursday and alternates between 14:00 and 21:00 UTC.
  • Is for core migrate maintainers and developers and anybody else in the community with an interest in migrations.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public migration meeting agenda anyone can add to.
  • Transcript will be exported and posted to the agenda issue.
  • For anonymous comments, start with a bust in silhouette emoji. To take a comment or thread off the record, start with a no entry sign emoji.

Some issues need review:

  1. Add test of D7 term localized source plugin
  2. Migrate D7 synchronized fields
  3. Ensure language is not Null in translation source queries 
  4. Language specific term (i18n_taxonomy) should not rely on entity translation in D7 taxonomy term migration
  5. Migrate D6 and D7 node revision translations to D8 
  6. Migrate D7 i18n taxonomy term language 
  7. Use the lock service for migration locks 
  8. Undeprecate Drupal\comment\Plugin\migrate\source\d6\Comment::prepareComment() and mark as internal
  9. Create Migration Lookup service 
  10. Validate Migration State should load test fixture
  11. Boolean Field On and Off Label not Migrating
  12. Assert plural labels exist on migrate upgrade form
  13. Migrate UI - review help text

Aug 08 2019
Aug 08

Text on the Rosetta Stone

The web is constantly growing, evolving and—thankfully—growing more accessible and inclusive.

It is becoming expected that a user can interact with a website solely via keyboard or have the option to browse in their native language. There are many ways to serve the needs of non-native-language users, but one of the more robust is Drupal Multilingual.

Unlike third-party translation plugins like Google Translate or browser translation tools, Drupal's suite of core Multilingual tools allows you to write accurate and accessible translated content in the same manner as you write your default-language content. With no limit on the number of languages, settings for right-to-left content, and the ability to translate any and all of your content, Drupal 8 can create a true multi-language experience like never before.

There is, however, a bit of planning and work involved.

Hopefully, this blog series will help smooth the path to truly inclusive content by highlighting some project management, design, site building, and development gotchas, as well as providing some tips and tricks to make the multilingual experience better for everyone. Part one will help you decide if you need multilingual as well as provide some tips on how to plan and budget for it.

Aug 08 2019
Aug 08

For most Drupal projects, patches are inevitable. It’s how we, in the Drupal community, share code. If that scares you, don’t worry: the community is working hard to move to a pull/merge request workflow. Due to the collaborative nature of Drupal as a thriving open source community and the ever-growing ecosystem of contrib modules, patches are the ever-evolving glue that can hold a site together.

Before Drupal 8, you may have seen projects use drush make, a Drupal-specific solution. As part of the “get off the island” movement, Drupal adopted the existing dependency manager Composer. Composer does a decent job of alleviating the headaches of managing several sites with different dependencies. However, out of the box Composer will revert patched core files and contrib modules, and it is for that reason that the composer-patches project was created. In this blog post, we are going to review how to set up composer-patches for a Composer-managed project and how to specify locally or remotely hosted patches.

The setup

In your favorite command line tool, you will want to add the composer-patches project:

composer require cweagans/composer-patches:~1.0 --update-with-dependencies

With this small change, your project is now set up for success because composer can manage your patches. 

Local patches

Sometimes you will find that you need to patch contrib or core specifically for your project, and therefore the patch exists locally. Composer-patches can apply that patch for you; we just need to tell it where it is. Let’s look at an example project that has a core patch applied and saved locally in the project root directory ‘patches/core-invalid-config-structures.patch’:
    ...
    "extra": {
      "patches": {
        "drupal/core": {
          "Core Invalid config structures": "patches/core-invalid-config-structures.patch"
        }
      }
    }

In your composer.json, you will want to add an “extra” section if it doesn’t already exist.  Composer-patches will take the packages listed in “patches” and try to apply any listed patches. In our above example, the package we are patching is “drupal/core”. Patches are declared as follows:

“Patch description”: “path to patch file”

This information will be printed on the command line while Composer tries to update the package, which makes it important to summarize the patch’s purpose well. If you would like to see what this looks like in the wild, take a look at our distribution Rain, which leverages a couple of contrib patches.

After manually updating composer.json, it is always a good idea to run composer validate to confirm the JSON syntax is right. If you get the green success message, run composer update drupal/[projectname], e.g. composer update drupal/core, to have the patch applied.
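
For the core patch above, that sequence would be:

composer validate
composer update drupal/core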

You will know that the patch is applied based on the output:

patch output

As you can see, the package getting patched is removed, added back and the patch is applied. 

Note: Sometimes I feel like I have to give Composer a nudge. Feel free to delete /core, /vendor, or /modules/contrib, but if you delete composer.lock, know that your dependencies could update based on your constraints. Composer.json tracks our package dependencies at certain version constraints, while composer.lock is the recipe of computed versions based on those constraints. I have found myself running the following:

rm -rf core && rm -rf modules/contrib && rm -rf vendor
composer install

Remote Patches

When possible we should open issues on Drupal.org and post patches there. That way, the community can work together to solve a problem and usually you’ll get a more reliable, lasting solution. Think about it this way - would you rather only you or your team review a critical patch to your project or hundreds of developers?

To make composer-patches grab a remote patch make the following changes:
    ...
    "extra": {
      "patches": {
        "drupal/core": {
          "#2925890-10: Invalid config structures": "https://www.drupal.org/files/issues/2018-09-26/2925890-10.patch"
        }
      }
    }

The only change here is that rather than the path to the local patch, we have substituted in the URL of the patch. This will produce a similar success message when applied correctly:

remote patches

Tips 

So far, I’ve shown you how to get going with the composer-patches project, but there are a lot of settings/plugins that can elevate your project. A feature I turn on for almost all sites is exit on patch failure, because it is a big deal when a patch fails. If you too want to turn this feature on, add the following line to the “extra” section of your composer.json:

"composer-exit-on-patch-failure": true,

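Combined with our earlier remote patch example, the extra section would then look like this:

    "extra": {
      "composer-exit-on-patch-failure": true,
      "patches": {
        "drupal/core": {
          "#2925890-10: Invalid config structures": "https://www.drupal.org/files/issues/2018-09-26/2925890-10.patch"
        }
      }
    }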
I have also found it helpful to add a link back to the original issue in the composer.json patch declaration. Imagine working on a release when one of your patches fails, but the only reference you have to the issue is the patch file URL. It is times like these that a link to the issue can make your day. If we made the same change to our example from before, it would look like the following:

 "drupal/core": {
          "#2925890-10: Invalid config structures (https://www.drupal.org/project/drupal/issues/2925890)" : "https://www.drupal.org/files/issues/2018-09-26/2925890-10.patch"
        }

Conclusion

Composer-patches is a critical package for any Drupal project managed by Composer. In this blog I showed you how to get started with the project and some of the tips and tricks I’ve learned along the way. How does your team use composer-patches? Do you have a favorite setting that I didn’t mention? Feel free to drop a comment and share what works for you and your team.

Aug 08 2019
Aug 08

If you have a local business — a restaurant, a bar, a dental clinic, a flower delivery service, a lawyer's office, or some other business — you will benefit immensely from a strong online presence. In this post, we will discuss why Drupal 8 is a great choice to build a local business website. Read on to see how numerous Drupal 8’s benefits will play in favor of your local business.

Some stats about why your local business needs a website

Local businesses once relied on word-of-mouth marketing. But the new digital era has changed the game. Customers widely use local Google search to find places, services, or products; they trust online customer reviews and otherwise rely on the Internet for their decisions.

So consider these stats about how things are going for local businesses in the digital world:

  1. Users rely on search engines for finding local information. According to Google's study, 4 in 5 people do this.
  2. Local searches are also very goal-oriented. The same Google’s study says that 50% of users who performed a local search on their smartphones visited the store within the next 24 hours. 
  3. Mobile local searches are growing like crazy. According to Statista, the number of mobile local searches is forecast to reach 141.9 billion in 2019, compared to 66.5 billion in 2014. At the same time, the number of desktop searches will drop slightly (62.3 and 66.5 billion, respectively).
  4. Smartphone shoppers love local search. Statista also shows that 82% of smartphone shoppers in the US have used their device for local search with the “near me” keyword as of 7/2018.
  5. Customers read reviews for local businesses. According to the study by BrightLocal, 86% of people do this.

Reasons to build a local business website on Drupal 8

  • SEO-friendliness with plenty of useful modules

First of all, local businesses shouldn’t miss their unique opportunity — they need to make the best of SEO. Google has special approaches to local search. When users search by adding a city name or the “near me” keyword, Google lists the best results near the top of the SERPs in a variety of rich ways. Among them:

  1. the Knowledge Panel
  2. locations on the Google Map
  3. carousels with images, news, reviews, etc.

Moreover, users are able to get all the necessary information, like your business hours, contacts, reviews, directions, and appointment booking, without even clicking through to your website (zero-click SERPs).

Local Google search on a mobile phone

How to get to these results? Among the recommendations are:

  1. provide detailed information in Google My Business listing
  2. have an optimized Knowledge Graph for your website
  3. optimize content using local keywords
  4. optimize content so it fits Google’s rich snippets
  5. and, of course, follow overall general best SEO practices 

Here is where the Drupal 8 CMS can be your very helpful assistant. In addition to being SEO-friendly out-of-box, Drupal 8 has a wealth of useful SEO modules for various purposes. They include:

  1. SEO Checklist
  2. Metatag
  3. XML sitemap
  4. RobotsTxt
  5. Real-time SEO
  6. Pathauto
  7. Redirect
  8. Google Analytics

and many more.

  • Content easy to manage

Unique and relevant content, regularly updated and optimized with local keywords, is one of your most important local SEO secrets. The richer the content is, the richer it looks in Google local search results. In addition, trimming your content to fit Google’s rich snippets is the key to optimizing your website for voice search.

In Drupal 8, it is easy to create, edit, and present content in attractive ways. Drupal 8 offers you:

  1. quick edits directly on the page
  2. the Media Library to easily enrich your content with images, videos, and audios
  3. handy content previews
  4. drag-and-drop page layouts with Layout Builder
  5. Drupal Views grids, slideshows, and carousels for the attractive content presentation
  6. content revision history
  7. mobile-friendly admin interfaces
  8. content moderation workflows

and much more.  
 

Media Library in Drupal 8
  • Mobile optimization out of box

In addition to the above mobile statistics, we have a few more for you. The mobile share of organic search engine visits has reached 59% in 2019, versus 27% in 2013. So your successful local business absolutely needs a mobile-friendly website.

Here is where Drupal 8 wins hands down. It has been built around a mobile-first approach. The CMS features built-in modules for creating responsive web design — the Responsive Image and the Breakpoint. 

The responsive web design technique allows your website pages to adapt to any user’s screen by showing a different layout. The page elements resize, change their position, or disappear to provide the smoothest viewing experiences for everyone. 

  • Multi-language to attract more guests

Let’s suppose you run a local business in your country in your local language. Consider adding English as an international language, or another language based on your tourist audience. That way, you can attract your city’s guests as they use Google search.

Drupal 8 is the best option for multilingual websites, allowing you to easily add as many languages as you wish. It supports a hundred of them out of the box, with interface translations included.

Thanks to Drupal 8 Multilingual Initiative (D8MI), Drupal 8 has four powerful modules responsible for every aspect of translation. 

  • High accessibility standards

According to the CDC (Centers for Disease Control and Prevention), 26% (1 in 4) of adults in the US have some form of disability. That is a quarter of your potential customers. Moreover, they are the ones who may need your local services, such as local delivery, more than others.

To be accessible to all users without barriers, your website should adhere to accessibility standards. Drupal 8 focuses on them and offers advanced accessibility features, including the use of WAI-ARIA attributes, accessible inline form errors, aural alerts, obligatory ALT text for images, and much more.

  • Presence in multiple channels

Local businesses often benefit from the digital presence in multiple channels — imagine, for example, a pizza delivery mobile app connected to your website. 

Drupal 8 offers amazing opportunities to exchange your website’s data with third-party applications. It has five powerful built-in modules for creating REST APIs and sharing Drupal data in the JSON, XML, or other formats needed by the apps. 

  • Easy social media integration

It’s no longer possible to successfully manage a business without a social media presence. Drupal 8 allows third-party integration with any systems, and social networks are not an exception. 

It is incredibly easy to add social media icons to your website pages, provide social share buttons for your content, embed social media feeds, and much more. Social media modules in Drupal 8 are very numerous and useful. 

Among them, Easy Social, AddToAny Share Buttons, Social media share, Social Media Links Block and Field, and many more. In addition, there are network-specific modules like Video Embed Instagram, Pinterest Hover button, Facebook Album, and plenty of others.

Social media posts can also be embedded in your content using the Drupal 8 core Media module as a basis. 

Build a local business website on Drupal 8 with us!

The above reasons to build a local business website on Drupal 8 are just a few of a thousand. Contact our Drupal team and let’s discuss in more detail how we can help your local business flourish!

Aug 08 2019
Aug 08

Agiledrop is highlighting active Drupal community members through a series of interviews. Now you get a chance to learn more about the people behind Drupal projects.

In our latest interview, Ricardo Amaro of Acquia reveals how his discovery of Drupal has enabled him to work on projects he enjoys and that make a meaningful impact. Read on to learn more about his contributions and what the Drupal community in Portugal is like. 

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

My name is Ricardo Amaro. I live with my wife and 2 kids in Lisbon, Portugal. I’ve been working for Acquia since 2011 and was recently promoted to Principal Site Reliability Engineer, where we deal with all the challenges of helping ~55k Drupal production sites grow every day.

I’ve been contributing in several aspects to the Drupal Community and sometimes that effort goes beyond. An example of that is the published co-authoring of the “Seeking SRE” book (O’Reilly) with my chapter about Machine Learning for SRE, since that main idea came out of a presentation I did at DrupalCon Vienna 2017 explaining how automation and machine learning could help increase reliability on Drupal sites. 

Over the years, I’ve also initiated several other projects within the Drupal community.

On the local front I founded the Portuguese Drupal Association 8 years ago and I am its current elected president. That same year we organized our first DrupalCampLisbon2011. Nowadays we organize DrupalDays and Camps all over the country and meet regularly on Telegram and video-conferences. Last year we organized DrupalDevDays Lisbon 2018 which was a really good turn out for the entire community.

My main drivers are a passion for Free Software and Digital Rights. That started back in the 90’s when I found myself struggling with the proprietary/closed software available at the time, and installing Linux/Slackware in 1994 was an enlightening moment to my own question “isn’t there a better option?”. But I only switched all my machines to Linux in 2004 and that’s what I’ve used up to now. Because I think the GNU/Free Software ecosystem, where Drupal was able to grow, is fragile and needs to be nourished by all of us.

I have a degree in Arts and a second one in Computer Science & Engineering and I’m now taking a master in Enterprise Information Systems.

Before Acquia, I worked both in the public sector and in the private sector in Portugal, applying Agile techniques and encouraging the DevOps culture. I’ve managed teams, development projects and operations also in South Africa and around Europe. 

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

I came across Drupal in 2008, when searching for an OpenSource CMS software in order to create some Media Publishing sites for the company I was working for back at that time. My role as an IT Director was not easy, since the company was struggling with funding, so Drupal 6 was an amazing tool that enabled us to grow several of the sites and particularly create a self service on our main classified advertisement sites.

I found the Drupal Portuguese community at that time struggling to have a legal entity and to be able to grow and organize events inside the country. Portugal has always been mostly monopolized by large corporations like Microsoft and Oracle, while Free software has always been seen as “experimental” solutions, at best.

I took upon myself the commitment to bring the local Drupal community the pride and success they all deserve. I’ve grown a friendship for each and every person in our community and now I couldn't imagine myself without them, as I couldn't imagine myself without Drupal.

3. What impact has Drupal made on you? Is there a particular moment you remember?

Putting it simply: Drupal changed my life! Drupal brought justification to my values and aspirations. I honestly couldn’t have imagined, in a world that is more and more inclined to monopolistic visions, being able to exercise and contribute to the Free Software community and make a living out of it.

The particular moment I felt this more strongly the first time was around 2011 when some decision makers from one of these large corporations asked me if I could bring my Drupal presentation to them at the time, because they wanted to know what this Drupal thing was all about. So I organized a few of my usual slides and took them with me.

This was in a very fancy villa in one of the most expensive areas near Lisbon. I did my pitch, and by the end they seemed very impressed with what Drupal had to offer for free: so many powerful features, so much commitment. Naturally, one of their questions was how they could make their proprietary software, which had started to decline, embark on this positive wave of growth. My obvious answer was “release your code as open source”. They looked at me in disbelief, of course, and still invited me for a boat ride, which I declined politely.

I went back home and from time to time thought about that episode until it started to look like a mirage in the past. To my surprise, in the most recent years, that same corporation has started releasing open source code, created community projects and apparently changed their minds… 

4. How do you explain what Drupal is to other, non-Drupal people?

Drupal lets you turn big ideas into digital realities. An innovative web platform for creating engaging digital websites and experiences. Drupal is the world's most popular enterprise-class web content management system. It’s developed by more than 46,000 people that are part of the 1.3 million users registered on drupal.org.

Last year we had about 1,000 companies with 8,000 code contributions and this is reflected in millions of websites with 12% market share, plus an annual growth of 51%. If these people still had some more time I would present them the Drupal Pitch Deck. :)

5. How did you see Drupal evolving over the years? What do you think the future will bring?

From my perspective, Drupal has always been growing and even making positive bonds with other Free Software initiatives out there. One of the most interesting moments happened last year at Drupal Europe 2018 (11-14 Sept), where the founders of RocketChat and Nextcloud met and ended up announcing a partnership on the 17th of September…

We should follow that example and support more interaction and collaboration with other projects in our ecosystem. For starters, we should make an effort to use tools like RocketChat (see https://drupalchat.me) and grow awareness that companies like Slack have zero, or even less, to do with our values, and that we don’t gain anything by crossing our arms and letting people be driven there. The future is open, the future is community and inclusion.

6. What are some of the contributions to open source code or to the community that you are most proud of?

For sure, the ongoing effort I put into the Portuguese Drupal Association to keep people motivated, things organized, and events happening is the first one. The highlight of this was DrupalDevDays Lisbon 2018. The second one is DrupalCI, which had a major impact on Drupal 8’s final release.

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor. 

I’m most excited about containers and the power behind them. That is only possible because the GNU/Linux operating system supports them. Kubernetes in particular is also of interest, since it follows the reasoning of auto-scalability that we need for distributed systems. Drupal is flying to the future already with its headless/decoupled capabilities. I’m seeing containers even being applied to support machine learning algorithms and neural networks.

Another thing that I’m particularly interested in is investigating better ways to make communities grow and ensure that they have the necessary tools to make that happen.  

My personal endeavor is, in the end, to see my kids grow in a healthy environment, rich in possibilities, and for that I need to keep information available for them and help the Free Software ecosystem stay alive. After all, what else is there that can guarantee our future human independence from “blackboxed” technology? If you can’t see, study or change the source, what role is left for you? 

 Drupal DevDays Lisbon 2018

Aug 08 2019
Aug 08

Back in early 2010, Jason Grigsby pointed out that simply setting a percentage width on images was not enough: you needed to resize these images as well for a better user experience. He pointed out that if you served the right-sized images on the original responsive demo site, more than 75% of the weight of those images could be shaved off on smaller screens.

Ever since, the debate on responsive images has evolved around what the best solution is for rendering perfect, responsive images without any hassle.

We all know how Drupal 7 does a great job in handling responsive images with its modules. However, with Drupal 8, things are even better now!

Responsive Images in Drupal 8

The Responsive Image module in Drupal 8 provides an image formatter that maps image styles to breakpoints in order to render a flawless responsive image using the picture tag.

When we observe how Drupal 8 handles responsive images when compared to Drupal 7, some of the features to be noted are:

  • Drupal 7 offers the contributed Picture module, which in the latest version of the CMS is known as Responsive Image.
  • In addition, the Responsive Image and Breakpoint modules are part of Drupal core in the latest version of the CMS.

The Problem

One of the major problems with images in web development is that browsers know nothing about them, and are clueless about what size an image will render at in relation to the viewport of different screens, until the CSS and JavaScript are loaded.

However, the browser does know about the environment in which the images are rendered, including the size of the viewport and the resolution of the screen.

The Solution 

As we mentioned in previous sections, responsive images use the picture element, whose sizes and srcset attributes play a major role in letting the browser choose the best image based on the image style selections.

So Drupal 8 has done a great job by providing the Responsive Image module in core. Devices with smaller screens will download smaller images, resulting in better website load time and improved performance.
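
For illustration, the markup the module produces looks roughly like this (a simplified sketch; the style names and file paths are hypothetical and depend on your configuration):

<picture>
  <source srcset="/files/styles/large/photo.jpg" media="(min-width: 1024px)" type="image/jpeg">
  <source srcset="/files/styles/medium/photo.jpg" media="(min-width: 768px)" type="image/jpeg">
  <img src="/files/styles/small/photo.jpg" alt="An example photo" />
</picture>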

Steps to reproduce

  1. Enable the Responsive Image and Breakpoint modules.
  2. Set up the breakpoints for your project's theme.
  3. Set up the image styles for responsive images.
  4. Create a responsive image style for your theme.
  5. Assign the responsive image style to an image field.

Enable Responsive images and breakpoint module

Since it's part of Drupal 8 core, we will not require any extra modules. All you have to do is enable the Responsive Image module; the Breakpoint module will already be installed with the standard profile. Otherwise, enable the Breakpoint module as well.

To enable the modules, go to admin -> extend, select the modules, and enable them.

extend page

Setup the breakpoints for your project's theme
 

breakpoints

Setting up the theme’s breakpoints is the most important part of making your site responsive.

If you are using a core theme like Bartik, Seven, Umami or Claro, you will already have the breakpoints file and you don’t have to create any new ones.

However, if you are using a custom theme for your project, it is important that you define the breakpoints in "yourthemename.breakpoints.yml", which can be found in your theme directory, usually "/themes/custom/yourthemename".

Each breakpoint assigns images to a media query. For example, images rendered on mobile might be smaller, i.e. have a width below 768px, while medium screens will have a width between 768px and 1024px.


Each breakpoint will have the following keys (see the sample file after this list):

  • label: a human-readable label for the breakpoint.
  • mediaQuery: the media query defining the viewport within which the images are rendered.
  • weight: the order of display.
  • multipliers: a measure of the viewport’s device resolution; normally 1x is used for standard displays and 2x for retina displays.
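
A minimal yourthemename.breakpoints.yml might look like this (a sketch; the machine names and media queries are hypothetical and should match your design):

yourthemename.mobile:
  label: Mobile
  mediaQuery: '(max-width: 767px)'
  weight: 0
  multipliers:
    - 1x
yourthemename.medium:
  label: Medium
  mediaQuery: '(min-width: 768px) and (max-width: 1023px)'
  weight: 1
  multipliers:
    - 1x
yourthemename.large:
  label: Large
  mediaQuery: '(min-width: 1024px)'
  weight: 2
  multipliers:
    - 1x
    - 2x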

Setting up the image styles for responsive images

Now we will have to create an image style for each of the breakpoints. You can configure your own Drupal 8 image styles at admin -> config -> media -> image-styles.

Click ‘Add image style’. Give a valid name for your image style and use the scale and crop effect, which will provide cropped images. If the images appear stretched, add multiple image styles for the different viewports.

add image style

Creating a responsive image style for your theme 

This is where you provide the multiple image style options to the browser and let the browser choose the best out of the lot. 

responsive image style


To create a new responsive Drupal 8 image style, navigate to:
Home -> admin -> config -> media -> responsive-image-style and click ‘Add responsive image style’.

Give a valid name for your responsive image style, select the breakpoint group (choose your theme), and assign image styles to the breakpoints listed.

There are multiple options for the image style configuration:

  • Choose single image style: select the single image style that will be rendered on the particular screen.
  • Choose multiple image styles: select multiple image styles and also specify the viewport width for each image style.

Finally, there is an option to select a fallback image style. The fallback image style should only appear on the site if an error occurs.

fallback responsive image

Assigning the responsive image style to an image field 

  • Once all the configurations are done, move to the image field and add the responsive image style.
  • To do that, go to the field’s manage display settings and select the responsive image style we created.
  • Add content and see the results on the page with the responsive image style applied.

Final Results 

responsive image style to an image field

The image at a minimum width of 1024px (for large devices).

minimum width of 1024px

The image at a minimum width of 768px (for medium devices).

Responsive image style

The image at a maximum width of 767px (for small devices).

Aug 08 2019
Aug 08

With Dries’ latest announcement on the launch of Drupal 9 in 2020, enterprises are in urgent need of upgrading from Drupal 7 and 8 to version 9.

Drupal 7 and 8 will reach their end of life in November 2021, and those who wish to stick to previous versions might possibly face security challenges.

Eager but unsure what the process would be like? This comprehensive guide aims to simplify the entire Drupal migration process for easy implementation.

Getting Started with the Migration Process

When a site is upgraded to Drupal 7, the old database is upgraded in place to the Drupal 7 structure. However, a different approach is followed when a site is upgraded from Drupal 7 to Drupal 8.

Upgrading D7 to D8

Step 1: Take back-up of your website

Start the migration process by making a local copy of your website. As making changes to the live site is not recommended, it is best practice to keep all data safe by taking a backup locally on your machine.

Step 2: Install fresh new site

Install a new Drupal 8 site by downloading the latest version of Drupal 8 from drupal.org.

Drupal 8.7 is the latest release.

Install the latest release of Drupal 8, along with its dependencies, using Composer.
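
A minimal sketch, assuming the drupal-composer/drupal-project template that was widely used at the time (the directory name my_site is hypothetical):

composer create-project drupal-composer/drupal-project:8.x-dev my_site --no-interaction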

Step 3: Prepare your Drupal 8 website for the migration

Setup a local Drupal 8 website on your machine as a destination website for the migration process.

Step 4: Verifying the modules are in core and enabled

Ensure the Migrate, Migrate Drupal, and Migrate Drupal UI modules are enabled on your Drupal 8 site. This can be done by navigating to the ‘Extend’ tab of your website and confirming that all of the above modules are present in core.

Now, check the three modules and click the ‘Install’ button at the bottom of the page.


 

Step 5: Upgrade your website

Go to your website and append /upgrade to the website address (www.<yourwebsitename>.com/upgrade) and follow the instructions. Then click the ‘Continue’ button.


Step 6: Enter website details

On clicking ‘Continue’, a screen comes up asking for the website credentials, database location, and other details.


 

Step 7: Start the migration

If the database credentials for your source database are correct, the upgrade review page will appear in the Migrate UI. It shows a summary of the upgrade status for all installed modules on the old site.

As a site builder, you should carefully review the modules that will not be upgraded and evaluate whether your Drupal 8 site will work without them.

Then click the ‘Perform Upgrade’ button.

Tip: Don’t proceed with the actual upgrade without first installing any missing Drupal 8 modules.

Tip: If you get ID conflict warnings

If you manually create a node on the Drupal 8 site before upgrading, and the source Drupal 6/7 site has a node with the same ID, the migration system will overwrite the node that was manually created in Drupal 8.

If conflicting IDs are detected, a warning will be shown; you can either ignore it at the risk of losing data, or abort and take an alternative approach.

Depending on the size and types of content/configuration on the source site, the upgrade may take a very long time. Once the process is finished, you are directed to the site's front page with messages summarizing the results.

Upgrading D8 to D9

When it comes to migrating from Drupal 8 to Drupal 9, the process is much simpler. As D9 is an extended version of D8, it is much easier to upgrade. Read the complete guide to the Drupal 8 to Drupal 9 upgrade to understand the whole process.

Alternate Method: Migration using Drush Command

Upgrading to Drupal 8 using Drush is useful when migrating complex sites, as it allows you to run migrations one by one and to roll them back.

If you are using Composer to build your Drupal 8 site, then you may already have Drush installed. However, if not, then you can install Drush from command line as follows:

composer require drush/drush

To migrate using Drush, you need to download and enable the contributed modules Migrate Upgrade, Migrate Plus, and Migrate Tools, as sketched below.
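
A minimal sketch using Composer and Drush (module machine names as published on drupal.org):

composer require drupal/migrate_upgrade drupal/migrate_plus drupal/migrate_tools
drush en migrate_upgrade migrate_plus migrate_tools -y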

Ensure Drush is up to date (with the command “drush --version”).

Now it’s time to start the migration through Drush with the following command (substitute your own values as described below):

drush migrate-upgrade --legacy-db-url=mysql://user:password@server/db --legacy-root=http://example.com --configure-only

Replace the following values in the above command with your own:

  • ‘user’ is the username of the source database
  • ‘password’ is the source database user’s password
  • ‘server’ is the source database server
  • ‘db’ is the source database

Now check your migration status (with the command “drush migrate-status”).

Import the data with the command “drush migrate-import --all”.

After a successful migration, go to Structure -> Migrations to check the status of the migration.


Click the ‘List migrations’ button next to the migration group ‘import from drupal 7’ to view the entire set of migrated data.

 


 

After clicking it, all the upgraded data will be visible. Click the ‘Execute’ button and the data will be imported.


 

Once you click the ‘Execute’ button, you will be redirected to a page with the options mentioned below.


The Import button imports all previously unprocessed records from the source into destination Drupal objects.

With this, we come to the end of our Drupal migration process. If the above steps are followed carefully, a website can be easily migrated to the latest version.

Srijan has more than 35 Acquia certified Drupal experts with expertise in migrating projects to newer versions of Drupal. Contact us to seamlessly get started with the latest Drupal version.

 

Aug 08 2019
Aug 08

We have already covered two of many ways to migrate images into Drupal. One example allows you to set the image subfields manually. The other example uses a process plugin that accomplishes the same result using plugin configuration options. Although valid ways to migrate images, these approaches have an important limitation. The files and images are not removed from the system upon rollback. In the previous blog post, we talked further about this topic. Today, we are going to perform an image migration that will clear after itself when it is rolled back. Note that in Drupal images are a special case of files. Even though the example will migrate images, the same approach can be used to import any type of file. This migration will also serve as the basis for explaining migration dependencies in the next blog post.

Code snippet for file entity migration

File entity migrate destination

All the examples so far have been about creating nodes. The migrate API is a full ETL framework able to write to different destinations. In the case of Drupal, the target can be other content entities like files, users, taxonomy terms, comments, etc. Writing to content entities is straightforward. For example, to migrate into files, the destination section is configured like this:

destination:
  plugin: 'entity:file'

You use a plugin whose name is entity: followed by the machine name of your target entity. Other possible values that could be used are user, taxonomy_term, and comment. Remember that each migration definition file can only write to one destination.
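
For instance, a migration that writes to users instead would declare:

destination:
  plugin: 'entity:user'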

Source section definition

The source of a migration is independent of its destination. The following code snippet shows the source definition for the image migration example:

source:
  constants:
    SOURCE_DOMAIN: 'https://agaric.coop'
    DRUPAL_FILE_DIRECTORY: 'public://portrait/'
  plugin: embedded_data
  data_rows:
    - photo_id: 'P01'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
    - photo_id: 'P02'
      photo_url: ''
    - photo_id: 'P03'
      photo_url: 'sites/default/files/pictures/picture-94-1480090110.jpg'
    - photo_id: 'P04'
      photo_url: 'sites/default/files/2019-01/clayton-profile-medium.jpeg'
  ids:
    photo_id:
      type: string

Note that the source contains relative paths to the images. Eventually, we will need an absolute path to them. Therefore, the SOURCE_DOMAIN constant is created to assemble the absolute path in the process pipeline. Also, note that one of the rows contains an empty photo_url. No file can be created without a proper URL. In the process section we will accommodate for this. An alternative could be to filter out invalid data in a source clean up operation before executing the migration.

Another important thing to note is that the row identifier photo_id is of type string. You need to explicitly tell the system the name and type of the identifiers you want to use. The configuration for this varies slightly from one source plugin to another. For the embedded_data plugin, you do it using the ids configuration key. It is possible to have more than one source column as an identifier, for example, if the combination of two columns (e.g. name and date of birth) is required to uniquely identify each element (e.g. person) in the source, as sketched below.
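
A sketch using those hypothetical columns:

ids:
  name:
    type: string
  date_of_birth:
    type: string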

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD migration dependencies introduction whose machine name is ud_migrations_dependencies_intro. The migration to run is udm_dependencies_intro_image. Refer to this article to learn where the module should be placed.

Process section definition

The fields to map in the process section will depend on the target. For files and images, only one entity property is required: uri. It has to be set as a reference to the file using stream wrappers. In this example, the public stream (public://) is used to store the images in a location that is publicly accessible by any visitor to the site. If the file were already in the system and we knew its path, the whole process section for this migration could be reduced to two lines:

process:
  uri: source_column_file_uri

That is rarely the case though. Fortunately, there are many process plugins that allow you to transform the available data. When combined with constants and pseudofields, you can come up with creative solutions to produce the format expected by your destination.

Skipping invalid records

The source for this migration contains one record that lacks the URL to the photo. No image can be imported without a valid path. Let’s accommodate for this. In the same step, a pseudofield will be created to extract the name of the file out of its path.

psf_destination_filename:
  - plugin: callback
    callable: basename
    source: photo_url
  - plugin: skip_on_empty
    method: row
    message: 'Cannot import empty image filename.'

The psf_destination_filename pseudofield uses the callback plugin to derive the filename from the relative path to the image. This is accomplished using the basename PHP function. Also, taking advantage of plugin chaining, the system is instructed to skip processing the row if no filename could be obtained, for example, because an empty source value was provided. This is done by the skip_on_empty plugin, which is also configured to log a message indicating what happened. In this case, the message is hardcoded. You can make it dynamic to include the ID of the row that was skipped using other process plugins. This is left as an exercise for the curious reader. Feel free to share your answer in the comments below.

Tip: To read the messages log during any migration, execute the following Drush command: drush migrate:messages [migration-id].

Creating the destination URI

The next step is to create the location where the file is going to be saved in the system. For this, the psf_destination_full_path pseudofield is used to concatenate the value of a constant defined in the source with the filename obtained in the previous step. As explained before, order is important when using pseudofields as part of the migrate process pipeline. The following snippet shows how to do it:

psf_destination_full_path:
  - plugin: concat
    source:
      - constants/DRUPAL_FILE_DIRECTORY
      - '@psf_destination_filename'
  - plugin: urlencode

The end result of this operation would be something like public://portrait/micky-cropped.jpg. The URI specifies that the image should be stored inside a portrait subdirectory inside Drupal’s public file system. Copying files to specific subdirectories is not required, but it helps with file organization. Also, some hosting providers might impose limitations on the number of files per directory. Specifying subdirectories for your file migrations is a recommended practice.

Also note that after the URI is created, it gets encoded using the urlencode plugin. This will replace special characters with equivalent string literals. For example, é and ç will be converted to %C3%A9 and %C3%A7 respectively. Space characters will be changed to %20. The end result is an equivalent URI that can be used inside Drupal, as part of an email, or via another medium. Always encode URIs when working with Drupal migrations.

Creating the source URI

The next step is to assemble an absolute path for the source image. For this, you concatenate the domain stored in a source constant and the image’s relative path stored in a source column. The following snippet shows how to do it:

psf_source_image_path:
  - plugin: concat
    delimiter: '/'
    source:
      - constants/SOURCE_DOMAIN
      - photo_url
  - plugin: urlencode

The end result of this operation will be something like https://agaric.coop/sites/default/files/2018-12/micky-cropped.jpg. Note that the concat and urlencode plugins are used just like in the previous step. A subtle difference is that a delimiter is specified in the concatenation step. This is because, contrary to the DRUPAL_FILE_DIRECTORY constant, the SOURCE_DOMAIN constant does not end with a slash (/). This was done intentionally to highlight two things. First, it is important to understand your source data. Second, you can transform it as needed by using various process plugins.

Copying the image file to Drupal

Only two tasks remain to complete this image migration: download the image and assign the uri property of the file entity. Luckily, both steps can be accomplished at the same time using the file_copy plugin. The following snippet shows how to do it:

uri:
  plugin: file_copy
  source:
    - '@psf_source_image_path'
    - '@psf_destination_full_path'
  file_exists: 'rename'
  move: FALSE

The source configuration of the file_copy plugin expects an array of two values: the URI to copy the file from and the URI to copy the file to. Optionally, you can specify what happens if a file with the same name exists in the destination directory. In this case, we are instructing the system to rename the file to prevent name clashes. The way this is done is by appending the string _X to the filename, before the file extension. The X is a number starting at zero (0) that keeps incrementing until the filename is unique. The move flag is also optional. If set to TRUE, it tells the system that the file should be moved instead of copied. As you can guess, Drupal does not have access to the file system on the remote server. The configuration option is shown for completeness, but it does not have any effect in this example.

In addition to downloading the image and placing it inside Drupal’s file system, file_copy also returns the destination URI. That is why this plugin can be used to assign the uri destination property. And that’s it, you have successfully imported images into Drupal! Clever use of the process pipeline, isn’t it? ;-)
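
For reference, here is the complete process section assembled from the snippets above:

process:
  psf_destination_filename:
    - plugin: callback
      callable: basename
      source: photo_url
    - plugin: skip_on_empty
      method: row
      message: 'Cannot import empty image filename.'
  psf_destination_full_path:
    - plugin: concat
      source:
        - constants/DRUPAL_FILE_DIRECTORY
        - '@psf_destination_filename'
    - plugin: urlencode
  psf_source_image_path:
    - plugin: concat
      delimiter: '/'
      source:
        - constants/SOURCE_DOMAIN
        - photo_url
    - plugin: urlencode
  uri:
    plugin: file_copy
    source:
      - '@psf_source_image_path'
      - '@psf_destination_full_path'
    file_exists: 'rename'
    move: FALSE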

One important thing to note is an image’s alternative text, title, width, and height are not associated with the file entity. That information is actually stored in a field of type image. This will be illustrated in the next article. To reiterate, the same approach to migrate images can be used to migrate any file type.

Technical note: The file entity contains other properties you can write to. For a list of available options check the baseFieldDefinitions() method of the File class defining the entity. Note that more properties can be available up in the class hierarchy. Also, this entity does not have multiple bundles like the node entity does.

What did you learn in today’s blog post? Had you created file migrations before? If so, had you followed a different approach? Did you know that you can do complex data transformations using process plugins? Did you know you can skip the processing of a row if the required data is not available? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 07 2019
Aug 07

Today is the start of our membership campaign, which runs through August 31. This month, we've asked members to share why they are part of the Association. We hope you find some inspiration from your fellow community members and that you'll join and share:

Join membership

Share the campaign

Jiang Zhan's photo holding sign saying why they are a member
"I believe in Drupal's future and I love the opportunity that Drupal provides to people around the world." ~ Jiang Zhan

Leslie's photo holding sign saying why they are a member
"The Drupal Association helps to support the Drupal community.

The Drupal community is the reason most folks start using Drupal and never leave.
Attend a local Drupal camp, meetup, or contribution day to experience how welcoming + helpful the community is." ~ Leslie Glynn

Wil's photo with sign saying why they are a member
"I love developing for…
Working with…
Problem solving with…
Enhancing…
Drupal!" ~ Wil Roboly

Erica's photo with sign saying why they are a member
"I support the people, time, and commitments made by those who build and grow Drupal." ~ Erica Reed

Imre holding siqn with why they are a member
"I believe in socially sustainable ecosystems and Drupal is just that. It's very powerful to have local communities supporting Drupal, with the Drupal Association empowering global communication and a unified message. This must endure!" ~ Imre Gmelig Meijling

Jaideep's photo holding sign saying why they are a member
"I feel this will help others the way Drupal has helped me pursue my career and open up possibilities of learning and exploring.

This is another way of giving back to Drupal." Jaideep Singh Kandari

Help grow our membership by joining to support the global Drupal Association or by sharing this campaign. We're here because we love Drupal and the opportunities that come from being part of this open source project. Thanks for being here and for all that you contribute!

Aug 07 2019
Aug 07

We now know the list of candidates in the election for the next At-Large Drupal Association Board member. Voting is open until 16 August 2019, and everyone who has used their Drupal.org account in the last year is eligible to vote. You do not need to have a Drupal Association membership.

Cast your vote

Ryan speaking to Drupal Camp London about prioritizing giving back in Drupal. - photo Paul Johnson
Ryan speaking to Drupal Camp London about prioritizing giving back in Drupal. - photo Paul Johnson

To help everyone understand how an At-Large Director serves the Drupal community, we asked Ryan Szrama, currently serving as one of two community-elected board members, to share about his experience in the role.

I was elected to the board in 2017. While it would be my first time serving on a non-profit board, my personal and professional goals were closely aligned with the mission of the Drupal Association. I was happy to serve in whatever capacity would best advance that mission: uniting a global open source community to build, secure, and promote Drupal.

You'd been a contributor to the project for over 10 years by that point. How do you think your experience in the community shaped your time on the board?

Indeed! I'd been developing Ubercart since 2006 and Drupal Commerce since 2010. I had more Drupal development experience than most folks, helping me understand just how important the Association's leadership of DrupalCon and Drupal.org are to our development efforts. The year before joining the board, I also transitioned into a business leadership role, acquiring Commerce Guys in its split from Platform.sh. This exposed me to a whole new set of concerns related to financial management, marketing, sponsorship, etc. that Drupal company owners encounter in their business with the Association.

While the Association is led day-to-day by an Executive Director (Megan Sanicki when I joined, now Heather Rocker), the board is responsible for setting the strategic vision of the organization and managing its affairs through various committees. Most of our board meetings involve reports on things like finances, business performance, staff and recruitment, etc., but we often have strategic topics to discuss - especially at our two annual retreats. My experience as a long-time contributor turned business owner gave me confidence to speak to such topics as Drupal.org improvements, DrupalCon programming, community governance, and more.

What's the Association up to lately? Are there any key initiatives you expect an incoming board member may be able to join?

There's so much going on that it's hard to know where to focus my answer!

The Drupal Association has recently seen a lot of changes that impact the business side of the organization itself. We welcomed a new Executive Director and are beginning to see the impact of her leadership. We've revised our financial reporting, allowing us to more clearly monitor our financial health, and we've streamlined sales, immediately helping us sell sponsorships for DrupalCon Minneapolis more effectively. Additionally, we've launched and continue to develop the Drupal Steward product in partnership with the Security team. This is part of a very important initiative to diversify our revenue for the sake of the Drupal Association's long-term health.

From a community growth standpoint, we continue to follow up on various governance-related tasks and initiatives. We're working with Drupal leaders around the world to help launch and support more local Drupal associations. We continue to refine the DrupalCon program and planning process to attract a more diverse audience and create opportunities for a more diverse group of contributors and speakers. The retooling of Drupal.org to integrate GitLab for projects is moving forward, lowering the barrier to entry for new project contributors.

Any one of these activities would benefit from additional consideration, insight, and participation by our new board member.

What’s the best part about being on the board?

One of the best things for me is getting to see firsthand just how much the staff at the Drupal Association do to help the community, maintaining the infrastructure that enables our daily, international collaboration. I've long known folks on the development team like Neil Drumm, but I didn't realize just how much time he and other team members have devoted to empowering the entire community until I got a closer look.

I'm also continually inspired by the other board members. Our interactions have helped expand my understanding of Drupal's impact and potential. I've obviously been a fan of the project for many years now, but my time on the board has given me even greater motivation to keep contributing and ensure more people can for years to come.

Thank you, Ryan! Hopefully folks reading this are similarly inspired to keep contributing to Drupal. Our current election is one way everyone can play a part, so I'd encourage you to head over to the nominations page, read up on the motivations and interests of our nominees, ask questions while you can, and…

Cast your vote!

Aug 07 2019
Aug 07

Decoupled Drupal 8 and GatsbyJS webinar

How did the City of Sandy Springs, GA improve information system efficiency with a unified platform? Our webinar shows how we built this city on decoupled Drupal 8, GatsbyJS, and Netlify.

We explore how a “build-your-own” software approach gives Sandy Springs the formula for faster site speed and the ability to publish messages across multiple content channels — including new digital signage.

What You'll Learn

  • The City of Sandy Springs’ challenges and goals before adopting Drupal 8 

  • How Sandy Springs manages multi-channel publishing across the website, social media, and a network of digital signage devices.

  • Benefits gained from Drupal 8 and GatsbyJS, including: a fast, reliable site, hosting costs, and ease of development for their team.  

Webinar Recording and Slides 

We Built This City (On Drupal 8) from Mediacurrent

Speakers

Jason Green, Visual Communications Manager at City of Sandy Springs, and Mediacurrent Senior Director of Front End Development Zack Hawkins share an inside look at the project.

Aug 07 2019
Aug 07

You've probably heard recently that Drupal 9 is coming. Drupal 8.7 was released in May and Drupal 8.8 is planned for December 2019. At the same time, D9 is becoming a hotly discussed topic in the Drupal world. 

Drupal 9’s arrival perfectly fits into the Game of Thrones’ quote — “Brace yourself, winter is coming.” But do you need to brace for D9? It is promised to arrive easily and smoothly. Still, some important preparations are needed. Let’s review them in this post.

Drupal 9 is coming in June 2020

The year of the D9 release became known back in September 2018, when Drupal creator Dries Buytaert announced it at Drupal Europe in Darmstadt. Later on, in December, the exact date was announced: Drupal 9 is coming on June 3, 2020!

What will happen to Drupal 7 and Drupal 8? Both D7 and D8 will reach their end-of-life in November 2021. This means the end of official support and no more updates in the functional and security areas. Some companies will come up with extended commercial support, but it’s far better to keep up with the times and upgrade. All the development ideas and innovations will be focused on “the great nine.”

The Drupal creator explained the planned release and end-of-life dates. In a nutshell, D8’s major dependency is the Symfony 3 framework that is reaching end-of-life in November 2021. Drupal 9 will ship with Symfony 4/5. So the Drupal team has to end-of-life Drupal 8 at that time, but they want to give website owners and developers enough time to prepare for Drupal 9 — hence the June 2020 release decision. 

According to this timing, you need to be on Drupal 9 by November 2021. In the meantime, you should prepare.

Preparing for the coming of Drupal 9

1. How to prepare for Drupal 9 if you are on Drupal 8

Hearing that Drupal 9 is coming, many D8 website owners may say “Hey, we just had an epic upgrade from Drupal 7 to Drupal 8, and here we go again!”.

Keep calm — everything is on the right track. Your upgrade from Drupal 8 to Drupal 9 should be instantaneous. D9 will look like the latest version of D8, but without deprecated code and with third-party dependencies updated (Symfony 4/5, Twig 2, and so on).

Dries Buytaert's quote: we are building Drupal 9 in Drupal 8

There are two rules of thumb regarding the Drupal 9 preparations:

1) Using the latest versions of everything

To have a quick upgrade from Drupal 8 to Drupal 9, you need to stick to the newest versions of the core, modules, and themes. According to Gábor Hojtsy, Drupal Initiative Coordinator, you are gradually becoming a D9 user by keeping your D8 website up-to-date.

Gabor Hojtsy's quote: you become a Drupal 9 user by keeping up to date with Drupal 8.

“The great eight” has adopted a continuous innovation model, which means a new minor version every half a year. Our Drupal team is ready to help you with regular and smooth updates.

2) Getting rid of deprecated code

It is also necessary to keep your website clean from the deprecated code. Deprecated code means APIs and functions that have newer alternatives and are marked as deprecated, or obsolete. 

Any module that does not use deprecated code will just continue working in Drupal 9, Dries said.

Dries Buytaert's quote: without deprecated code websites will be ready for Drupal 9

How can you find deprecated code? Here are a few tools that check everything, including custom modules (see the example after this list):

  • The command-line tool Drupal Check, which scans your code for deprecations (see the example after this list)
  • The Upgrade Status contributed module, which offers a graphical interface to check your modules and themes and get a summary
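For example, a quick check of a custom module with Drupal Check might look roughly like this (a sketch: the module path is hypothetical, and the tool is assumed to be installed globally via Composer):

# Install the checker once.
$ composer global require mglaman/drupal-check

# Scan a custom module for deprecated API uses.
$ drupal-check --deprecations modules/custom/my_module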

Many deprecations are very easy to replace. You can always rely on our development team to perform a thorough check and clean up any deprecations.

2. How to prepare for Drupal 9 if you are on Drupal 7

The best way to prepare for Drupal 9 is to upgrade to Drupal 8 now. Even if this might sound like a marketing mantra to you, it has very practical grounds.

There are plenty of reasons to upgrade and no reason to skip Drupal 8. These are words from Dries Buytaert's presentation.

Dries Buytaert's presentation: there are many reasons to upgrade to Drupal 8 now

You will enjoy the wealth of Drupal 8’s business benefits in all the time before 2021. And when Drupal 9 arrives, you will just click and move ahead to it!

Gabor Hojtsy's quote: skipping Drupal 8 does not bring benefits

Don’t worry, despite the immense difference between D7 and D8, the 7-to-8 upgrades are getting easier every day. Developers have studied the D7-to-D8 upgrade path well. In addition, very helpful migration modules have recently reached stability in the D8 core.

Your upgrade to Drupal 8 will depend on your website’s custom functionality and overall complexity. In any case, our Drupal developers will take care of making it smooth. 

So make up your mind and upgrade now — welcome to the innovative path that will lead you further to “the great 9.”

Plan for Drupal 9 with us!

Yes, Drupal 9 is coming. No matter which version of Drupal you are using now, we can help you make the right Drupal 9 preparation plan — and fulfill it, of course. Just contact our Drupal experts!

Aug 07 2019
Aug 07

Website Refresh: The Only Thing Missing is a Purring Sound

The Animal Humane Society (AHS), in Minneapolis, Minnesota, is the leading animal welfare organization in the Upper Midwest, helping 25,000 dogs, cats and critters in need find loving homes each year, while providing a vast array of services to the community, from low-cost spay and neuter services to dog training to rescuing animals from neglectful and abusive situations.

TEN7 has been working with AHS since 2008, making piecemeal updates to their website and finding creative solutions for desired changes with a limited budget. In 2016, the Animal Humane Society wanted to reimagine the animalhumanesociety.org website as not just an adoption source, but a resource, an authority, and an advocate for all things related to companion animals and the community that loves them. 

One of the main goals was to include even more information to support pet owners and animal lovers, including more photos, videos and shareable content. Other goals were to integrate the separate Kindest Cut website (a low-cost spay and neuter clinic) into the main site, and improve functionality of the Lost and Found bulletin boards.

“We wanted the user experience on the site to match the user experience when people come to the shelter. That it would be colorful and emotional and warm and inviting, and that it would give people that same wonderful feeling that they have when they walk in the door at the [shelter] and see the puppies and kittens.”—Paul Sorensen, Director of Brand and Communications, Animal Humane Society

To give AHS the increased functionality they desired (like the enhanced image and video capabilities), we embarked on building a complex Drupal 8 site from scratch. It was more than just a one-and-done update, however. Over a nine-year period, the site had evolved from a manually-updated custom CMS to a new Drupal 5 installation, and later Drupal 6. Additional functionality and one-off customizations to the codebase had created a great deal of technical debt, making the site difficult to maintain and support. 

Drupal 8 functionality allowed us to scrap some custom code, while in other cases we were able to replace custom code with contributed modules developed by the Drupal community. 

Integration with PetPoint (the animal information database) under Drupal 6 was challenging, requiring custom code from beginning to end. We were able to use Drupal 8’s built-in functionality to talk to PetPoint in a more standards-based way, which meant far less custom code.

As we were making these updates, we also followed best practices and implemented coding standards for the new site, which reduced the amount of technical debt created.

We launched the site in the summer of 2017, and although there were some hiccups, results were immediate: people LOVED the bold photos, video and shareable content. As a result of the site update, more Minnesotans are:

  • Visiting the website and staying longer. Traffic is up 8.5% from the previous year, and the average visit is over four minutes, up 8.6% from the previous year
  • Viewing animal profiles, with nearly 4 million views, leading to 10,751 animal adoptions
  • Sharing and responding to AHS content on social media, with double and triple-digit traffic increases on Twitter, Instagram, LinkedIn and Reddit
  • Donating online, with donations driven by site content up 18.2% from the previous year

We continue to support and collaborate with the Animal Humane Society, adding more functionality we couldn’t squeeze in during the big update, like setting up visitor accounts with the ability to “favorite” animals. And we still have to figure out how to make the site purr.

Aug 07 2019
Aug 07

We have presented several examples as part of this migration blog post series. They started very simple and have been increasing in complexity. Until now, we have been rather optimistic. Get the sample code, install any module dependency, enable the module that defines the migration, and execute it assuming everything works on the first try. But Drupal migrations often involve a bit of trial and error. At the very least, it is an iterative process. Today we are going to talk about what happens after import and rollback operations, how to recover from a failed migration, and some tips for writing definition files.

List of Drush commands used in Drupal migration workflows

Importing and rolling back migrations

When working on a migration project, it is common to write many migration definition files. Even if you were to have only one, it is very likely that your destination will require many field mappings. Running an import operation to get the data into Drupal is the first step. With so many moving parts, it is easy not to get the expected results on the first try. When that happens, you can run a rollback operation. This instructs the system to revert anything that was introduced when the migration was initially imported. After rolling back, you can make changes to the migration definition file and rebuild Drupal’s cache for the system to pick up your changes. Finally, you can do another import operation. Repeat this process until you get the results you expect. The following code snippet shows a basic Drupal migration workflow:

# 1) Run the migration.
$ drush migrate:import udm_subfields

# 2) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_subfields

# 3) Change the migration definition file.

# 4) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 5) Run the migration again
$ drush migrate:import udm_subfields

The example above assumes you are using Drush to run the migration commands. Specifically, the commands provided by Migrate Run or Migrate Tools. You pick one or the other, but not both, as the commands provided by the two modules are the same. If you had both enabled, they would conflict with each other and fail.
Another thing to note is that the example uses Drush 9. There were major refactorings between versions 8 and 9 which included changes to the name of the commands. Finally, udm_subfields is the id of the migration to run. You can find the full code in this article.
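To illustrate the renaming, here is the same import in both Drush versions (a sketch; the dashed names come from Migrate Tools’ older Drush 8 command set):

# Drush 8 style (dashed command names).
$ drush migrate-import udm_subfields

# Drush 9 style (colon-separated command names).
$ drush migrate:import udm_subfields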

Tip: You can use Drush command aliases to write shorter commands. Type drush [command-name] --help for a list of the available aliases.
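As a sketch, some aliases commonly provided by Migrate Tools look like this (confirm with --help, since aliases can vary between versions):

$ drush mim udm_subfields   # migrate:import
$ drush mr udm_subfields    # migrate:rollback
$ drush ms udm_subfields    # migrate:status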

Technical note: To pick up changes to the definition file, you need to rebuild Drupal’s caches. This is the procedure to follow when creating the YAML files using Migrate API core features and placing them under the migrations directory. It is also possible to define migrations as configuration entities using the Migrate Plus module. In those cases, the YAML files follow a different naming convention and are placed under the config/install directory. For picking up changes, in this case, you need to sync the YAML definition using configuration management workflows. This will be covered in a future entry.

Stopping and resetting migrations

Sometimes, you do not get the expected results due to an oversight in setting a value. On other occasions, fatal PHP errors can occur when running the migration. The Migrate API might not be able to recover from such errors. For example, using a non-existent PHP function with the callback plugin. Give it a try by modifying the example in this article. When these errors happen, the migration is left in a state where no import or rollback operations can be performed.
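For illustration, a process configuration like the following would trigger that kind of fatal error, because the callable does not exist (a deliberately broken sketch; the column name is hypothetical):

process:
  title:
    plugin: callback
    callable: a_function_that_does_not_exist
    source: src_title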

You can check the state of any migration by running the drush migrate:status command. Ideally, you want them in Idle state. When something fails during import or rollback, you would get the Importing or Rolling back states. To get the migration back to Idle, you stop the migration and reset its status. The following snippet shows how to do it:

# 1) Run the migration.
$ drush migrate:import udm_process_intro

# 2) Some non-recoverable error occurs. Check the status of the migration.
$ drush migrate:status udm_process_intro

# 3) Stop the migration.
$ drush migrate:stop udm_process_intro

# 4) Reset the status to idle.
$ drush migrate:reset-status udm_process_intro

# 5) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 6) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_process_intro

# 7) Change the migration definition file.

# 8) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 9) Run the migration again.
$ drush migrate:import udm_process_intro

Tip: The errors thrown by the Migrate API might not provide enough information to determine what went wrong. An excellent way to familiarize yourself with the possible errors is by intentionally breaking working migrations. In the example repository of this series, there are many migrations you can modify. Try anything that comes to mind: not leaving a space after a colon (:) in a key-value assignment; not using proper indentation; using wrong subfield names; using invalid values in property assignments; etc. You might be surprised by how the Migrate API deals with such errors. Also, note that many other Drupal APIs are involved. For example, you might get a YAML file parse error, or an Entity API save error. When you have seen an error before, it is usually faster to identify the cause and fix it in the future.

What happens when you rollback a Drupal migration?

In an ideal scenario, when a migration is rolled back, it cleans up after itself. That means it removes any entity that was created during the import operation: nodes, taxonomy terms, files, etc. Unfortunately, that is not always the case. It is very important to understand this when planning and executing migrations. For example, you might not want to leave taxonomy terms or files that are no longer in use. Whether any dependent entity is removed or not has to do with how plugins or entities work.

For example, when using the file_import or image_import plugins provided by Migrate File, the created files and images are not removed from the system upon rollback. When using the entity_generate plugin from Migrate Plus, the created entity also remains in the system after a rollback operation.

In the next blog post, we are going to start talking about migration dependencies. What happens with dependent migrations (e.g., files and paragraphs) when the migration for the host entity (e.g., node) is rolled back? In this case, the Migrate API will perform an entity delete operation on the node. When this happens, referenced files are kept in the system, but paragraphs are automatically deleted. For the curious, this behavior for paragraphs is actually determined by its module dependency: Entity Reference Revisions. We will talk more about paragraphs migrations in future blog posts.

The moral of the story is that the behavior of the migration system might be affected by other Drupal APIs. And in the case of rollback operations, make sure to read the documentation or test manually to find out when migrations clean up after themselves and when they do not.

Note: The focus of this section was content entity migrations. The general idea can be applied to configuration entities or any custom target of the ETL process.

Re-import or update migrations

We just mentioned that the Migrate API issues an entity delete action when rolling back a migration. This has another important side effect. Entity IDs (nid, uid, tid, fid, etc.) are going to change every time you roll back and import again. Depending on auto-generated IDs is generally not a good idea. But keep it in mind in case your workflow might be affected. For example, if you are running migrations in a content staging environment, references to the migrated entities can break if their IDs change. Also, if you were to manually update the migrated entities to clean up edge cases, those changes would be lost if you roll back and import again. Finally, keep in mind that test data might remain in the system, as described in the previous section, which could find its way to production environments.

An alternative to rolling back a migration is to not execute this operation at all. Instead, you run an import operation again using the update flag. This tells the system that in addition to migrating unprocessed items from the source, you also want to update items that were previously imported using their current values. To do this, the Migrate API relies on source identifiers and map tables. You might want to consider this option when your source changes over time, when you have a large number of records to import, or when you want to execute the same migration many times on a schedule.
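As a sketch, re-importing with the update flag looks like this (udm_users stands in for your migration id):

# Migrate new items and update previously imported ones.
$ drush migrate:import udm_users --update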

Note: On import operations, the Migrate API issues an entity save action.

Tips for writing Drupal migrations

When working on migration projects, you might end up with many migration definition files. They can set dependencies on each other. Each file might contain a significant number of field mappings. There are many things you can do to make Drupal migrations more straightforward. For example, practicing with different migration scenarios and studying working examples. As a reference to help you in the process of migrating into Drupal, consider these tips:

  • Start from an existing migration. Look for an example online that does something close to what you need and modify it to your requirements.
  • Pay close attention to the syntax of the YAML file. An extraneous space or wrong indentation level can break the whole migration.
  • Read the documentation to know which source, process, and destination plugins are available. One might exist already that does exactly what you need.
  • Make sure to read the documentation for the specific plugins you are using. Many times a plugin offers optional configurations. Understand the tools at your disposal and find creative ways to combine them.
  • Look for contributed modules that might offer more plugins or upgrade paths from previous versions of Drupal. The Migrate ecosystem is vibrant, and lots of people are contributing to it.
  • When writing the migration pipeline, map one field at a time. Problems are easier to isolate if there is only one thing that could break at a time.
  • When mapping a field, work on one subfield at a time if possible. Some field types like images and addresses offer many subfields. Again, try to isolate errors by introducing individual changes each time.
  • Commit to your code repository any and every change that produces the right results. That way, you can go back in time and recover a partially working migration.
  • Learn about debugging migrations. We will talk about this topic in a future blog post.
  • Seek help from the community. Migrate maintainers and enthusiasts are very active and responsive in the #migrate channel of the Drupal Slack.
  • If you feel stuck, take a break from the computer and come back to it later. Resting can do wonders in finding solutions to hard problems.

What did you learn in today’s blog post? Did you know what happens upon importing and rolling back a migration? Did you know that in some cases, data might remain in the system even after rollback operations? Do you have a use case for running migrations with the update flag? Do you have any other advice on writing migrations? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 07 2019
Aug 07

Our team is always excited to catch up with fellow Drupal community members (and each other) in person during DrupalCon. Here’s what we have on deck for this year’s event:

Visit us at booth #709

Drop by and say hi in the exhibit hall! We’ll be at booth number 709, giving away some new swag that is very special to us. Have a lot to talk about? Schedule a meeting with us.

Palantiri Sessions

Keeping That New Car Smell: Tips for Publishing Accessible Content by Alex Brandt and Nelson Harris

Content editors play a huge role in maintaining web accessibility standards as they publish new content over time. Alex and Nelson will go over a handful of tips to make sure your content is accessible for your audience.


Fostering Community Health and Demystifying the CWG by George DeMet and friends

The Drupal Community Working Group is tasked with fostering community health. This Q&A format session hopes to bring to light our charter, our processes, our impact and how we can improve.


The Challenge of Emotional Labor in Open Source Communities by Ken Rickard

Emotional labor is, in one sense, the invisible thread that ties all our work together. Emotional labor supports and enables the creation and maintenance of our products. It is a critical community resource, yet undervalued and often dismissed. In this session, we'll take a look at a few reasons why that may be the case and discuss some ways in which open source communities are starting to recognize the value of emotional labor.

  • Date: Thursday, April 11
  • Time: 2:30pm
  • Location: Exhibit Stage | Level 4


The Remote Work Toolkit: Tricks for Keeping Healthy and Happy by Kristen Mayer and Luke Wertz

Moving from working in a physical office to a remote office can be a big change, yet have a lot of benefits. Kristen and Luke will talk about transitioning from working in an office environment to working remotely - how to embrace the good things about remote work, but also ways in which you might need to change your behavior to mitigate the challenges and stay mentally healthy.

Join us for Trivia Night 

Thursday night we will be sponsoring one of our favorite parts of DrupalCon, Trivia Night. Brush up on your Drupal facts, grab some friends, and don't forget to bring your badge! Flying solo to DrupalCon? We would love to have you on our team!

  • Date: Thursday, April 11
  • Time: 8pm - 11:45pm
  • Location: Armory at Seattle Center | 305 Harrison Street

We'll see you all next week!

Aug 06 2019
Aug 06

We were honored to have two members of our team present sessions at Drupal North this past June in the beautiful and vibrant city of Montreal, Canada. Drupal North is an annual, free, three-day conference focusing on Drupal-related topics and the community that drives the Drupal project forward.

We were pleased to make new connections while also reconnecting with friends and peers. As always, it was also a privilege to be given the opportunity to share our expertise.

Crispin Bailey
Director of UX + Design

[embedded content]

The American Foundation for the Blind (AFB), established in 1921, is an amazing organization that advocates on behalf of visually impaired persons, providing community, resources and opportunities.

This talk covered the process we went through to overhaul the site's messaging, content architecture, and visual design, culminating in a fully-responsive HTML prototype and style guide that was used to implement a brand new, fully-accessible, Drupal 8 website.

Andrew Mallis
CEO

[embedded content]

 
Our client, the de Young Museum, had an ambitious design project scope with tight delivery timelines that required us to be nimble and innovative. This talk covered the team’s journey, how we repurposed GatherContent as a CMS, and how we automated deployments via its workflow states using CircleCI and Netlify.

It also looked at the component architectures that further empowered our client to own their digital stories and author them to the point where insights.famsf.org/gauguin received a Webby Honoree award.

Andrew Mallis
CEO

[embedded content]

Google Analytics provides a richness of data, giving insight into users’ needs and behaviors once they’re on your site. However, the amount of data it presents can be disorienting, leaving users unable to focus on key metrics, or misrepresenting the implications of critical indicators in relation to their goals.

In this talk, Andrew Mallis presented best practices to access the clearest, and most useful analytics data necessary to better tell your story.
 
 
We hope you find these talks helpful and insightful, and we look forward to seeing you at future events.

Aug 06 2019
Aug 06

Author’s Note: No rabbits were harmed in the writing of this post.

This summer, I had the opportunity to attend Decoupled Days 2019 in New York City. The organizers did a fabulous job of putting together an extremely insightful, yet approachable, program for the third year of the conference to date.

In case you haven’t heard of the event before, Decoupled Days is somewhat of a boutique conference that focuses solely on decoupled CMS architectures, which combine a CMS like Drupal with the latest front-end web apps, native mobile and desktop apps, or even IoT devices. Given the contemporary popularity of universal Javascript (being used to develop both the front and back ends of apps), this conference also demands a strong interest in the latest JavaScript technologies and frameworks.

If you weren’t able to attend this year, and have the opportunity in the future, I would highly recommend the event to anyone interested in decoupled architectures, whether you’re a beginner or an expert in the area. With that in mind, here are a few of the sessions I was able to attend this year that might give you a sense of what to expect.

Christopher Bloom (Phase2) gave an excellent performance at his session, giving all of us a crash course in TypeScript. He provided a very helpful live demo of how TypeScript can make it much easier and safer to write apps in frameworks like Vue.js, by allowing for real-time error-checking within your IDE (as opposed to at runtime in your browser) and providing the ability to leverage ES6 Class syntax and decorators together seamlessly.

Jamie Hollern (Amazee Labs) showed off how it’s possible to have a streamlined integration between your Drupal site and a fully-fledged UI pattern library like Storybook. By using the Component Libraries, GraphQL, and GraphQL Twig contributed modules in concert with a Storybook-based style guide, Jamie gave a fabulous demonstration of how you can finally stick to the classic DRY (Don’t Repeat Yourself) principle when it comes to a Drupal design system. Because Storybook supports front-end components built using Twig markup, and allows for populating those components using mock data in a GraphQL format, we can use those same Twig templates and GraphQL queries within Drupal with almost no refactoring whatsoever. If you’re interested in learning more about building sites with Drupal, GraphQL, and Twig, Amazee Labs has also published an “Amazing Apps” repo that acts as a sample GraphQL/Twig sample project.

For developers looking to step up their local development capabilities, Decoupled Days featured two sessions on invaluable tools for doing decoupled work on your local machine.

Kevin Bridges (Drud) showed how simple and straightforward it can be to use the DDEV command-line tool to quickly spin up a Drupal 8 instance (using the Umami demo site profile, for example), enable the JSON:API contributed module in order to expose Drupal’s data, and then install (using gatsby-cli) and use a local Gatsby.js site to ingest and display that Drupal data.
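A rough sketch of that workflow might look like the following; the exact flags and project names are assumptions (and Drush is assumed to be available in the project), so check the DDEV and Gatsby documentation for specifics:

# Spin up a Drupal 8 project with DDEV.
$ ddev config --project-type=drupal8
$ ddev start

# Expose Drupal's data via the JSON:API module.
$ ddev exec drush en jsonapi -y

# Scaffold a Gatsby.js site to consume that data.
$ npm install -g gatsby-cli
$ gatsby new my-gatsby-site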

Matt Biilman (Netlify) also demonstrated the newly launched Netlify Dev command-line tool for developers who use Netlify to host their static site projects. With Netlify Dev, you could spawn an entire local environment for the Gatsby.js site you just installed using DDEV, which will be able to locally run all routing rules, edge logic, and cloud functions. Netlify Dev will even allow you to stream that local Gatsby.js site to a live URL (i.e., on Netlify) with hot-reloading, so that your remote teammates can view your site as you build it.
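In practice, that boils down to a couple of commands (a sketch, assuming the Netlify CLI is installed and your project is linked to a Netlify site):

$ npm install -g netlify-cli

# Run the site locally with routing rules and functions applied.
$ netlify dev

# Stream the local session to a shareable live URL.
$ netlify dev --live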

As usual, Jesús Manuel Olivas (WeKnow) is always pushing the Drupal community further and further, this time by demonstrating his own in-house, Git-based CMS called Blaze, which is a platform that could potentially replace the need for Drupal altogether. Blaze promises to provide the lightest-weight back-end possible for your already light-weight, static-site front-end. In lieu of a relatively heavyweight back-end like Drupal, Blaze would provide a familiar WYSIWYG editor and easy content modeling with a few clicks. The biggest wins from using any Git-based CMS, though, are the extremely lowered costs and lightning-fast change deployments (with immediate updates to your code repo and CI/CD pipeline).

Aug 06 2019
Aug 06

Drupal is certainly not the only open-source CMS game in town, but when consulting with clients about the best solution for the full range of their needs, it tends to be my go-to.

Here’s why: 

  • Architecture
  • Scalability
  • Database Views
  • Flexibility
  • Security
  • Modules
  • Search
  • Migration  

Architecture

Drupal 8 is built on modern programming practices, and of course, the same will be true for the June 2020 release of Drupal 9.

A Development – Test – Production environment is the default assumption with a Drupal 8 site. Too often, other CMS sites are managed as a single instance, which is a single point of failure. 

Also, Drupal comes with a built-in automated testing framework. Drupal 8 supports unit, integration, and system/functional testing using the PHPUnit framework. Drupal is built to be inherently extensible through configuration, so every data type can be templated without touching code to achieve fully customized, structured data collection.

Scalability

Drupal has proven to be scalable at the most extreme traffic levels. Weather.com, as just one example, is a Drupal site. Many of the Federal cabinet-level agencies using open source have built their web infrastructures on Drupal. Drupal has built-in functionality, such as a robust caching API and JS/CSS minification/aggregation to optimize page load speed.

Database Views

Drupal Views, which is in Drupal 8 core, is a powerful tool that allows you to quickly construct database views, with AJAX filtering and sorting included. This allows you to quickly construct and publish lists of any data in your Drupal site, without needing a developer to do it for you.

Flexibility

There are several components of flexibility. 

Drupal 8 was built as an API-first CMS, explicitly supporting the idea that the display layer for content stored in a Drupal CMS may not be Drupal. The API first design of Drupal 8 also means that it is easier to integrate Drupal with third-party applications, as the API framework is already in place.
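For instance, with the core JSON:API module enabled, any consumer can fetch content over plain HTTP (the domain below is a placeholder):

$ curl -H 'Accept: application/vnd.api+json' \
    https://example.com/jsonapi/node/article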

Customers vary widely in the ways in which they currently consume content. We assume that new ways will emerge for consuming content in the future, and even though we may not be in a position to predict right now what that will look like, Drupal is well poised to support what comes next.

Security

The only totally secure CMS is the one installed on a server that is sitting at the bottom of the Mariana Trench, with no connectivity to anything! However, Drupal has been tested in the most rigorous and security-conscious environments across government and industry. With a dedicated security team managing not just Drupal core but also many popular modules, and the openness inherent in open source, Drupal is a solid, secure platform for any website.

Modules / Extensions

Drupal modules are created and contributed to the community because they solve a problem. If you have the same or similar problem to solve you may be a simple module install away from solving that problem. Also, all Drupal modules are managed and accessible through a single repository at Drupal.org, providing a critical layer of vetting and security.

Search

Current versions of Drupal come with powerful, unparalleled out-of-the-box search functionality. Also, Solr integration is plug-and-play with Drupal, allowing you to extend the capabilities of search to index documents, or across multiple domains, or to build faceted search results to improve the user experience.

Migration

Face it: migration is not fun with any CMS. However, the Drupal 8 Migrate API (and the same will be true for Drupal 9) is highly capable of importing complex data from other systems. Simpler CMS platforms tend to offer simple migration for out-of-the-box content types (posts and pages), but not so much for complex data or custom content types.

Summary

Settling on the right CMS platform is often not an obvious choice. Weighing the relative benefits of every option can take time and calls for expert consultation. In instances where complexity increases, and there’s a need to integrate the CMS with outside data sources, I’ll admit to a Drupal bias. This is based on my experience of Drupal as a CMS framework that was designed specifically for the challenges of a mobile-first, API driven, integrated digital environment.

Looking for further exploration into the relative merits of your open source CMS options? Contact us today for an insightful, informative and fully transparent conversation.

Aug 06 2019
Aug 06

Long articles with many sections often discourage users from reading. Visitors start reading and usually leave before reaching the halfway point of such articles.

To avoid this type of user experience, I recommend grouping each section of your article into a collapsible tab. Readers will then be able to digest the text in smaller pieces.

The Collapse Text Drupal 8 module adds a filter plugin to your text formats. You will then be able to create collapsible text tabs using a tag system similar to HTML.

Read on to learn how to use this module!

Step #1. Install the Required Module

  • Open the terminal application of your computer
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require drupal/collapse_text

Type composer installation command

  • Click Extend
  • Scroll down until you find the Collapse Text module and enable it
  • Click Install

Click Install

Step #2. Create an Editor Role

  • Click People > Roles > Add role

Add Collapsible Blocks to Text-Heavy Nodes in Drupal 8

  • Enter the Role Name Editor and click Save
  • Click the dropdown beside Editor and select Edit permissions

Click the dropdown beside Editor

  • Check these permissions:
    • Comment
      • Edit own comments
      • Post comments
      • View comments
    • Contact
      • Use the site-wide contact form
    • Filter
      • Use the Full HTML text format
    • Node
      • Article: Create new content
      • Article: Delete own content
      • Article: Delete revisions
      • Article: Edit own content
      • Article: Revert revisions
      • Article: View revisions
      • Access the Content overview page
      • View published content
      • View own unpublished content
    • System
      • Use the administration pages and help
      • View the administration theme
    • Taxonomy
      • Tags: Create terms
      • Access the taxonomy vocabulary overview page
    • Toolbar
      • Use the toolbar
    • User
      • Cancel own user account
      • View user information
  • Click Save permissions

Click Save permissions

Step #3. Create a User with the New Editor Role

  • Click People > Add user
  • Create a user with the Editor role
  • Click Create new account

Click Create new account

Step #4. Add the Plugin to the Text Format

  • Click Configuration > Text formats and editors

Click Configuration > Text formats and editors

  • Click the Configure button for the Full HTML format

Click the Configure button

  • Enable the Collapsible text blocks filter and check that it comes after the other two filters specified in the description

Enable the Collapsible text blocks filter

The Full HTML format has these two filters disabled by default, so we are good to go.

  • Click Save configuration

Click Save configuration

Step #5. Create Content

  • Log out and log back in as the user with the Editor role

Log out and log back in

  • Click Content > Add content
  • Write a proper title for the node

The Tabs Structure

Each tab is declared between a pair of tags.

To show an open tab (not collapsed at all), you put the text between the [collapse] and [/collapse] tags.

To show a collapsed tab, you put the text between the [collapsed] and [/collapsed] tags.

The opening [collapse] and [collapsed] tags support two “attribute values”:

  • title
  • class

If you don’t specify a title attribute, the module will take the first title available between the [collapse]/[collapsed] tags.

If you don’t specify a title attribute, the module will take the first title available

It is possible to nest collapsible tabs.

It is possible to nest collapsible tabs
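Putting it together, a node body using these tags might look like this (the titles and class are example values):

[collapse title="Overview" class="intro-tab"]
This tab starts open.
[collapsed title="Fine print"]
This nested tab starts closed.
[/collapsed]
[/collapse]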

  • Finish editing the node form and click Save

Finish editing the node form and click Save

The image is floated; that is a Bartik-specific style. Let’s apply some CSS.

Step #6. Basic Styling

Hint: I’m going to edit the original core theme files because I’m working on a sandbox environment. That is not recommended on a production server. As a matter of fact, it is not a good practice at all. If you want to improve your Drupal theming skills, take a look at this OSTraining class.

  • Open the file core/themes/bartik/css/components/field.css
  • Add this code to the end of the file:
@media all and (min-width: 560px) {
  .node .field--type-image {
    float: none;
  }
}
  • Open the file core/themes/bartik/css/components/node.css
  • Add this code to the end of the file:
/* Collapse Text Styles */
.open,
.shut {
  font-family: sans-serif;
}

.open {
  background: black;
  color: white;
}

.shut {
  background: #444;
  color: #CCC;
}

summary {
  background-color: red;
  color: transparent;
}

.nested1 {
  background-color: rgba(224, 110, 108, 0.25);
}
  • Save both files
  • Click Configuration > Performance > Clear all caches
  • Refresh the site

Click Configuration > Performance > Clear all caches

Refresh the site

I hope you liked this tutorial. Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Aug 06 2019
Aug 06

Did you know that the term “One-Stop-Shop” is one of the most clichéd marketing taglines to use, according to HubSpot? Thankfully, I came across that article before I sat down to write this one. So, I’m NOT going to say that a Drupal distribution is a “one-stop-shop” for a quick and easy way to launch your website. Let’s put it this way instead – if you want to build a Drupal website and are eager to see it go live quickly, while also saving time on maintenance, Drupal distributions are meant for you.

What is a Drupal distribution?

A Drupal distribution is an all-inclusive package to get your website up and running quickly. This package consists of the Drupal Core (basic features), installation profiles, themes (for customized designs), libraries (consists of assets like CSS or Javascript) and modules specific to an industry. For example, if you run a publishing company, a distribution like Thunder can help you speed up your development process. Here you can find modules like Paragraphs, Media Entity, Entity Browser and features like Thunder admin theme, scheduled publishing and much more – all in one place. 
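Many distributions ship as Composer project templates, so getting started can be a single command (a sketch; Thunder’s template name is an assumption, so check the distribution’s documentation):

# Scaffold a new site based on the Thunder distribution.
$ composer create-project thunder/thunder-project my_site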

Why should you use a Drupal 8 distribution?

Let me give you a few reasons:

  • You don’t have to scramble your way through thousands of Drupal modules only to find a few that you really need.  
  • Configuring Drupal core is easier too, as most of it comes preconfigured.
  • The features and modules included in a Drupal distribution are time tested, optimized and proven for quality.
  • Maintenance of a Drupal distribution is simpler because updates for all modules and features can be performed in one shot (see the example after this list)!  
  • Since you don’t have to reinvent the wheel every time, you save precious resource time. Which also means you save money! 
  • Now that you have saved some time, you can spend more time on customizing and personalizing these components to tailor-fit your business needs.
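For instance, the one-shot update mentioned above might look like this (Thunder is just an example package, and flags may vary with your Composer version):

# Update the distribution and everything it bundles, then run updates.
$ composer update drupal/thunder --with-all-dependencies
$ drush updatedb -y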

Top 15 Drupal Distributions (alphabetically sorted)

1. CiviCRM Starter Kit

The CiviCRM Starter Kit brings together the power of Drupal and the open-source CRM tool – CiviCRM. The popular CRM is used by more than 8000 organizations to centralize constituent communications. Along with core Drupal and CiviCRM, the distribution also packs in CiviCRM related modules like CiviCRM Cron, Webform CiviCRM, CiviCRM Clear All Caches, etc.

2. Commerce Kickstart

If you are looking to quickly get your e-commerce store up and running on Drupal Commerce framework, this one’s for you. Commerce Kickstart is a Drupal distribution made for both Drupal 7 and Drupal 8 and is maintained by Centarro (previously Commerce Guys). The Commerce Kickstart 2.x version comes loaded with beautiful themes, catalog, promotion engines, variety of payment tools, utility tools, shipping and fulfilment tools, analytics and reporting tools, marketing tools, search configuration, custom back office interface and much more.

drupal distributions

Source: https://www.drupal.org/project/commerce_kickstart

3. Conference Organizing Distribution

Creating a website for events and conferences gets easier with this Drupal distribution. Conference Organizing Distribution (COD) was made for Drupal 7 but is being actively ported to Drupal 8. With COD, you can:

  • Create/manage tickets for event registrations
  • Create announcements for paper submissions
  • Moderate session selections
  • Provide an option for attendees to vote for their favourite sessions
  • Schedule sessions on any day and place
  • Easily manage sponsorships
  • Manage events easily with a powerful event management dashboard
  • Keep track of multiple events and sessions
  • Sell tickets with Drupal commerce

4. Contenta

This API-first Drupal distribution provides you with a framework that is API ready. It reduces the complexity and pain of using or trying decoupled/headless Drupal. Contenta also comes pre-installed with code and demo content along with front-end application examples. Even if you are new to Drupal, Contenta offers simple and quick ways to get the Drupal CMS part ready so you can focus on the front-end frameworks you intend to use. If you’re looking for a complete solution for a headless Drupal project, ContentaJS is your best bet. ContentaJS integrates Contenta CMS with Node.js for a powerful, high-performing digital experience.

5. Drupal Government Distributions (federal, regional, local)

The aGov Drupal 8 distribution was developed to meet the guidelines of the Australian government. It allows government bodies to follow standards like WCAG 2.0 AA, the Australian Government Web Guide, AGLS metadata and the Digital Service Standard. However, the developers of aGov, PreviousNext, no longer develop or support this distribution, as they are now focused on the GovCMS Drupal distribution. GovCMS was built on the foundation of aGov to build more secure, compliant and adaptable government websites. 
The deGov Drupal 8 distribution was built for German government websites and uses Acquia Lightning to offer more valuable features and functionalities. Some features common to all the Drupal government distributions:

  • Meeting all government standards
  • Workbench moderation
  • Citizen engagement portals
  • Responsive design
  • Example content
  • Intranet/Extranet

6. Acquia Lightning

True to its name, Acquia Lightning is a lightweight Drupal 8 distribution that you can use to develop and deploy a website at lightning speed (up to 30% less development time!). Developed by Acquia, Lightning aims to provide full flexibility and a great authoring experience to editorial teams and content authors. Built on Drupal 8, it offers powerful features like page layouts, drag and drop of assets using Panels, rich text, media, slideshows, Google maps, content scheduling and much more. You can also streamline the workflow process of publishing, reviewing, approving and scheduling content.

7. Open Atrium

Open Atrium is a Drupal distribution built specifically for organizations to be able to create a collaborative intranet solution for social collaboration and knowledge management. It offers features like a drag-and-drop layout, events management (Events), document management (Files), issue tracking, granular access controls, media management, a worktracker (to monitor tasks and maintain transparency), and much more. It also offers responsive layouts and themes.

8. Open Academy

Built on the Panopoly base distribution, this Drupal distribution is tailor-made for higher education websites and can be further extended and customized. It is an easy-to-use tool that does not require users to be technical. You can have a great website without any customizations too! Open Academy consists of a Drupal 7 installation profile and features meant for managing courses, departments, faculty, presentations, news, events, publications and more. The themes provided are optimized and mobile ready.

9. Open Social

Open Social is a Drupal 8 distribution that allows organizations to create intranets, online communities and other social portals easily. It is being used by hundreds of organizations including NGOs and government bodies to facilitate communication and connection with their volunteers, employees, members and customers. It also has features like multi-lingual support, private file system, social login, Geo-location maps, etc.

drupal distribution social module

Source : https://www.drupal.org/project/social

10. Opigno LMS

Opigno is a Learning Management System built on Drupal. It is an easily scalable solution built not just for universities but also for organizations looking to create e-learning solutions. It allows you to manage training paths that are organized into courses, activities and modules. It also provides features like adaptive learning paths, management of skill acquisition, quizzes, blended learning (online modules + in-house sessions + virtual classrooms), award certificates, forums, live meetings and more.

opigno lms drupal distributions

Source: https://www.drupal.org/project/opigno_lms

11. Panopoly

This is a base Drupal distribution – which basically means it also acts like a foundation or a base framework for many other distros to be built upon. Panopoly Distribution is powered by the magic of the Panels module and its features like In-place editor, Panelizer, Fieldable Panel Panes, etc. The Panopoly package consists of contributed modules and libraries. It offers cross-browser and responsive layouts, drag and drop page customizations, a powerful easy-to-use Admin interface, etc. It can also be extended through many Panopoly apps. 

12. Presto!

Want a Drupal 8 starter kit that can meet all your content management needs and get you up and running, presto?! Count on Presto! What’s better, you can start using Presto right out of the box! It is power-packed with some great content features like intelligent content editing, a Promo bar (inline alerts for news/announcements), Divider (adding space), Carousel (interactive images), Blocks, etc. It also comes shipped with a responsive theme based on the Bootstrap framework that can be further customized to add more layouts. It also lets you easily integrate with Drupal Commerce to make selling on your website easier. With Presto, you can reduce development time by 20%!

13. Reservoir

Like Contenta, Reservoir is an API-first Drupal distribution for decoupling Drupal. With this tool, you can build content repositories that are ready to be consumed by front-end applications. It is packed with all the web service APIs necessary to create decoupled websites. Reservoir was developed with the objective of making Drupal more accessible to developers, providing best practices for building decoupled applications, and providing a starting point for developers (with little or no Drupal experience) to build a content repository. Reservoir uses JSON API (a specification for APIs in JSON) to interact with the back-end content. It also ships with API documentation, OpenAPI format export (compatible with a plethora of tools) and a huge set of libraries, SDKs and references.

14. Thunder

This Drupal 8 distribution is designed exclusively for professional publishing. Thunder was originally designed for and by Hubert Burda Media. The Drupal distribution is loaded with features meant for the publishing sector like the Paragraph module, drag and drop of content, Media Entity, Entity browser, Content lock, Video embed field, Facebook Instant articles, Google AMP, LiveBlog, Nexx.tv video player and much more. All of this along with Drupal core features and responsive themes. 

Drupal distribution module thunder


Source: https://www.drupal.org/project/thunder

15. Varbase

Are you lost in a mountain of Drupal modules and wondering which one to pick? Looking for a package that can jumpstart your web development process right away? Varbase is your go-to Drupal distribution then! Varbase provides you with all the necessities and essential modules, features and configurations to speed up your time to market. 

Aug 06 2019
Aug 06

At the start of every month, we gather all the Drupal blog posts from the previous month that we’ve enjoyed the most. Here’s an overview of our favorite posts from July related to Drupal - enjoy the read!

5 Reasons to Upgrade Your Site to Drupal 8, then 9

Our July selection begins with a blog post by Third & Grove titled “5 Reasons to Upgrade Your Site to Drupal 8, then 9”. Since the upgrade from Drupal 8 will be a smooth and simple one, the best thing to do is to make the move to D8 now and start benefiting from its superior capabilities, as the author of this blog post, Curtis Ogle, also emphasizes. 

In this post, Curtis thus presents his top 5 features of Drupal 8 that make a very strong case for the upgrade. These are: configuration management right in the core; RESTful APIs; Twig templates (his personal favorite); all the contrib modules from D7; and, lastly, the fact that D8 is future-proof, with all future upgrade paths considerably smoother than with previous versions.

Read more

The Top Four Benefits of Building a Site on Drupal 8 

Still very much in line with the previous post, this next one was written by Bounteous’ Chris Greatens and outlines the main benefits of choosing to build a website in Drupal 8. With an abundance of different CMS solutions, the ones that hold the obvious advantage are those which offer both excellent authoring and administrative features, as well as development capabilities. 

According to Chris, there are 4 main features that make Drupal stand out among other CMS: flexibility, scalability, security and, exactly as in the previously mentioned blog post, the ability of future-proofing. All of Drupal’s additional capabilities only add to this, making it a viable platform for various use cases.

Read more

Prepare for Drupal 9: stop using drupal_set_message()!

Next up, we have a blog post by Gábor Hojtsy reporting on the most recent state of deprecated code in preparation for Drupal 9, which contains two important findings.

The first one is that as much as 29% of all analyzed instances of deprecated API uses can be attributed to drupal_set_message() - so, basically, no longer using this API means you’ll already be 29% on your way towards Drupal 9 readiness.

Gábor’s second finding is that 76% of deprecated API use (47% other API uses beside drupal_set_message()’s 29%) can in fact already be resolved now, 10 months before the release of Drupal 9. This gives project maintainers and contributors plenty of time to work towards D9 compatibility. 

Read more

5 Reasons to Attend and Sponsor Open Source Events

A really great post from July that had us recall the awesome Drupal community is “5 Reasons to Attend and Sponsor Open Source Events”, written by Promet Source’s Chris O’Donnell. He answers the question “Is it worth to keep sponsoring DrupalCamps and other events?” with a hard “Yes” and five (well, six, actually) supporting reasons.

These reasons are: it’s good for business; you (as a company) owe it to the community; you’re able to find new talented developers at these events; you learn a lot; there are various fun activities; and, the sixth bonus reason, you meet many amazing Drupalists and forge new friendships. This last reason alone is actually enough to justify going to at least one or two ‘Camps a year.

Read more

Drupal + Javascript: Exploring the Possibilities

Hook42’s Emanuel London’s introduction to the exploration of the possibilities of Drupal in combination with JavaScript is another post from July that we enjoyed. Excited as he was about the plethora of emerging JavaScript frameworks and the flexibility they offer, Emanuel was a bit disappointed by the fact that the Drupal community hasn’t kept up-to-date with all these technologies, and thus decided to remedy this in a series of blog posts. 

Future posts in the series will explore some of the tools for native mobile app development, e.g. React Native, as well as some Drupal tools, modules and distributions, such as Contenta CMS. By the end of the series, we’ll hopefully be better prepared for Drupal-powered mobile app development and maybe even compete with WordPress in that area.

Read more

Eight reasons why Drupal should be every government’s CMS

It is a well-known fact in the community that Drupal is the go-to choice for government websites, thanks in large part to its security and multisite capabilities. Anne Stefanyk of Kanopi Studios further underlines this with six additional reasons why governments should choose Drupal as their preferred CMS.

Besides security and multisite/multilingual support, Drupal’s advantage also lies in: its mobility, accessibility, easy content management, ability to handle large amounts of traffic and data, flexibility, and affordability.

These are all aspects crucial to the experience of a government website. As such, Drupal truly is best suited for this role, as is also evidenced by the over 150 countries relying on Drupal to power their websites.

Read more

Getting Started with Layout Builder in Drupal 8

Nearing the end of July’s list, we have a post by Ivan Zugec of WebWash, essentially a tutorial on using Drupal’s recently stable Layout Builder. It contains all the basics you need to get started with this powerful new functionality. 

The first part of the post covers using the Layout Builder to customize content types, with Ivan working on the Article content type as an example. It details how to create a default layout for articles, as well as how to override it for a single article.

The second part then deals with using the module as a page builder, customizing the layout of an individual piece of content, from creating a custom block to embedding images. The post concludes with links to some additional modules and a FAQ section. 
 
Read more

An Open Letter to the Drupal Community

We round off July’s list with J.D. Flynn’s open letter to the Drupal community. This is a very interesting post which deals with a recent positive addition to drupal.org and how it can be exploited to “game the system” - namely, issue credits. 

The problem with the issue credit system is that it can be used to amass hundreds of credits with fixes for simple novice issues, which leaves fewer of these novice issues to fledgling developers trying to get their foot in the door, as well as gives unjustified credibility to the person or company in question and demoralizes other developers. 

J.D. presents four possible solutions to this: weighted credits; mandatory difficulty tagging of issues; credit limits; and a redistribution of credits. He finishes with a call to action to new developers to seek out help and to seasoned developers to offer mentorship to newcomers.

Read more

We hope you enjoyed our selection of Drupal blog posts from July and perhaps even found some thoughts that inspired ideas of your own. Don’t forget to visit our blog from time to time so you don’t miss any of our upcoming posts! 
 

Aug 06 2019
Aug 06

So far we have learned how to write basic Drupal migrations and use process plugins to transform data to meet the format expected by the destination. In the previous entry we learned one of many approaches to migrating images. In today’s example, we will change it a bit to introduce two new migration concepts: constants and pseudofields. Both can be used as data placeholders in the migration timeline. Along with other process plugins, they allow you to build dynamic values that can be used as part of the migrate process pipeline.

Syntax for constants and pseudofields in the Drupal process migration pipeline

Setting and using constants

In the Migrate API, a constant is an arbitrary value that can be used later in the process pipeline. Constants are set as direct children of the source section. You write a constants key whose value is a list of name-value pairs. Even though they are defined in the source section, they are independent of the particular source plugin in use. The following code snippet shows a generalization for setting and using constants:

source:
  constants:
    MY_STRING: 'http://understanddrupal.com'
    MY_INTEGER: 31
    MY_DECIMAL: 3.1415927
    MY_ARRAY:
      - 'dinarcon'
      - 'dinartecc'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  process_destination_1: constants/MY_INTEGER
  process_destination_2:
    plugin: concat
    source: constants/MY_ARRAY
    delimiter: ' '

You can set as many constants as you need. Although not required by the API, it is a common convention to write constant names in all uppercase, using underscores (_) to separate words. The value can be set to anything you need to use later. In the example above, there are strings, integers, decimals, and arrays. To use a constant in the process section you type its name, just like any other column provided by the source plugin. Note that to use a constant you need to name the full hierarchy under the source section. That is, the word constants and the name itself, separated by a slash (/) symbol. Constants can be used to copy their value directly to the destination or as part of any process plugin configuration.

Technical note: The word constants for storing the values in the source section is not special. You can use any word you want as long as it does not collide with another configuration key of your particular source plugin. A reason to use a different name is that your source actually contains a column named constants. In that case you could use defaults or something else. The one restriction is that whatever key you use, you have to use the same key in the process section to refer to any constant. For example:

source:
  defaults:
    MY_VALUE: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config: source_config_value
process:
  process_destination: defaults/MY_VALUE

Setting and using pseudofields

Similar to constants, pseudofields store arbitrary values for use later in the process pipeline. There are some key differences. Pseudofields are set in the process section. The name is arbitrary as long as it does not conflict with a property name or field name of the destination. The value can be set to a verbatim copy from the source (a column or a constant) or it can be built using process plugins for data transformations. The following code snippet shows a generalization for setting and using pseudofields:

source:
  constants:
    MY_BASE_URL: 'http://understanddrupal.com'
  plugin: source_plugin_name
  source_plugin_config_1: source_config_value_1
  source_plugin_config_2: source_config_value_2
process:
  title: source_column_title
  my_pseudofield_1:
    plugin: concat
    source:
      - constants/MY_BASE_URL
      - source_column_relative_url
    delimiter: '/'
  my_pseudofield_2:
    plugin: urlencode
    source: '@my_pseudofield_1'
  field_link/uri: '@my_pseudofield_2'
  field_link/title: '@title'

In the above example, my_pseudofield_1 is set to the result of a concat process transformation that joins a constant and a column from the source section. The resulting value is later used as part of a urlencode process transformation. Note that to use the value from my_pseudofield_1 you have to enclose it in quotes (') and prepend an at sign (@) to the name. The new value obtained from the URL encode operation is stored in my_pseudofield_2. This last pseudofield is used to set the value of the URI subfield for field_link. The example could be simplified by using a single pseudofield and chaining process plugins; it is presented this way to demonstrate that a pseudofield can be used in direct assignments or as part of process plugin configuration values.
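For reference, here is a minimal sketch of that simplified version, reusing the same hypothetical names from the snippet above; the two plugins are chained under a single pseudofield so the intermediate one is no longer needed:

process:
  title: source_column_title
  my_pseudofield_1:
    -
      plugin: concat
      source:
        - constants/MY_BASE_URL
        - source_column_relative_url
      delimiter: '/'
    -
      plugin: urlencode
  field_link/uri: '@my_pseudofield_1'
  field_link/title: '@title'

Chaining works because the output of concat becomes the input of urlencode, a mechanism covered in the entry about process plugins.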

Technical note: If the name of the pseudofield can be arbitrary, how can you prevent name clashes with destination property names and field names? You might have to look at the source code for the entity and the configuration of the bundle. In the case of a node migration, look at the baseFieldDefinitions() method of the Node class for a list of property names. Be mindful of class inheritance and method overriding. For a list of fields and their machine names, look at the “Manage fields” section of the content type you are migrating into. The Field API prefixes any field created via the administration interface with the string field_. This reduces the likelihood of name clashes. Other than these two restrictions, any name can be used. The Migrate API will eventually perform an entity save operation which discards the pseudofields.

Understanding Drupal Migrate API process pipeline

The migrate process pipeline is a mechanism by which the value of any destination property, field, or pseudofield that has been set can be used by anything defined later in the process section. The fact that using a pseudofield requires enclosing its name in quotes and prepending an at sign is actually a requirement of the process pipeline. Let’s see some examples using a node migration:

  • To use the title property of the node entity, you would write '@title'
  • To use the field_body field of the Basic page content type, you would write '@field_body'
  • To use the my_temp_value pseudofield, you would write '@my_temp_value'

In the process pipeline, these values can be used just like constants and columns from the source. The only restriction is that they need to be set before being used. For those familiar with the rewrite results feature of Views, it follows the same idea: you have access to everything defined previously. Anytime you enclose a name in quotes and prepend an at sign, you are telling the Migrate API to look for that element in the process section instead of the source section.
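To illustrate, the following hypothetical fragment of a node migration shows the pipeline in action; the source column names are made up for this sketch:

process:
  my_temp_value:
    plugin: concat
    source:
      - source_column_first
      - source_column_last
    delimiter: ' '
  title: '@my_temp_value'
  field_body/value: '@my_temp_value'
  field_body/format:
    plugin: default_value
    default_value: plain_text

Because my_temp_value is set first, both title and field_body/value can reference it later with '@my_temp_value'. The pseudofield itself is discarded when the entity is saved.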

Migrating images using the image_import plugin

Let’s practice the concepts of constants, pseudofields, and the migrate process pipeline by modifying the example of the previous entry. The Migrate Files module provides another process plugin named image_import that allows you to directly set all the subfield values in the plugin configuration itself.

As in previous examples, we will create a new module and write a migration definition file to perform the migration. It is assumed that Drupal was installed using the standard installation profile. The code snippets will be compact to focus on particular elements of the migration. The full code is available at https://github.com/dinarcon/ud_migrations The module name is UD Migration constants and pseudofields and its machine name is ud_migrations_constants_pseudofields. The id of the example migration is udm_constants_pseudofields. Refer to this article for instructions on how to enable the module and run the migration. Make sure to download and enable the Migrate Files module. Otherwise, you will get an error like: “In DiscoveryTrait.php line 53: The "file_import" plugin does not exist. Valid plugin IDs for Drupal\migrate\Plugin\MigratePluginManager are:...”. Let’s see part of the source definition:

source:
  constants:
    BASE_URL: 'https://agaric.coop'
    PHOTO_DESCRIPTION_PREFIX: 'Photo of'
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      name: 'Michele Metts'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name and details about the image. Note that this time, the photo_url does not provide an absolute URL. Instead, it is a relative path from the domain hosting the images. In this example, the domain is https://agaric.coop so that value is stored in the BASE_URL constant, which is later used to assemble a valid absolute URL to the image. Also, there is no photo description, but one can be created by concatenating some strings. The PHOTO_DESCRIPTION_PREFIX constant stores the prefix to add to the name to create a photo description. Now, let’s see the process definition:

process:
  title: name
  psf_image_url:
    plugin: concat
    source:
      - constants/BASE_URL
      - photo_url
    delimiter: '/'
  psf_image_description:
    plugin: concat
    source:
      - constants/PHOTO_DESCRIPTION_PREFIX
      - name
    delimiter: ' '
  field_image:
    plugin: image_import
    source: '@psf_image_url'
    reuse: TRUE
    alt: '@psf_image_description'
    title: '@title'
    width: photo_width
    height: photo_height

The title node property is set directly to the value of the name column from the source. Then, two pseudofields are created. psf_image_url stores a valid absolute URL to the image, built from the BASE_URL constant and the photo_url column from the source. psf_image_description uses the PHOTO_DESCRIPTION_PREFIX constant and the name column from the source to store a description for the image.

For the field_image field, the image_import plugin is used. This time, the subfields are not set manually; instead, they are assigned using plugin configuration keys. The absence of the id_only configuration allows for this. The URL to the image is set in the source key, which uses the psf_image_url pseudofield. The alt key sets the alternative text attribute for the image; in this case, the psf_image_description pseudofield is used. The title key sets the text of the subfield with the same name; here it is assigned the value of the title node property, which was set at the beginning of the process pipeline. Remember that pseudofields are not the only values available in the pipeline. Finally, the width and height configurations use the columns from the source to set the values of the corresponding subfields.

What did you learn in today’s blog post? Did you know you can define constants in your source as data placeholders for use in the process section? Were you aware that pseudofields can be created in the process section to store intermediary data for process definitions that come next? Have you ever wondered what is the migration process pipeline and how it works? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 05 2019
Aug 05

When deploying changes to a Drupal environment, you should be running database updates (e.g. drush updb, or through update.php) first, and only then import new configuration. The reason for this is that update hooks may want to update configuration, which will fail if that configuration is already structured in the new format (you can't do without updates either; update hooks don't just deal with configuration, but may also need to change the database schema and do associated data migrations). So, there really is no discussion; updates first, config import second. But sometimes, you need some configuration to be available before executing an update hook.

For example, you may want to configure a pathauto pattern, and then generate all aliases for the affected content. Or, you need to do some content restructuring, for which you need to add a new field, and then migrate data from an old field into the new field (bonus tip: you should be using post update hooks for such changes). So, that's a catch-22, right?

Well, no. The answer is actually pretty simple, at least in principle: make sure you import that particular configuration you need within your update hook. For importing some configuration from your configuration sync directory, you can add this function to your module's .install file, along with a use statement for the FileStorage class:

use Drupal\Core\Config\FileStorage;

/**
 * Synchronizes a configuration entity from the sync directory.
 *
 * Don't use this to create a new field; use
 * my_custom_module_create_field_from_sync() instead.
 *
 * @param string $id
 *   The config ID.
 *
 * @see https://blt.readthedocs.io/en/9.x/readme/configuration-management/#using-update-hooks-to-importing-individual-config-files
 */
function my_custom_module_read_config_from_sync($id) {
  // Statically cache storage objects.
  static $fileStorage, $activeStorage;

  if (empty($fileStorage)) {
    global $config_directories;
    $fileStorage = new FileStorage($config_directories[CONFIG_SYNC_DIRECTORY]);
  }
  if (empty($activeStorage)) {
    $activeStorage = \Drupal::service('config.storage');
  }

  $config_data = $fileStorage->read($id);
  $activeStorage->write($id, $config_data);
}

Use it like this:

my_custom_module_read_config_from_sync('pathauto.pattern.landing_page_url_alias');

As you might have seen in the docblock above that function, it is not actually suitable for creating fields. This is because just importing the configuration will not create the field storage in the database. When you need to create a field, use the following code:

/**
 * Creates a field from configuration in the sync directory.
 *
 * For fields, the method used in my_custom_module_read_config_from_sync()
 * does not work properly.
 *
 * @param string $entityTypeId
 *   The ID of the entity type the field should be created for.
 * @param string[] $bundles
 *   An array of IDs of the bundles the field should be added to.
 * @param string $field
 *   The name of the field to add.
 *
 * @throws \Drupal\Core\Entity\EntityStorageException
 */
function my_custom_module_create_field_from_sync($entityTypeId, array $bundles, $field) {
  // Statically cache storage objects.
  static $fileStorage;

  // Create the file storage to read from.
  if (empty($fileStorage)) {
    global $config_directories;
    $fileStorage = new FileStorage($config_directories[CONFIG_SYNC_DIRECTORY]);
  }

  /** @var \Drupal\Core\Entity\EntityStorageInterface $fieldStorage */
  $fieldStorage = \Drupal::service('entity_type.manager')
    ->getStorage('field_storage_config');

  // If the field storage does not yet exist, create it first.
  if (empty($fieldStorage->load("$entityTypeId.$field"))) {
    $fieldStorage
      ->create($fileStorage->read("field.storage.$entityTypeId.$field"))
      ->save();
  }

  /** @var \Drupal\Core\Entity\EntityStorageInterface $fieldConfigStorage */
  $fieldConfigStorage = \Drupal::service('entity_type.manager')
    ->getStorage('field_config');

  // Create the field instances.
  foreach ($bundles as $bundleId) {
    $config = $fieldConfigStorage->load("$entityTypeId.$bundleId.$field");
    if (empty($config)) {
      $fieldConfigStorage->create($fileStorage->read("field.field.$entityTypeId.$bundleId.$field"))
        ->save();
    }
  }
}

And, once again, a usage example:

my_custom_module_create_field_from_sync('node', ['basic_page', 'article'], 'field_category');

The function will check whether the field already exists, so it is safe to run again, or to run it for a field that already exists on another bundle of the same entity type.

Note that when using post update hooks, it is important to create a single hook implementation that applies all required actions for what should be considered a single change, because there are no guarantees about the order in which post update hooks run. For example, a single hook implementation would:

  1. Create a new field.
  2. Migrate data from the old field to the new field.
  3. Remove the old field.

Hopefully, this helps someone get out of that catch-22. Whatever you do, don't run your config import before your database updates.

Aug 05 2019
Aug 05

I've been putting off learning how to build sites in Drupal 8, and migrating my existing Drupal 7 sites over to Drupal 8. Why? Drupal 8 uses a lot of new tools. I want to learn how to set up a Drupal 8 site in the "right" (optimal) way so that I don't incur technical debt for myself later on. That means I have a lot of tools to learn. That takes time, which I don't have a lot of. So I've procrastinated.

Now, two deadlines are creeping up on me. The first is the release of Drupal 9, planned for June 3, 2020. At that point, Drupal 7 will still be supported for a while longer (November 2021 is the current plan), but it's time to move.

The second is the requirement for Strong Customer Authentication (SCA) on European payments from the middle of this coming September. One of my D7 sites runs Ubercart and uses Ubercart Stripe, which won't support SCA. The only alternative to upgrading is to disable Stripe as a payment method. Actually, AndraeRay has taken maintainer status of Ubercart Stripe and nearly has an SCA-compliant 7.x-3.x branch ready. Many thanks!

So it's time to start. But how do I set up my development workflow? Having done a lot of reading, here's what I propose. I'd really value input from the Drupal community here; I'm bound to have missed something important, and I've one big question I can't answer (see below). Please pile in at the Comments.

My Proposed Approach

  • Set up a development server, separate from the server that live sites will be deployed on. This will have plenty of RAM, the same version of PHP as the deployment environment, and Composer installed.
  • I plan to use a private git repository to store each site's code and configuration.
  • On the development server, use Composer to install drupal-composer/drupal-project.
  • Use .gitignore to exclude all core and contributed modules and themes from the git repository.
  • I'll also move environment-specific settings (such as database credentials) to a settings.local.php file, which will also be excluded using .gitignore.
  • Add everything else to git.
  • As I work on the site, I'll add, commit, and push changes to the git repository.
  • I'll also set up a sync directory using $config_directories['sync'], so that I can use Drupal Console to export the site's configuration and include that in git as well.
  • On the production server, I can periodically pull everything from the git repository. When I do so, I'd use composer install (never composer update). This removes the need for the production server to have a large amount of RAM, since generating composer.lock is done on the development server. I'd then use Drupal Console to import the synchronised configuration, and clear all caches.
  • For theming, on Drupal 7 I built my own responsive themes by subtheming Zen. Zen for Drupal 8 is in Alpha only; the last (alpha) release was 3 years ago and the last commit was 20 months ago. So it looks like the best approach is to subtheme Classy directly.
  • Previously, I've found Sass and Compass to be a great help with building a custom theme, so I'd plan to use them again. I'd include one simple .scss file that is excluded from git to override the base hue of the theme; this allows my development site to have a different colour scheme to the production one, to help me always remember which site I'm on.

All of the above can be modified to allow a staging copy of the site as well, if that becomes helpful.

Questions

I have one big question.

This workflow allows me to make configuration changes on the development site, and then push them up to the deployment site when ready.

It seems to me that, whilst configuration needs pushing from development to production, content needs pushing the other way. That is to say, I'll regularly want to make sure my development site contains the latest content from the production site. New and altered pages need pushing down, as do nodes that use the new content types I've developed in development.

Configuration: Development => Staging => Production
Content: Production => Staging => Development

How do you do this?

Specifically, I don't think I want to dump the entire database from production and replace the development database contents with the SQL dump. Doing that would also override configuration changes on the development site, and that's not the way this whole workflow is designed.

So is there a generally adopted method to use, to dump the site content from production and import it to development, but without making any changes to the configuration that is stored in the development database?

Over to you

So over to you, reader. I've learnt a lot, but I have a lot to learn still.

Can you help answer that question?

Have I missed anything really important? Would anything in the approach above be unwise, and if so what and why? Am I right that Classy is the best base theme, or is another preferable? (It would need to be actively developed, likely to stick around as Drupal moves through 8.8, 8.9, 9.x and so on, and not slow the site down significantly or make it harder to maintain by adding lots of preprocessing or complex div structures / css libraries). Am I using git and composer in the right way? Any other advice?

Thanks in advance!

Aug 05 2019
Aug 05

Creating a faceted search in Drupal implies some configuration steps. This can be overwhelming to people new to Drupal.

The MixItUp Views Drupal 8 module allows you to create a simplified version of a faceted search based on the taxonomies of the content type. It also provides a nice animation that makes the user experience even better.

This could be, for example, a good alternative for small commerce sites or other types of online catalogs in their initial state.

Screenshot of an e-commerce catalogue with a filtered search of products

Let’s start!

Step #1. Install the required modules

  • Open the terminal application of your computer
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require drupal/mixitup_views

Type in the installation command

  • Go to the vendor folder of your Drupal installation
  • Locate the /patrickkunka directory
  • Move its whole content to the /libraries directory (you will have to create a libraries directory if you do not have one already):

Move the directory

  • Leave the empty /patrickkunka directory inside the /vendor folder
  • Make sure that mixitup.js and mixitup.min.js files are inside /libraries/mixitup/dist

Make sure the JS files are inside the proper folder

  • Click Extend
  • Scroll down until you find the MixItUp Views module and check it
  • Click Install

Click Install

Step #2. Create the Taxonomy Terms

For this example, it is necessary to create 3 vocabularies and their respective terms according to this structure.

  • Color
    • Black
    • Red
    • Blue
    • Grey
    • Lightblue
    • Orange
    • White
    • Yellow
  • Brand
    • Brand A
    • Brand B
  • Type
    • Long Sleeve
    • Short Sleeve

  • Click Structure > Taxonomy > Add vocabulary

Click Structure > Taxonomy > Add vocabulary

Once you have created the first vocabulary, you have to add terms to it.

  • Click Add term

Click Add term

  • Enter each one of the Color terms and click Save each time

Click Save each time

  • Once you have added all Color terms, repeat the process for the other 2 vocabularies and their terms

Repeat the process for the other 2 vocabularies and their terms

Repeat the process for the other 2 vocabularies and their terms

Step #3. Add the Taxonomy Fields to the Article Content Type

  • Click Structure > Content types > Article > Add field

Click Structure > Content types > Article > Add field

  • Select the "Taxonomy" term from the dropdown and give this field a proper name. It makes sense to use the same vocabulary name.
  • Click Save and continue

Click Save and continue

  • Set the Allowed number of values to Unlimited (only for the Color taxonomy term)
  • Click Save field settings

Click Save field settings

Note: This is because a shirt can have more than one color, but it cannot have long and short sleeves at the same time.

  • For each field, select the corresponding vocabulary under Reference type and click Save settings

click Save settings

  • Repeat this process for the other 2 taxonomy fields

This is just an example. For a real product node, you would need other fields like a price field and a link to the cart.

For a real product node, you would need other fields like a price field and a link to the cart

  • Click the Manage form display tab
  • Locate the Color field and change the field widget to Check boxes/radio buttons
  • Click Save

Click Save

  • Click the Manage display tab
  • Rearrange the taxonomy fields and hide their labels
  • Disable the Tags field by dragging it to the Disabled section
  • Click Save

Click Save

Step #4. Create Content

  • Click Content > Add content > Article
  • Enter proper text and images. Make sure you select a value for each taxonomy field. This makes sense since we want to filter the results of the view according to the taxonomy terms (you could have made these fields required to avoid empty fields here)
  • Click Save

Click Save

  • Repeat the process for each one of the nodes

Repeat the process for each one of the nodes

Step #5. Create the View

  • Click Structure > Views > Add view

Click Structure > Views > Add view

  • Show Content of Type Article
  • Check Create a page
  • Under Page Display Settings, select MixItUp of fields (you can use teasers as well)
  • Click Save and edit

Click Save and edit

  • Click the Add button in the Field section of the Views UI

Click the Add button in the Field section of the Views UI

  • In the search box type: "appears in: article" (without the double quotes)
  • Check the fields Image, Color, Brand, and Type
  • Click Add and configure fields

Click Add and configure fields

  • For each one of the taxonomy terms click Apply
  • Change the dimensions of the image field and link it to the Content
  • Click Apply

Click Apply

  • In the Format section, click the MixItUp Settings

In the Format section, click the MixItup Settings

This is where you can configure settings, such as the animation of the elements of the view, the aggregation type, and other additional sorting and filtering options. You should take a look at each one of these configurations and read their description.

  • Leave the defaults and click Apply
  • Save the view

Go to the page and test the filtered search

Go to the page and test the filtered search

Go to the page and test the filtered search

Congratulations! You have just added another practical module to your site-builder knowledge base. Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back in his homeland, Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Aug 05 2019
Aug 05

I catch up with Brendan Blaine, a developer for the Drupal Association, to find out what it takes to run events.drupal.org, why the conferences run so smoothly, and always remember to use a coaster.

It's really exciting to be a part of the team that puts on DrupalCon.

Aug 05 2019
Aug 05

Using the cloud is about leveraging its agility, among other benefits. For a Drupal-powered website, the right service provider can impact how well the website performs and can affect business revenue.

When a robust server infrastructure such as AWS backs an advanced CMS like Drupal, it accelerates the website’s performance, security, and availability.

But why AWS, and what benefits does it offer over others? Let’s take a deep dive to understand how it can be the best solution for hosting your Drupal websites.

Points To Consider For Hosting Drupal Websites

The following are the points to keep in mind while considering providers for hosting your pure or headless Drupal website.

Better Server Infrastructure: A Drupal-specialised cloud hosting provider should offer a server infrastructure that is specifically optimized for running Drupal websites the way they were designed to run.

Better Speed: It should help optimise the Drupal website to run faster and should support caching tools such as Memcached, Varnish, etc.

Better Support: The provider should offer better hosting support with the right knowledge of a Drupal website.

Better Security and Compatibility: The hosting provider should be able to provide security notifications, server-wide security patches, and even pre-emptive server upgrades to handle nuances in upcoming Drupal versions.

Why not a traditional server method?

There are two ways of hosting a Drupal website via traditional server setups:

  • a shared hosting server, where multiple websites run on the same server
  • or a dedicated Virtual Private Server (VPS) per website.

However, there are disadvantages to this approach, which are:

  1. With a lot of non-redundant, single-instance services running on the same server, a crash in any component can take the entire site offline.
  2. Being non-scalable, this server does not scale up or down automatically; it requires manual intervention to change the hardware configuration, and an unexpected traffic boost may cause the server to go down.
  3. The setup constantly runs at full power, irrespective of usage, wasting resources and money.

Hosting Drupal on AWS

Amazon Web Services (AWS) is a pioneer of the cloud hosting industry, providing high-tech server infrastructure that has proven to be highly secure and reliable.
With serverless computing, developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. It enables you to build modern applications with increased agility and lower total cost of ownership and time-to-market.

Serverless is the fastest-growing cloud trend, with an annual growth rate of 75% and adoption expected to climb even higher. With that in mind, let’s understand the significance of the AWS components in the Virtual Private Cloud (VPC). Each of these components shows why AWS is the right choice for hosting pure or headless Drupal websites.

Architecture diagram showcasing Drupal hosting on AWS

  • Restrict connection: NAT Gateway

Network Address Translation (NAT) gateway enables instances in a private subnet to connect to the internet or other AWS services. The instances in the private subnet are thus not exposed via the Internet gateway; instead, all their traffic is routed through the NAT gateway.

The gateway ensures that the site will always remain up and running. AWS takes over the responsibility of its maintenance.

  • Restrict access: Bastion Host

A bastion host protects the system by restricting access to backend systems in protected or sensitive network segments, minimising the chances of a potential security attack.
  • Database: AWS Aurora

The Aurora database provides invaluable reliability and scalability, better performance and response times. With fast failover capabilities and storage durability, it minimizes technical obstacles.

  • Upload content: Amazon S3

With Amazon S3, you can store, retrieve, and protect any amount of data at any time in a scalable storage bucket. Recover lost data easily, pay only for the storage you actually use, protect data from unauthorized use, and easily upload and download your data with SSL encryption.

  • Memcached/Redis: AWS ElastiCache

ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store in the cloud.
  • Edge Caching: AWS CloudFront

CloudFront is the AWS content delivery network, providing a globally-distributed network of proxy servers that cache content close to consumers to improve download speeds.
  • Web servers: Amazon EC2

Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud.

Amazon Route 53 effectively connects user requests to infrastructure running in AWS and can also be used to route users to infrastructure outside of AWS.

Benefits of Hosting Drupal Website on AWS

Let’s look at the advantages of AWS for hosting pure or headless Drupal websites.

High Performing Hosting Environment

The kind of performance you want from your server depends upon the type of Drupal website you are building. A simple website with a decent amount of traffic can work well on a limited shared host platform. However, for a fairly complex interactive Drupal site, a typical shared hosting solution might not be feasible. 

Instead, opt for AWS, which provides the server capacity you need and bills you as per your usage.

Improved Access To Server Environment 

A shared hosting environment restricts its users from gaining full control and limits their ability to change configurations for Apache or PHP, and there might be caps on bandwidth and file storage. These limitations are only removed when you're willing to pay a higher premium for advanced-level access and hosting services.

This is not true with AWS, which gives you direct control over your server instances, with permission to SSH in or use its control panel to adjust settings.

Control over the infrastructure 

Infrastructure needs might not remain constant and are bound to change with time. Adding or removing hosting resources might prove difficult or even impossible, and you would end up paying for unused resources.

However, opting for AWS will let you pay for the services you use and can shut them off easily if you don’t need them anymore. On-demand virtual hosting and a wide variety of services and hardware types make AWS convenient for anyone and everyone.

No Long-term Commitments

If you are hosting a website to gauge the performance and responsiveness, you probably would not want to allocate a whole bunch of machines and resources for a testing project which might be over within a week or so.

The convenience of AWS on-demand instances means that you can spin up a new server in a matter of minutes, and shut it down (without any further financial cost) in just as much time.

Refrain Physical Hardware Maintenance

The advantage of using virtual resources is to avoid having to buy and maintain physical hardware. 

Going with virtually hosted servers on AWS helps you focus on your core competency, creating Drupal websites, and frees you from dealing with data center operations.

Why Choose Srijan?

Srijan’s team of AWS professionals can help you migrate your website to the AWS cloud. With expertise in building Drupal-optimised hosting environments on AWS for reliable enterprise-level hosting, we can help you implement various AWS capabilities as per your enterprise’s requirements. Drop us a line and let our experts explore how you can get the best of AWS.

Aug 05 2019
Aug 05

In the previous entry, we learned how to use process plugins to transform data between source and destination. Some Drupal fields have multiple components. For example, formatted text fields store the text to display and the text format to apply. Image fields store a reference to the file, the alternative and title texts, and the width and height. The Migrate API refers to a field’s components as subfields. Today we will learn how to migrate into them and how to know which subfields are available.

Examples of migrating into subfields

Getting the example code

Today’s example will consist of migrating data into the `Body` and `Image` fields of the `Article` content type that are available out of the box. This assumes that Drupal was installed using the `standard` installation profile. As in previous examples, we will create a new module and write a migration definition file to perform the migration. The code snippets will be compact to focus on particular elements of the migration. The full code snippet is available at https://github.com/dinarcon/ud_migrations The module name is `UD Migration Subfields` and its machine name is `ud_migrations_subfields`. The `id` of the example migration is `udm_subfields`. Refer to this article for instructions on how to enable the module and run the migration.

source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      name: 'Michele Metts'
      profile: 'freescholar on Drupal.org'
      photo_url: 'https://agaric.coop/sites/default/files/2018-12/micky-cropped.jpg'
      photo_description: 'Photo of Michele Metts'
      photo_width: '587'
      photo_height: '657'

Only one record is presented to keep the snippet short, but more exist. In addition to having a unique identifier, each record includes a name, a short profile, and details about the image.

Migrating formatted text

The `Body` field is of type `Text (formatted, long, with summary)`. This type of field has three components: the full text (value) to present, a summary text, and a text format. The Migrate API allows you to write to each component separately by defining subfield targets. The next code snippet shows how to do it:

process:
  field_text_with_summary/value: source_value
  field_text_with_summary/summary: source_summary
  field_text_with_summary/format: source_format

The syntax to migrate into subfields is the machine name of the field and the subfield name separated by a slash (/). Then, a colon (:), a space, and the value. You can set the value to a source column name for a verbatim copy or use any combination of process plugins. It is not required to migrate into all subfields. Each field determines what components are required, so it is possible that not all subfields are set. In this example, only the value and text format will be set.

process:
  body/value: profile
  body/format:
    plugin: default_value
    default_value: restricted_html

The `value` subfield is set to the `profile` source column. As you can see in the first snippet, it contains HTML markup. An `a` tag to be precise. Because we want the tag to be rendered as a link, a text format that allows such tag needs to be specified. There is no information about text formats in the source, but Drupal comes with a couple we can choose from. In this case, we use the `Restricted HTML` text format. Note that the `default_value` plugin is used and set to `restricted_html`. When setting text formats, it is necessary to use its machine name. You can find them in the configuration page for each text format. For `Restricted HTML` that is /admin/config/content/formats/manage/restricted_html.

Note: Text formats are a whole different subject that even has security implications. To keep the discussion on topic, we will only give some recommendations. When you need to migrate HTML markup, you need to know which tags appear in your source, decide which ones you want to allow in Drupal, and select a text format that accepts what you have whitelisted and filters out any dangerous tags like `script`. As a general rule, you should avoid setting the `format` subfield to use the `Full HTML` text format.

Migrating images

There are different approaches to migrating images. Today, we are going to use the Migrate Files module. It is important to note that Drupal treats images as files with extra properties and behavior. Any approach used to migrate files can be adapted to migrate images.

process:
  field_image/target_id:
    plugin: file_import
    source: photo_url
    reuse: TRUE
    id_only: TRUE
  field_image/alt: photo_description
  field_image/title: photo_description
  field_image/width: photo_width
  field_image/height: photo_height

When migrating any field, you have to use its machine name in the mapping section. For the `Image` field, the machine name is `field_image`. Knowing that, you set each of its subfields:

  • `target_id` stores an integer number which Drupal uses as a reference to the file.
  • `alt` stores a string that represents the alternative text. Always set one for better accessibility.
  • `title` stores a string that represents the title attribute.
  • `width` stores an integer number which represents the width in pixels.
  • `height` stores an integer number which represents the height in pixels.

For the `target_id`, the plugin `file_import` is used. This plugin requires a `source` configuration value with a URL to the file. In this case, the `photo_url` column from the source section is used. The `reuse` flag indicates that if a file with the same location and name exists, it should be used instead of downloading a new copy. When working on migrations, it is common to run them over and over until you get the expected results. Using the `reuse` flag will avoid creating multiple references or copies of the image file, depending on the plugin configuration. The `id_only` flag is set so that the plugin only returns that file identifier used by Drupal instead of an entity reference array. This is done because each subfield is being set manually. For the rest of the subfields (`alt`, `title`, `width`, and `height`) the value is a verbatim copy from the source.

Note: The Migrate Files module offers another plugin named `image_import`. That one allows you to set all the subfields as part of the plugin configuration. An example of its use will be shown in the next article. This example uses the `file_import` plugin to emphasize the configuration of the image subfields.

Which subfields are available?

Some fields have many subfields. Address fields, for example, have 13 subfields. How can you know which ones are available? The answer is found in the class that provides the field type. Once you find the class, look for the `schema` method. The subfields are contained in the `columns` array of the value returned by the `schema` method. Let’s see some examples:

  • The `Text (plain)` field is provided by the StringItem class.
  • The `Number (integer)` field is provided by the IntegerItem class.
  • The `Text (formatted, long, with summary)` is provided by the TextWithSummaryItem class.
  • The `Image` field is provided by the ImageItem class.

The `schema` method defines the database columns used by the field to store its data. When migrating into subfields, you are actually migrating into those particular database columns. Any restriction set by the database schema needs to be respected. That is why you do not use units when migrating width and height for images. The database only expects an integer number representing the corresponding values in pixels. Because of object-oriented practices, sometimes you need to look at the parent class to know all the subfields that are available.

Another option is to connect to the database and check the table structures. For example, the `Image` field stores its data in the `node__field_image` table. Among others, this table has five columns named after the field’s machine name and the subfield:

  • field_image_target_id
  • field_image_alt
  • field_image_title
  • field_image_width
  • field_image_height

Looking at the source code or the database schema is arguably not straightforward. This information is included for reference to those who want to explore the Migrate API in more detail. You can look for migrations examples to see what subfields are available. I might even provide a list in a future blog post. ;-)

Tip: You can use Drupal Console for code introspection and analysis of database table structures. Also, many plugins are defined by classes that end with the string `Item`. You can use your IDE's search feature to find the class using the name of the field as a hint.

Default subfields

Every Drupal field has at least one subfield. For example, `Text (plain)` and `Number (integer)` define only the `value` subfield. The following code snippets are equivalent:

process:
  field_string/value: source_value_string
  field_integer/value: source_value_integer
process:
  field_string: source_value_string
  field_integer: source_value_integer

In examples from previous days, no subfield has been manually set, but Drupal knows what to do. As we have mentioned, the Migrate API offers syntactic sugar to write shorter migration definition files. This is another example. You can safely skip the default subfield and manually set the others as needed. For `File` and `Image` fields, the default subfield is `target_id`. How does the Migrate API know what subfield is the default? You need to check the code again.

The default subfield is determined by the return value of the `mainPropertyName` method of the class providing the field type. Again, object-oriented practices might require looking at the parent classes to find this method. In the case of the `Image` field, it is provided by ImageItem, which extends FileItem, which extends EntityReferenceItem. It is the latter that contains the `mainPropertyName` method, returning the string `target_id`.
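As a consequence, and assuming a hypothetical source column that already contains Drupal file IDs, the following two mappings for the `Image` field would be equivalent:

process:
  field_image: source_file_id
process:
  field_image/target_id: source_file_id

In both cases, the Migrate API writes the value to the target_id database column.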

What did you learn in today’s blog post? Were you aware of the concept of subfields? Did you ever wonder what are the possible destination targets (subfields) for each field type? Did you know that the Migrate API finds the default subfield for you? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 03 2019
Aug 03

In the previous entry, we wrote our first Drupal migration. In that example, we copied verbatim values from the source to the destination. More often than not, the data needs to be transformed in some way or another to match the format expected by the destination or to meet business requirements. Today we will learn more about process plugins and how they work as part of the Drupal migration pipeline.

Syntax for process plugin definition and chaining

Syntactic sugar

The Migrate API offers a lot of syntactic sugar to make it easier to write migration definition files. Field mappings in the process section are an example of this. Each of them requires a process plugin to be defined. If none is manually set, then the get plugin is assumed. The following two code snippets are equivalent in functionality.

process:
  title: creative_title

process:
  title:
    plugin: get
    source: creative_title

The get process plugin simply copies a value from the source to the destination without making any changes. Because this is a common operation, get is considered the default. There are many process plugins provided by Drupal core and contributed modules. Their configuration can be generalized as follows:

process:
  destination_field:
    plugin: plugin_name
    config_1: value_1
    config_2: value_2
    config_3: value_3

The process plugin is configured within an extra level of indentation under the destination field. The plugin key is required and determines which plugin to use. Then, a list of configuration options follows. Refer to the documentation of each plugin to know what options are available. Some configuration options will be required while others will be optional. For example, the concat plugin requires a source, but the delimiter is optional. An example of its use appears later in this entry.

Providing default values

Sometimes, the destination requires a property or field to be set, but that information is not present in the source. Imagine you are migrating nodes. As we have mentioned, it is recommended to write one migration file per content type. If you know in advance that for a particular migration you will always create nodes of type Basic page, then it would be redundant to have a column in the source with the same value for every row. The data might not be needed. Or it might not exist. In any case, the default_value plugin can be used to provide a value when the data is not available in the source.

source: ...
process:
  type:
    plugin: default_value
    default_value: page
destination:
  plugin: 'entity:node'

The above example sets the type property for all nodes in this migration to page, which is the machine name of the Basic page content type. Do not confuse the name of the plugin with the name of its configuration property as they happen to be the same: default_value. Also note that because a (content) type is manually set in the process section, the default_bundle key in the destination section is no longer required. You can see the latter being used in the example from the blog post about writing your first Drupal migration.

Concatenating values

Consider the following migration request: you have a source listing people with first and last name in separate columns. Both are capitalized. The two values need to be put together (concatenated) and used as the title of nodes of type Basic page. The character casing needs to be changed so that only the first letter of each word is capitalized. If there is a need to display them in all caps, CSS can be used for presentation. For example: FELIX DELATTRE would be transformed to Felix Delattre.

Tip: Question business requirements when they might produce undesired results. For instance, if you were to implement this feature as requested DAMIEN MCKENNA would be transformed to Damien Mckenna. That is not the correct capitalization for the last name McKenna. If automatic transformation is not possible or feasible for all variations of the source data, take notes and perform manual updates after the initial migration. Evaluate as many use cases as possible and bring them to the client’s attention.

To implement this feature, let’s create a new module ud_migrations_process_intro, create a migrations folder, and write a migration definition file called udm_process_intro.yml inside it. Follow the instructions in this entry to find the proper location and folder structure or download the sample module from https://github.com/dinarcon/ud_migrations It is the one named UD Process Plugins Introduction and machine name udm_process_intro. For this example, we assume a Drupal installation using the standard installation profile which comes with the Basic page content type. Let’s see how to handle the concatenation of first and last name.

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      first_name: 'FELIX'
      last_name: 'DELATTRE'
    -
      unique_id: 2
      first_name: 'BENJAMIN'
      last_name: 'MELANÇON'
    -
      unique_id: 3
      first_name: 'STEFAN'
      last_name: 'FREUDENBERG'
  ids:
    unique_id:
      type: integer
process:
  type:
    plugin: default_value
    default_value: page
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
destination:
  plugin: 'entity:node'

The concat plugin can be used to glue together an arbitrary number of strings. Its source property contains an array of all the values that you want to put together. The delimiter is an optional parameter that defines a string to add between the elements as they are concatenated. If not set, there will be no separation between the elements in the concatenated result. This plugin has an important limitation: you cannot use string literals as part of what you want to concatenate, for example, joining the string Hello with the value of the first_name column. All the values to concatenate need to be columns in the source or fields already available in the process pipeline. We will talk about the latter in a future blog post. A workaround using source constants is sketched after the execution instructions below.

To execute the above migration, you need to enable the ud_migrations_process_intro module. Assuming you have Migrate Run installed, open a terminal, switch directories to your Drupal docroot, and execute the following command: drush migrate:import udm_process_intro Refer to this entry if the migration fails. If it works, you will see three basic pages whose title contains the names of some of my Drupal mentors. #DrupalThanks
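As for the string literal limitation mentioned above, one workaround is to store the literal in a source constant and concatenate it like any other column. The following hypothetical sketch adds a GREETING constant to the same embedded_data source used in this example:

source:
  constants:
    GREETING: 'Hello'
process:
  title:
    plugin: concat
    source:
      - constants/GREETING
      - first_name
    delimiter: ' '

Constants are covered in detail in the entry about constants and pseudofields.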

Chaining process plugins

Good progress so far, but the feature has not been fully implemented. You still need to change the capitalization so that only the first letter of each word in the resulting title is uppercase. Thankfully, the Migrate API allows chaining of process plugins. This works similarly to Unix pipelines in that the output of one process plugin becomes the input of the next one in the chain. When the last plugin in the chain completes its transformation, the return value is assigned to the destination field. Let’s see this in action:

id: udm_process_intro
label: 'UD Process Plugins Introduction'
source: ...
process:
  type: ...
  title:
    -
      plugin: concat
      source:
        - first_name
        - last_name
      delimiter: ' '
    -
      plugin: callback
      callable: mb_strtolower
    -
      plugin: callback
      callable: ucwords
destination: ...

The callback process plugin passes a value to a PHP function and returns its result. The function to call is specified in the callable configuration option. Note that this plugin expects a source option containing a column from the source or a value from the process pipeline. That value is sent as the first argument to the function. Because we are using the callback plugin as part of a chain, the source is assumed to be the last output of the previous plugin. Hence, there is no need to define a source. So, we concatenate the columns, make them all lowercase, and then capitalize each word.
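For reference, a standalone (non-chained) use of callback might look like the following hypothetical snippet, which lowercases the creative_title column before assigning it:

process:
  title:
    plugin: callback
    callable: mb_strtolower
    source: creative_title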

Relying on direct PHP function calls should be a last resort. Better alternatives include writing your own process plugins, which encapsulate your business logic separately from the migration definition. The callback plugin comes with its own limitation: you cannot pass extra parameters to the callable function. It will receive the specified value as its first argument and nothing else. In the above example, we could combine the calls to mb_strtolower() and ucwords() into a single call to mb_convert_case($source, MB_CASE_TITLE) if passing extra parameters were allowed.

Tip: You should have a good understanding of your source and destination formats. In this example, one of the values we want to transform is MELANÇON. Because of the cedilla (ç), using strtolower() is not adequate in this case since it would leave that character uppercase (melanÇon). Multibyte string functions (mb_*) are required for proper transformation. ucwords() is not one of them and would present similar issues if the first letters of the words are special characters. Attention should be given to the character encoding of the tables in your destination database.

Technical note: mb_strtolower is a function provided by the mbstring PHP extension. It does not come enabled by default, or you might not have it installed at all. In those cases, the function would not be available when Drupal tries to call it. The following error is produced when trying to call a function that is not available: The "callable" must be a valid function or method. For Drupal and this particular function, that error would never be triggered, even if the extension is missing. That is because Drupal core depends on some Symfony packages which in turn depend on the symfony/polyfill-mbstring package. The latter provides a polyfill for mb_* functions that has been leveraged since version 8.6.x of Drupal.

What did you learn in today’s blog post? Did you know that syntactic sugar allows you to write shorter plugin definitions? Were you aware of process plugin chaining to perform multiple transformations over the same data? Had you considered character encoding on the source and destination when planning your migrations? Are you making your best effort to avoid the callback process plugin? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 03 2019
Aug 03

Approaching 20 years old, the Drupal Community must prioritize recruiting the next generation of Drupal Professionals

Kaleem Clarkson
Ferris Wheel in Centennial Olympic Park in Atlanta, Georgia

Time flies when you are having fun. One of those sayings I remember my parents saying that turned out to be quite true. My first Drupal experience was nearly 10 years ago and within a blink of an eye, we have seen enormous organizations adopt and commit to Drupal such as Turner, the Weather Channel, The Grammys, and Georgia.gov.

Throughout the years, I have been very fortunate to meet a lot of Drupal community members in person but one thing I have noticed lately is that nearly everyone’s usernames can be anywhere between 10–15 years old. What does that mean? As my dad would say, it means we are getting O — L — D, old.

For any thriving community, family business, organization, or even your favorite band for that matter, all of these entities must think about succession planning. Succession what, what is succession planning?

Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire or die. -Wikipedia

That's right, we need to start planning a process for identifying who can take over leadership roles that continue to push Drupal forward. If we intend to keep Drupal a viable solution for large and small enterprises, then we should market ourselves to the talent pool as a viable career option in an attempt to lure talent to our community.

There are many different ways to promote our community and develop new leaders. One is mentorship, which helps ease the barrier to entry into our community by providing guidance on how our community operates. Our community does have some great efforts taking place in the form of mentoring, such as the Drupal Diversity & Inclusion (DDI) initiative, the core mentoring initiative, and of course the code and mentoring sprints at DrupalCon and DrupalCamps. These efforts are awesome and should be recognized as part of a larger strategic initiative to recruit the next generation of Drupal professionals.

Companies spend billions of dollars a year in recruiting but as an open-source community, we don’t have billions so

… what else can we do to attract new Drupal career professionals?

This year, the Atlanta Drupal Users Group (ADUG) decided to develop the Drupal Career Summit, all in an effort to recruit more professionals into the Drupal community. Participants will explore career opportunities, career development, and how open source solutions are changing the way we buy, build, and use technology.

  • Learn about job opportunities and training.
  • Hear how local leaders progressed through their careers and the change open source creates for their clients and businesses.
  • Connect one-on-one with professionals in the career you want and learn about their progression, opportunities, challenges, and wins.

Saturday, September 14, from 1:00pm to 4:30pm. Hilton Garden Inn Atlanta-Buckhead, 3342 Peachtree Rd. NE | Atlanta, GA 30326 | LEARN MORE

Students and job seekers can attend for FREE! The Summit will allow you to meet with potential employers and industry leaders. We’ll begin the summit with a panel of marketers, developers, designers, and managers that have extensive experience in the tech industry, and more specifically, the Drupal community. You’ll get a chance to learn about career opportunities and connect with peers with similar interests.

We’re looking for companies that want to hire and educate. You can get involved with the summit by becoming a sponsor for DrupalCamp Atlanta. Sponsors of the event will have the opportunity to engage with potential candidates through sponsored discussion tables and branded booths. With your sponsorship, you’ll get a booth, a discussion table, and 2 passes! At your booth, you’ll get plenty of foot traffic and a fantastic chance to network with attendees.

If you can’t physically attend our first Career Summit, you can still donate to our fundraising goals. And if you are not in a position to donate, invite your employer, friends, and colleagues to participate. Drupal Career Summit.

Aug 03 2019
Aug 03

Decoupled Days 2019 last month, the third edition of the conference, was fantastic. I had the privilege to speak and attend last year as well. The conference has quickly risen to become one of my favorite conferences of the year.

Not familiar with Decoupled Days? Spawned in the “Hallway Track” of DrupalCon among its founders, the conference originated as Decoupled Drupal Days in 2017. Last year saw the phasing out of the word “Drupal” as the conference became focused on decoupling in general, not just Drupal. That is one reason it has quickly become a favorite event. It is an engineering and design conference: the act of decoupling a system requires specific system design and presents engineering challenges. The organizers identify it as:

The only conference on the future of CMS, headless CMS, and decoupled CMS.

It’s not just about Drupal. Moreover, it’s not about abandoning Drupal. It is about using the correct tooling for the exact problem, and about how our previous paradigms fit into this new architecture. One of the most exciting concepts discussed throughout the sessions was the shift of “where Drupal lives in the architecture stack.” Oh, and of course the exciting “battle” between GraphQL and RESTful APIs; more precisely for the event, the JSON API specification.

GraphQL vs JSON API

One identifiable topic at the conference was GraphQL. Facebook created the GraphQL specification, and the GraphQL Foundation now manages it. GraphQL has also grown in popularity thanks to venture-capital-backed companies like Apollo, which have built server and client tooling around it.

There were no JSON API specific sessions at the conference, but the JSON API maintainers were in attendance to listen and take in information rather than speak.

One of the most significant conversation pieces about GraphQL is that it isn’t a RESTful API interface. GraphQL is a query-like language that allows you to pass queries or mutations (non-read operations) as a GET or POST to the GraphQL endpoint. Each GraphQL server has its own schema that maps to the backend data storage.
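
As a rough sketch of what that looks like in practice, here is a hypothetical query and mutation; the type and field names are invented for illustration and do not come from any specific session or schema:

# Read: the consumer asks for exactly the fields it needs.
query {
  article(id: 42) {
    title
    author {
      name
    }
  }
}

# Write: mutations cover the non-read operations.
mutation {
  createArticle(input: { title: "Decoupled Days recap" }) {
    id
  }
}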

I enjoyed having many conversations about the different use cases for GraphQL and JSON API. I feel that GraphQL is a perfect fit for getting up and running with a read-only API whose schema is specially designed around the consumer’s needs. Beyond that, when an API requires manipulation of data, or you don’t want to define the API schema yourself, JSON API comes out on top. Again, it comes down to preference and the proper tooling for the project.
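
For comparison, reading data through Drupal’s JSON:API implementation is a plain HTTP GET against a predictable URL. The sparse fieldset parameter below is part of the JSON:API specification; the domain and the article content type are assumptions for illustration:

$ curl -H 'Accept: application/vnd.api+json' \
  'https://example.com/jsonapi/node/article?fields[node--article]=title,created'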

After my Delivering Headless Commerce session, I was able to sit down with Mateu Aguiló Bosch, Wim Leers, Gabe Sullice, and others to discuss the Drupal JSON API implementation. One of the biggest takeaways from that conversation is the plan forward to provide cross-bundle entity collections. The JSON API Cross Bundles module is being developed to sandbox feature development, with the goal of moving support into Drupal core once evaluated.

Decoupled architectures in practice

I made it to two sessions which covered decoupled architectures that placed Drupal in a different position of the architecture stack.

Dominic Laylock from Mentrix discussed how the Economist scaled their content platform using an event-driven decoupled architecture. Drupal had become the monolithic centerpiece of the platform. With growing multichannel requirements, the Economist moved to an event-driven, microservice architecture. Drupal shifted to the background, with its content pushed out, treating it as a content source like the other items in their stack. Data is taken from Drupal, processed by workers into a canonical data source based on Schema.org, and pushed into their content store. All consumers access content through a GraphQL endpoint that only reads from the content store. The session page: https://decoupleddays.com/session/content-systems-architecture-approaches-decoupled-world

Mateu Aguiló Bosch’s session focused on the importance of using a proxy in front of your Drupal API server. Mateu’s main argument was that your API proxy should be a Node.js server. In all reality, the proxy could be replaced by any service which handles concurrent requests and allows routing between different backend services. Many cloud providers offer API gateways, or you could write one in Golang or ReactPHP. The benefit of using Node.js? A Node.js server can execute server-side rendering for modern JavaScript frameworks. Server-side rendering improves your JavaScript application’s SEO and accessibility by rendering HTML on the server and not the client, like a standard server-side application. Here is a link to the session for when videos are available: https://decoupleddays.com/session/nodejs-proxy-between-drupal-and-your-consumers
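
To illustrate the routing half of that argument, here is a minimal sketch of such a proxy using only Node.js built-ins. The ports are assumptions, and a real deployment would also handle server-side rendering, caching, and errors:

const http = require('http');

// Listen on a public port and forward every request to the Drupal
// backend (assumed at localhost:8080). A fuller proxy would route some
// paths to an SSR renderer for the JavaScript application instead.
http.createServer((req, res) => {
  const backend = http.request(
    { host: 'localhost', port: 8080, path: req.url, method: req.method, headers: req.headers },
    (backendRes) => {
      res.writeHead(backendRes.statusCode, backendRes.headers);
      backendRes.pipe(res);
    }
  );
  req.pipe(backend);
}).listen(3000);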

Decoupled: 3 years of hosting and building

Michael Schmid of Amazee discussed their history of building and hosting decoupled architectures, which spawned the creation of the amazee.io hosting company. Amazee Labs is the agency leading the development of the GraphQL server spec in Drupal. It was interesting to learn that the development and training were leveraged through client work, much like Centarro with Drupal Commerce. Here is a link to the session for when videos are available: https://decoupleddays.com/session/3-years-decoupled-website-building-and-hosting-our-learnings

Delivering Headless Commerce

On Thursday, I gave my Delivering Headless Commerce session. Centarro has been developing API-First improvements for Drupal Commerce, alongside features such as Addressbook, Pricelists, and Invoices. The presentation covered the available API integration options in Drupal and how to build consumers for a headless Drupal Commerce site. Both GraphQL and JSON:API have their benefits and challenges when implementing headless eCommerce. The slides go into depth on using either technology for the various components that go into a headless Drupal Commerce build. I discuss improvements we have made and our current development status to improve the API capabilities of Drupal Commerce.

You can also catch my talk again at the Drupal Chicago meetup on August 7th: https://www.meetup.com/drupalchicago/events/262875406/

Aug 02 2019
Aug 02

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past July. You'll find that the meetings, while providing updates on completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things ramp up for Drupal 9, so if you see something you think you can provide insights on, we encourage you to get involved.

Out of the Box meeting, 07/30/19

Issues were being worked on by Ofer Shaal, Keith Jay, and Mark Conroy.

Still needs work:

Needs to be reviewed:

Admin UI meeting, 07/31/19

Meetings are for core and contributed project developers, as well as people who have integrations and services related to core.

  • Usually happens every other Wednesday at 2:30pm UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • There are roughly 5-10 minutes between topics for those who are multitasking to follow along.

Beta & Stable issue blockers that are not components and need design:

  • Refer to the spreadsheet of Claro components to find other items that need work. They can be found in the "other design issues" tab.
  • Also, several issues are being marked as beta & stable blockers.
    • If you catch any issue that you think needs design work, please add it there so the design team is aware of it.
    • If you want to go through any of those and add screenshots or other info, we'll really appreciate that!

Editor role next steps:

Tables on mobile:

  • Table overflow on mobile currently needs to be reworked.
  • There is a design proposal currently under review.
    • Andrew Nevins (anevins) recommended looking into accessible solutions to this problem, as making a table responsive (apart from simply adding a horizontal scrollbar) can disrupt the semantics of the table for the people who need it the most.

Aug 02 2019
Aug 02

In the previous entry, we learned that the Migrate API is an implementation of an ETL framework. We also talked about the steps involved in writing and running migrations. Now, let’s write our first Drupal migration. We are going to start with a very basic example: creating nodes out of hardcoded data. For this, we assume a Drupal installation using the standard installation profile which comes with the Basic Page content type.

As we progress through the series, the migrations will become more complete and more complex. Ideally, only one concept will be introduced at a time. When that is not possible, we will explain how different parts work together. The focus of today's lesson is learning the structure of a migration definition file and how to run it.

Writing the migration definition file

The migration definition file needs to live in a module. So, let’s create a custom one named ud_migrations_first and set Drupal core’s migrate module as a dependency in the *.info.yml file. The content of that file will be:

type: module
name: UD First Migration
description: 'Example of basic Drupal migration. Learn more at https://understanddrupal.com/migrations.'
package: Understand Drupal
core: 8.x
dependencies:
  - drupal:migrate

Now, let’s create a folder called migrations and inside it a file called udm_first.yml. Note that the extension is yml, not yaml. The content of the file will be:

id: udm_first
label: 'UD First migration'
source:
  plugin: embedded_data
  data_rows:
    -
      unique_id: 1
      creative_title: 'The versatility of Drupal fields'
      engaging_content: 'Fields are Drupal''s atomic data storage mechanism...'
    -
      unique_id: 2
      creative_title: 'What is a view in Drupal? How do they work?'
      engaging_content: 'In Drupal, a view is a listing of information. It can be a list of nodes, users, comments, taxonomy terms, files, etc...'
  ids:
    unique_id:
      type: integer
process:
  title: creative_title
  body: engaging_content
destination:
  plugin: 'entity:node'
  default_bundle: page
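
The final folder structure, relative to the module’s own directory, will look like:

.
├── ud_migrations_first.info.yml
└── migrations
    └── udm_first.yml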

YAML is a key-value format with optional nesting of elements. It is very sensitive to white space and indentation. For example, at least one space character is required after the colon (:) that separates a key from its value. Also, note that each level in the hierarchy is indented by exactly two spaces. A common source of errors when writing migrations is improper spacing or indentation in the YAML files.

A quick glimpse at the file reveals the three major parts: source, process, and destination. Other keys provide extra information about the migration. There are more keys than the ones shown above. For example, it is possible to define dependencies among migrations. Another option is to tag migrations so they can be executed together. We are going to learn more about these options in future entries.

Let’s review each key-value pair in the file. For id, it is customary to set its value to match the filename containing the migration definition, but without the .yml extension. This key serves as an internal identifier that Drupal and the Migrate API use to execute and keep track of the migration. The id value should consist of alphanumeric characters, optionally using underscores to separate words. As for the label key, it is a human-readable string used to name the migration in various interfaces.

In this example we are using the embedded_data source plugin. It allows you to define the data to migrate right inside the definition file. To configure it, you define a data_rows key whose value is an array of all the elements you want to migrate. Each element might contain an arbitrary number of key-value pairs representing “columns” of data to be imported.

A common use case for the embedded_data plugin is testing the Migrate API itself. Another valid one is creating default content when the data is known in advance. I often present introductory Drupal workshops. To save time, I use this plugin to create nodes which are later used when explaining views creation. Check this repository for an example. Note that it uses a different directory structure to define the migrations. That will be explained in future blog posts.

For the destination, we are using the entity:node plugin, which allows you to create nodes of any content type. The default_bundle key indicates that, by default, all nodes to be created will be of type “Basic page”. It is important to note that the value of the default_bundle key is the machine name of the content type. You can find it at /admin/structure/types/manage/page. In general, the Migrate API uses machine names for values. As we explore the system, we will point out when they are used and where to find the right ones.

In the process section, you map columns from the source to node properties and fields. The keys are entity property names or field machine names. In this case, we are setting values for the title of the node and its body field. You can find the field machine names on the content type’s configuration page: /admin/structure/types/manage/page/fields. Values can be copied directly from the source or transformed via process plugins; a sketch of that follows below. This example makes a verbatim copy of the values from the source to the destination. The column names in the source are not required to match the destination property or field name. In this example, they are purposely different to make them easier to identify.
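
For instance, writing title: creative_title is syntactic sugar for explicitly invoking the get plugin. The longhand below is equivalent, and the body mapping shows how plugins can be chained; the trim callback is only a hypothetical transformation added for illustration:

process:
  # Equivalent to the shorthand 'title: creative_title'.
  title:
    plugin: get
    source: creative_title
  # Plugins chain: each one receives the previous plugin's output.
  body:
    - plugin: get
      source: engaging_content
    - plugin: callback
      callable: trim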

You can download the example code from https://github.com/dinarcon/ud_migrations. The example above is actually in a submodule in that repository. The same repository will be used for many examples throughout the series. Download the whole repository into the ./modules/custom directory of the Drupal installation and enable the “UD First Migration” module.

Running the migration

Let’s use Drush to run the migrations with the commands provided by Migrate Run. Open a terminal, switch directories to Drupal’s webroot, and execute the following commands.


$ drush pm:enable -y migrate migrate_run ud_migrations_first
$ drush migrate:status
$ drush migrate:import udm_first

The first command enables the core migrate module, the runner, and the custom module holding the migration definition file. The second command shows a list of all migrations available in the system. Only one should be listed with the migration ID udm_first. The third command executes the migration. If all goes well, you can visit the content overview page at /admin/content and see two basic pages created. Congratulations, you have successfully run your first Drupal migration!!!

Or maybe not? Drupal migrations can fail in many ways and sometimes the error messages are not very descriptive. In upcoming blog posts we will talk about recommended workflows and strategies for debugging migrations. For now, let’s mention a couple of things that could go wrong with this example. If after running the drush migrate:status command you do not see the udm_first migration, make sure that the ud_migrations_first module is enabled. If it is enabled, and you do not see it, rebuild the cache by running drush cache:rebuild.

If you see the migration, but you get a YAML parse error when running the migrate:import command, check your indentation. Copying and pasting from GitHub to your IDE/editor might change the spacing. An extraneous space can break the whole migration, so pay close attention. If the command reports that it created the nodes, but you get a fatal error when trying to view one, it is because the content type was not set properly. Remember that the machine name of the “Basic page” content type is page, not basic_page. This error cannot be fixed from the administration interface. What you have to do is roll back the migration, fix the default_bundle value, rebuild the cache, and import again, as shown below.
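
That recovery sequence boils down to three commands:

$ drush migrate:rollback udm_first
$ drush cache:rebuild
$ drush migrate:import udm_first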

Note: Migrate Tools could be used to run the migration. That module depends on Migrate Plus. For now, let’s keep module dependencies to a minimum to focus on core Migrate functionality. Also, skipping them demonstrates that these modules, although quite useful, are not hard requirements for running migration projects. If you decide to use Migrate Tools, make sure to uninstall Migrate Run. Both provide the same Drush commands and conflict with each other if both are enabled.

What did you learn in today’s blog post? Did you know that Migrate Plus and Migrate Tools are not hard requirements for Drupal migrations projects? Did you know you can place your YAML files in a migrations directory? What advice would you give to someone writing their first migration? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your friends and colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.
