Aug 18 2019

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, image, and paragraph migrations. Let’s get started.

Example configuration of JSON source migration

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD JSON source migration, whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example setup

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ],
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ],
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}

Note: You can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy the data that you want to import lies. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, which fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ]
  }
}

The array of records containing node data lies two levels deep in the hierarchy: you start with data at the root and then descend one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.
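
For instance, a source section reading two files at once might look like this sketch; the module name and file paths are made up for illustration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/my_module/sources/people_part_1.json
    - modules/custom/my_module/sources/people_part_2.json
  item_selector: data/udm_people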

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not its label or selector. The configuration of the migration lookup plugin and dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{
  "data": {
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the JSON file is to have two properties. paragraph_id would contain the unique identifier for the record. paragraph_data would be an object with a property to set the paragraph type. This would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields.
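
A hypothetical JSON structure following that approach could look like the sketch below; all property names here are made up for illustration:

{
  "data": {
    "udm_mixed_paragraphs": [
      {
        "paragraph_id": "P100",
        "paragraph_data": {
          "type": "ud_book_paragraph",
          "title": "Understanding Drupal Views",
          "author": "Carlos Dinarte"
        }
      },
      {...}
    ]
  }
}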

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{
  "data": {
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.
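
For instance, if you did want the dimensions, the extra field definitions might look like this; the src_photo_width and src_photo_height names are arbitrary:

  fields:
    - name: src_photo_width
      label: 'Photo width'
      selector: photo_dimensions/0
    - name: src_photo_height
      label: 'Photo height'
      selector: photo_dimensions/1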

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location of the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON file location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.

Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 17 2019

Today we will learn how to migrate content from a Comma-Separated Values (CSV) file into Drupal. We are going to use the latest version of the Migrate Source CSV module, which depends on the third-party library league/csv. We will show how to configure the source plugin to read files with or without a header row. We will also talk about a new feature that allows you to use stream wrappers to set the file location. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD CSV source migration, whose machine name is ud_migrations_csv_source. It comes with three migrations: udm_csv_source_paragraph, udm_csv_source_image, and udm_csv_source_node.

You can get the Migrate Source CSV module using composer: composer require drupal/migrate_source_csv. This will also download its dependency: the league/csv library. The example assumes you are using the 8.x-3.x branch of the module, which requires Composer to be installed. If your Drupal site is not composer-based, you can use the 8.x-2.x branch. Continue reading to learn the difference between the two branches.

Understanding the example setup

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain CSV migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from a CSV file.

Note that you can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating CSV files with a header row

In any migration project, understanding the source is very important. For CSV migrations, the primary thing to consider is whether or not the file contains a row of headers. Other things to consider are what characters to use as delimiter, enclosure, and escape character. For now, let’s consider the following CSV file whose first row serves as column headers:

unique_id,name,photo_file,book_ref
1,Michele Metts,P01,B10
2,Benjamin Melançon,P02,B20
3,Stefan Freudenberg,P03,B30

This file will be used in the node migration. The four columns are used as follows:

  • unique_id is the unique identifier for each record in this CSV file.
  • name is the name of a person. This will be used as the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration of the CSV source plugin for the node migration:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]

The name of the plugin is csv. Then you define the path pointing to the file itself. In this case, the path is relative to the Drupal root. Finally, you specify an ids array of column names that uniquely identify each record. As already stated, the unique_id column serves that purpose. Note that there is no need to specify all the column names from the CSV file. The plugin will automatically make them available. That is the simplest configuration of the CSV source plugin.

The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_csv_source_image
    source: photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_csv_source_image
    - udm_csv_source_paragraph
  optional: []

Note that the source for setting the image reference is photo_file. In the process pipeline you can directly use any column name that exists in the CSV file. The configuration of the migration lookup plugin and dependencies point to two CSV migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating CSV files without a header row

Now let’s consider two examples of CSV files that do not have a header row. The following snippets show the example CSV file and source plugin configuration for the paragraph migration:

B10,The definite guide to Drupal 7,Benjamin Melançon et al.
B20,Understanding Drupal Views,Carlos Dinarte
B30,Understanding Drupal Migrations,Mauricio Dinarte

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'

When you do not have a header row, you need to specify two more configuration options. header_offset has to be set to null. fields has to be set to an array where each element represents a column in the CSV file. You include a name for each column following the order in which they appear in the file. The name itself can be arbitrary. If it contains spaces, you need to put quotes (') around it. After that, you set the ids configuration to one or more columns using the names you defined.

In the process section you refer to source columns as usual. You write their name, adding quotes if it contains spaces. The following snippet shows how the process section is configured for the paragraph migration:

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'

The final example will show a slight variation of the previous configuration. The following two snippets show the example CSV file and source plugin configuration for the image migration:

P01,https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02,https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03,https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'

For each column defined in the fields configuration, you can optionally set a label. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to source columns. You keep using the column name. You can see this in the value of the ids configuration.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: photo_url

CSV file location

When setting the path configuration you have three options to indicate the CSV file location:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/csv_files/example.csv.
  • Use an absolute path pointing to the CSV location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/csv_files/example.csv.
  • Use a stream wrapper. This feature was introduced in the 8.x-3.x branch of the module. Previous versions cannot make use of them.

Being able to use stream wrappers gives you many options for setting the location to the CSV file. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://csv_files/example.csv.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/csv_files/example.csv.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/csv-files/example.csv.

CSV source plugin configuration

The configuration options for the CSV source plugin are very well documented in the source code. They are included here for quick reference:

  • path is required. It contains the path to the CSV file. Starting with the 8.x-3.x branch, stream wrappers are supported.
  • ids is required. It contains an array of column names that uniquely identify each record.
  • header_offset is optional. It is the index of the record to be used as the CSV header and, thereby, each record's field names. It defaults to zero (0) because the index is zero-based. For CSV files with no header row, the value should be set to null.
  • fields is optional. It contains a nested array of names and labels to use instead of a header row. If set, it will overwrite the column names obtained from header_offset.
  • delimiter is optional. It contains the one-character column delimiter. It defaults to a comma (,). For example, if your file uses tabs as delimiter, you set this configuration to \t, as shown in the sketch after this list.
  • enclosure is optional. It contains the one character used to enclose the column values. Defaults to double quotation marks (").
  • escape is optional. It contains the one character used for character escaping in the column values. It defaults to a backslash (\).
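
For example, a source definition for a hypothetical tab-delimited file that uses single quotes as enclosure could combine these options like this (the path is made up):

source:
  plugin: csv
  path: modules/custom/my_module/csv_files/example.tsv
  ids: [unique_id]
  delimiter: "\t"
  enclosure: "'"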

Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference on how to configure the plugin for the 8.x-2.x branch.

And that is how you can use CSV files as the source of your migrations. Because this is such a common need, moving the CSV source plugin into Drupal core was considered. The effort is currently on hold and it is unclear if it will materialize during Drupal 8’s lifecycle. The maintainers of the Migrate API are focusing their efforts on other priorities at the moment. You can read this issue to learn about the motivation and context for offering this functionality in Drupal core.

Note: The Migrate Spreadsheet module can also be used to migrate data from CSV files. It also supports Microsoft Office Excel and LibreOffice Calc (OpenDocument) files. The module leverages the PhpOffice/PhpSpreadsheet library.

What did you learn in today’s blog post? Have you migrated from CSV files before? Did you know that it is now possible to read files using stream wrappers? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 16 2019

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks about very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules

  1. It has to actually work for D8 (so modern PHP version, working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).

Reporting

Be prepared to talk for about 5 minutes on how your solution would work. Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Aug 15 2019

Today we will present an introduction to paragraphs migrations in Drupal. The example consists of migrating paragraphs of one type, then connecting the migrated paragraphs to nodes. A separate image migration is included to demonstrate how they are different. At the end, we will talk about behavior that deletes paragraphs when the host entity is deleted. Let’s get started.

Example mapping for paragraph reference field.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD paragraphs migration introduction, whose machine name is ud_migrations_paragraph_intro. It comes with three migrations: ud_migrations_paragraph_intro_paragraph, ud_migrations_paragraph_intro_image, and ud_migrations_paragraph_intro_node. One content type, one paragraph type, and four fields will be created when the module is installed.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type, the paragraph type, and the fields are automatically created and deleted.
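
For reference, here is a sketch of how that key looks inside one of those config/install files, using this example module's machine name:

dependencies:
  enforced:
    module:
      - ud_migrations_paragraph_intro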

You can get the Paragraphs module using composer: composer require drupal/paragraphs. This will also download its dependency: the Entity Reference Revisions module. If your Drupal site is not composer-based, you can get the code for both modules manually.

Understanding the example setup

The example code creates one paragraph type named UD book paragraph (ud_book_paragraph). It has two “Text (plain)” fields: Title (field_ud_book_paragraph_title) and Author (field_ud_book_paragraph_author). A new UD Paragraphs (ud_paragraphs) content type is also created. This has two fields: Image (field_ud_image) and Favorite book (field_ud_favorite_book) containing references to images and book paragraphs imported in separate migrations. The words in parentheses represent the machine names of the different elements.

The paragraph migration

Migrating into a paragraph type is very similar to migrating into a content type. You specify the source, process the fields making any required transformation, and set the destination entity and bundle. The following code snippet shows the source, process, and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - book_id: 'B10'
      book_title: 'The definite guide to Drupal 7'
      book_author: 'Benjamin Melançon et al.'
    - book_id: 'B20'
      book_title: 'Understanding Drupal Views'
      book_author: 'Carlos Dinarte'
    - book_id: 'B30'
      book_title: 'Understanding Drupal Migrations'
      book_author: 'Mauricio Dinarte'
  ids:
    book_id:
      type: string
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

The most important part of a paragraph migration is setting the destination plugin to entity_reference_revisions:paragraph. This plugin is actually provided by the Entity Reference Revisions module. It is very important to note that paragraph entities are revisioned. This means that when you want to create a reference to them, you need to provide two IDs: target_id and target_revision_id. Regular entity reference fields like files, images, and taxonomy terms only require the target_id. This will be further explained with the node migration.

The other configuration that you can optionally set in the destination section is default_bundle. The value will be the machine name of the paragraph type you are migrating into. You can do this when all the paragraphs for a particular migration definition file will be of the same type. If that is not the case, you can leave out the default_bundle configuration and add a mapping for the type entity property in the process section.
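
A minimal sketch of that alternative, assuming the source provides a field named src_paragraph_type (a hypothetical name) that holds the paragraph type machine name:

process:
  type: src_paragraph_type
destination:
  plugin: 'entity_reference_revisions:paragraph'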

You can execute the paragraph migration with this command: drush migrate:import ud_migrations_paragraph_intro_paragraph. After running the migration, there is not much you can do to verify that it worked. Contrary to other entities, there is no user interface available out of the box that lists all paragraphs in the system. One way to verify if the migration worked is to manually create a View that shows paragraphs. Another way is to query the database directly. You can inspect the tables that store the paragraph fields’ data. In this example, the tables would be:

  • paragraph__field_ud_book_paragraph_author for the current author.
  • paragraph__field_ud_book_paragraph_title for the current title.
  • paragraph_r__8c3a9563ac for all the author revisions.
  • paragraph_r__3fa7e9863a for all the title revisions.

Each of those tables contains information about the bundle (paragraph type), the entity id, the revision id, and the migrated field value. Table names are derived from the machine names of the fields. If they are too long, the field name will be hashed to produce a shorter table name. Having to query the database is not ideal. Unfortunately, the options available to check if a paragraph migration worked are limited at the moment.
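
For example, a quick spot check of the migrated titles with Drush could look like this, assuming Drupal's default table and column naming and no table prefix:

$ drush sql:query "SELECT entity_id, revision_id, field_ud_book_paragraph_title_value FROM paragraph__field_ud_book_paragraph_title"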

The node migration

The node migration will serve as the host for both referenced entities: images and paragraphs. The image migration is very similar to the one explained in a previous article. This time, the focus will be the paragraph migration. Both of them are set as dependencies of the node migration, so they need to be executed in advance. The following snippet shows how the source, destination, and dependencies are set:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
      book_ref: 'B10'
    - unique_id: 2
      name: 'Benjamin Melançon'
      photo_file: 'P02'
      book_ref: 'B20'
    - unique_id: 3
      name: 'Stefan Freudenberg'
      photo_file: 'P03'
      book_ref: 'B30'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - ud_migrations_paragraph_intro_image
    - ud_migrations_paragraph_intro_paragraph
  optional: []

Note that photo_file and book_ref both contain the unique identifier of records in the image and paragraph migrations, respectively. These can be used with the migration_lookup plugin to map the reference fields in the nodes to be migrated. ud_paragraphs is the machine name of the target content type.

The mapping of the image reference field follows the same pattern as the one explained in the article on migration dependencies. Using the migration_lookup plugin, you indicate which is the migration that should be searched for the images. You also specify which source column contains the unique identifiers that match those in the image migration. This operation will return a single value: the file ID (fid) of the image. This value can be assigned to the target_id subfield of field_ud_image to establish the relationship. The following code snippet shows how to do it:

field_ud_image/target_id:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_image
  source: photo_file

Paragraph field mappings

Before diving into the paragraph field mapping, let’s think about what needs to be done. Paragraphs are revisioned entities. To make a reference to them, you need two IDs: their entity id and their entity revision id. These two values need to be assigned to two subfields of the paragraph reference field: target_id and target_revision_id respectively. You have to come up with a process pipeline that complies with this requirement. There are many ways to do it, and the specifics will depend on your field configuration. In this example, the paragraph reference field allows an unlimited number of paragraphs to be associated, but only of one type: ud_book_paragraph. Another thing to note is that even though the field allows you to add as many paragraphs as you want, the example migrates exactly one paragraph.

With those considerations in mind, the mapping of the paragraph field will be a two step process. First, use the migration_lookup plugin to get a reference to the paragraph. Second, use the fetched values to set the paragraph reference subfields. The following code snippet shows how to do it:

pseudo_mbe_book_paragraph:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_paragraph
  source: book_ref
field_ud_favorite_book:
  plugin: sub_process
  source:
    - '@pseudo_mbe_book_paragraph'
  process:
    target_id: '0'
    target_revision_id: '1'

The first step is a normal migration_lookup procedure. The important difference is that instead of getting a single value, like with images, the paragraph lookup operation will return an array of two values. The format is like [3, 7] where the 3 represents the entity id and the 7 represents the entity revision id of the paragraph. Note that the array keys are not named. To access those values, you would use the index of the elements starting with zero (0). This will be important later. The returned array is stored in the pseudo_mbe_book_paragraph pseudofield.

The second step is to set the target_id and target_revision_id subfields. In this example, field_ud_favorite_book is the machine name of the paragraph reference field. Remember that it is configured to accept an arbitrary number of paragraphs, and each will require passing an array of two elements. This means you need to process an array of arrays. To do that, you use the sub_process plugin to iterate over an array of paragraph references. In this example, the structure to iterate over would be like this:

[
  [3, 7]
]

Let’s dissect how to do the mapping of the paragraph reference field. The source configuration of the sub_process plugin contains an array of paragraph references. In the example, that array has a single element: the '@pseudo_mbe_book_paragraph' pseudofield. The quotes (') and at sign (@) are required to reuse an element that appears earlier in the process pipeline. Then, in the process configuration, you set the subfields for the paragraph reference field. It is worth noting that at this point you are iterating over a list of paragraph references, even if that list contains only one element. If you had more than one paragraph to migrate, whatever you defined in process will apply to all of them.

The process configuration is an array of subfield mappings. The left side of the assignment is the name of the subfield you want to set. The right side of the assignment is an array index of the paragraph reference being processed. Remember that this array does not have named keys, so you use their numerical index to refer to them. The example sets the target_id subfield to the element in the 0 index and the target_revision_id subfield to the element in the 1 index. Using the example data, this would be target_id: 3 and target_revision_id: 7. The quotes around the numerical indexes are important. If not used, the migration will not find the indexes and the paragraphs will not be associated. The end result of this operation will be something like this:

'field_ud_favorite_book' => array (1) [
  array (2) [
    'target_id' => string (1) "3"
    'target_revision_id' => string (1) "7"
  ]
]

There are three ways to run the migrations: manually, executing dependencies, and using tags. The following code snippet shows the three options:

# 1) Manually.
$ drush migrate:import ud_migrations_paragraph_intro_image
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

# 2) Executing dependencies.
$ drush migrate:import ud_migrations_paragraph_intro_node --execute-dependencies

# 3) Using tags.
$ drush migrate:import --tag='UD Paragraphs Intro'

And that is one way to map paragraph reference fields. In the end, all you have to do is set the target_id and target_revision_id subfields. The process pipeline that gets you to that point can vary depending on how your paragraphs are configured. The following is a non-exhaustive list of things to consider when migrating paragraphs:

  • How many paragraph types can be referenced?
  • How many paragraph instances are being migrated? Is this a multivalue field?
  • Do paragraphs have translations?
  • Do paragraphs have revisions?

Do migrated paragraphs disappear upon node rollback?

Paragraphs migrations are affected by a particular behavior of revisioned entities. If the host entity is deleted, and the paragraphs do not have translations, the whole paragraph gets deleted. That means that deleting a node will cause the referenced paragraphs’ data to be removed. How does this affect your migration workflow? If the migration of the host entity is rolled back, the paragraphs will be removed, but the Migrate API will not know about it. In this example, if you run a migrate status command after rolling back the node migration, you will see that the paragraph migration indicates that there are no pending elements to process. The file migration for the images will report the same, but in that case, the images will remain on the system.

In any migration project, it is common that you do rollback operations to test new field mappings or fix errors. Thus, chances are very high that you will stumble upon this behavior. Thanks to Damien McKenna for helping me understand this behavior and tracking it to the rollback() method of the EntityReferenceRevisions destination plugin. So, what do you do to recover the deleted paragraphs? You have to rollback both migrations: node and paragraph. And then, you have to import the two again. The following snippet shows how to do it:

# 1) Rollback both migrations.
$ drush migrate:rollback ud_migrations_paragraph_intro_node
$ drush migrate:rollback ud_migrations_paragraph_intro_paragraph

# 2) Import both migrations again.
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

What did you learn in today’s blog post? Have you migrated paragraphs before? If so, what challenges have you found? Did you know paragraph reference fields require two subfields to be set? Did you know that deleting the host entity also deletes referenced paragraphs? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 15 2019

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop is the same; community members only have to attend one monthly workshop of their choice to complete the training. We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. From the official description of the workshop, it will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those that need assistance. The standard cost of the workshop is $350, but Otter Tech has worked with us to allow us to provide it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 14 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those that successfully complete the training will be (at their discretion) listed on Drupal.org (in the Drupal Community Workgroup section) as a means to prove that they have completed the training. We feel that moving forward, the Drupal community now has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again, leading the way in the open source community.

We are fully aware that the fact that the workshops will be presented in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.

Aug 15 2019

Drupal is moving into the future and adopting more and more innovative trends. No wonder that high tech engineering leaders trust Drupal and build their sites with it.

Drupal in high-tech: innovative companies + innovative CMS

They have found each other! Thinking about Drupal’s innovative spirit, we could mention plenty of its capabilities.

Great examples of high tech company websites built on Drupal

So let’s learn more about Drupal for high tech company websites by looking at the following examples.

Tesla

Electric cars, solar panels, and renewable energy are the three pillars of expertise of the incredible innovator — Tesla. The high tech giant has also chosen the right CMS — the Tesla’s site is built with Drupal.

Tesla website built with Drupal

Amazing web design with background videos and zooming effects allows customers to see the products almost like in real life and get inspired. Users can select among 30+ regions to see the website version in their native language.

Tesla website built with Drupal

While choosing the products, you can specify all parameters and see the changed picture without a page reload. There is also an online payment feature.

Tesla website built with Drupal

General Electric

Discussing Drupal for high tech industry leaders, we are glad to mention General Electric Company (GE). Its innovation builds, powers, moves, and cures the world. And Drupal powers their website where we can learn this and more about their activity.

The site’s users can select among such GE businesses as aviation, power, renewable energy, healthcare, lighting, and so on. The large and handy search bar on the main page also quickly takes them to where they wish. The stylish design of many sections features background videos.

GE website built with Drupal

As 130 countries are currently home to GE operations, users can select and visit their specific site version.

GE website built with Drupal

Iteris

Iteris Inc. produces innovative sensors and other solutions that predict the state of traffic, weather, soil, etc., to boost the agriculture and transportation industries. While Iteris products win high tech innovation awards, Drupal has won their trust as a CMS.

Iteris website built with Drupal

They chose Drupal 8 to revamp their website as part of the rebranding campaign. The site is great from frontend to backend — from the stylish design with the main page’s slider to complex permissions for particular user types.

Iteris website built with Drupal

Pfizer

The Pfizer multinational pharmaceutical corporation uses technology and innovative science for advanced patient care.

Pfizer website built with Drupal

Their website has 50+ country/region and language options. The content is presented in five key categories: “Your Health,” “Our Science,” “Our People,” “Our Purpose,” and “Our Products.”

Pfizer website built with Drupal

There is a strong search feature for Pfizer’s products, the option to find the clinical trial results, and much more.

What about your future website?

Hopefully, these examples of high tech company websites built on Drupal have inspired you. They are just a few from a million+ Drupal sites worldwide in various industries — e-commerce, education, business & finance, and so on. In addition to being innovative and powerful, the CMS is very versatile and flexible.

So contact our development team to discuss how Drupal can be helpful in your case or with your website idea!

Aug 15 2019

Once, the Drupal community had Mollom, and everything was good. It was a web service that would let you use an API to scan comments and other user-submitted content and it would let your site know whether it thought it was spam, or not, so it could safely publish the content. Or not. It was created by our very own Dries Buytaert and obviously had a Drupal module. It was the service of choice for Drupal sites struggling with comment spam. Unfortunately, Mollom no longer exists. But there is an alternative, from the WordPress world: Akismet.

Akismet is very similar to Mollom. It too is a web service that lets you use an API to judge if some submitted content is spam. The name is derived from the Turkish word kismet, which means fate or destiny. The A simply stands for automatic (or Automattic, the company behind both WordPress and Akismet). It was created for WordPress, and like Mollom was once for Drupal, it is the service of choice for WordPress sites. However, nothing is keeping any other software from making use of it, so when you download the Akismet module, you can use it with your Drupal site as well. Incidentally, the module is actually based on the code of the Mollom module.

There is no stable release of the module, currently. In fact, there is no release at all, not even an alpha release. Except for development releases, that is. This means that for now it might not be an option for you to deploy the module. Hopefully, this changes soon, although the last commit was over a year ago at the time of writing. A mitigating circumstance, though, is that Drupal.org itself seems to be using this module as well, albeit in the Drupal 7 version (this article will be discussing the D8 version).

Adding Akismet to your Drupal site

How to add the module will depend on your workflow. Either download a development release, or - when you use a composer-based workflow - add the module like so:

$ composer require drupal/akismet:^1.0@dev

Then, enable the module through the Extend admin screen (/admin/modules).
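
If you prefer the command line, enabling the module with Drush should also work, assuming Drush is available for your site:

$ drush en akismet -y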

Basic configuration

In order to configure the module, you will first need an Akismet API key. To get this, register at https://akismet.com. If you have a wordpress.com account (which you might have from either wordpress.com itself, or e.g. because you also have Gravatar) you can sign in with it.

Once you've obtained an API key, you can go to /admin/config/content/akismet/settings to configure Akismet.

The choice of what to do when Akismet is not available probably depends on how busy your site is. If it is very busy, and you do not get tons of spam, you probably want to accept all submissions. If you get a lot of spam and not very many actual contributions, you might want to block everything. If you both have a high traffic site and get a lot of spam, good luck. Of course, you can always look at a second line of defense, like the Honeypot module.

The second two settings - Show a link to the privacy policy and Enable testing mode - seem to be left-overs from the Mollom module, because neither of them seem to do anything. I created issues for both the privacy policy and the testing mode.

While Mollom had a requirement to either show a link to its terms of use on protected forms, or have your own terms of use that made it clear you make use of the Mollom service, Akismet doesn't seem to have such a requirement. (Of course, it is a good idea to add something about your use of Akismet in your terms of use or your privacy policy).

Testing with Akismet is possible by either passing "viagra-test-123" as the body of the message, or "akismet-guaranteed-spam@example.com" as the email address; these will always result in a spam classification. This seems to trigger the "unsure" scenario, even though that doesn't actually fully work, currently (see further). Forcing a ham (the opposite of spam - I didn't make this up) response is a bit trickier, because it would involve setting some parameters to the web service you do not have control over from the outside. Especially the testing mode might be a nice feature request for the Akismet module. Ideally, the module would work similarly to the Mollom module, where you could simply send in a comment with "ham", "unsure" and "spam" to test. As said, I created an issue to flesh out this functionality.

The advanced configuration hides settings to control whether to only log errors and warnings, or all Akismet messages, and a timeout for contacting the Akismet server. Especially in the beginning you might want to log all messages to monitor whether things are working as they should. 

Configuring which forms to protect

When having finished the basic configuration for the module, it is time to configure the forms you want to protect. This happens on the path /admin/config/content/akismet. Here, click the "Add form" button to start configuring a form.

When clicking the button, the module will ask you which form you wish to configure. Out of the box, the module will offer to protect node, comment, user, and contact forms. A hook is offered to add additional forms, although either a module will need to implement the hook itself, or it will have to be done for it. Here, I'm choosing to just protect the comment form, as I am suffering from quite a lot of comment spam. Once you've chosen a form, it will show the form fields you might want to pass to the Akismet web service for analysis.

You'll basically want to select anything that is directly controlled by the user. The obvious candidate is the body, but also the subject, user name, email address, website and hostname will contain clues whether something is spam or not.

Next, you get to select what happens when Akismet decides content is or might be spam. Akismet may report back that it is sure something is spam. If it says something is spam, but does not pass back this certainty flag, the Drupal module says Akismet is "unsure", which is actually a term that can be traced back to the Mollom roots of this module. You may tell the module it should then retain the submission for manual moderation, although this doesn't seem to work correctly, at the moment. I created an issue in the issue queue for that. What I'm seeing happening is that the post is discarded, just like when Akismet is sure.

Click Create Protected Akismet Form to save the protection configuration. You're now ready to catch some spam. You can look at the watchdog log (/admin/reports/dblog) to see the module reporting on what it is doing. Akismet itself also has a nice dashboard with some graphs showing you how much ham and spam it detected on your site.

Reporting spam to Akismet

Sometimes, Akismet might wrongly accept a piece of content that is actually spam. Or, when the moderation queue mechanism works properly, you may want to confirm to Akismet that yes, something is in fact spam. (You might also want to let Akismet know when it didn't correctly identify spam, i.e. report false positives. This is a feature of the web service, but it is currently not in the module; another feature request, it seems. I've submitted one in the issue queue.)

The module comes with action plugins for comments and nodes that let you unpublish the content and report it as spam to Akismet at the same time. You can add them to your comment views by changing their configuration at /admin/structure/views/view/comment (you will need the Views UI module enabled to be able to configure views). Unfortunately, this functionality also has an issue: the action doesn't actually unpublish. A patch is available in the linked issue, and of course the workaround is to first use the Akismet action and then the standard unpublish action.

Find the configure link for the Comment operations form. Click the link and, in the modal that opens, find the list of checkboxes for the available actions. Enable Report to Akismet and unpublish and save the configuration. Repeat for the Unapproved comments display. You will now have the action available in the actions dropdown on the comment overviews at /admin/content/comment.

Adding this to a node view will be similar, although chances are that if you have end users submitting nodes, you also have some dedicated views for your specific use case, such as forum posts.

Issues created as a result of this blog post

Please note that I did not intend to "set anyone to work" with creating these. I simply wanted to record some findings and ideas from writing up this blog post.

Aug 15 2019
Aug 15

Extensive nodes (or other types of entities) with many text fields, such as biographies, often remain unread because of the huge (and discouraging) amount of text.

The Drupal 8 "Field Group" module allows you to group fields and to present them in containers like vertical or horizontal tabs, accordions or just plain wrappers. It lets you group fields in the frontend of your site, and in the backend as well.

Keep reading to learn how to use this module!

Step #1. - Install the Required Module

According to the project page, Drupal versions above 8.3 require the 8.x-3.x branch of the module. You have to force Composer to download this specific version.

  • Open the Terminal application of your PC
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require "drupal/field_group:^3.0"

Type the following command

  • Click Extend
  • Scroll down until you find the Field Group module and check it
  • Click Install

Click Install

Step #2. Create the Content Type

For the purpose of this tutorial, you will create a content type with this structure.

  • Content type title: Vertebrate
    • Field Image
    • Text field (Introduction)
    • Field Image
    • Text field (First Part)
    • Field Image
    • Text field (Second Part)
    • Field Image
    • Text field (Conclusion)
  • Click Structure > Content types > Add Content type
  • Give the Content type a proper name, click Save and manage fields

Give the Content type a proper name and click Save

  • Click Add field
  • From the dropdown select Image and write a proper label
  • Click Save and Continue

Click Save and Continue

  • Leave the defaults and click Save field settings
  • Click Save settings
  • Click Add field
  • Select Text (formatted long) and give it a proper label
  • Click Save and continue

Click Save and continue

  • Click Save field settings
  • Click Save settings
  • Repeat the process 3 more times for the image and text fields
  • Delete the Body field

The "Manage fields" screen on your computer should look more or less like this:

The Manage fields screen on your computer should look more or less like this

Step #3. Group the fields

The "Field Group" module works on the display of the node and in the form display as well.

  • Click Manage form display > Add group

Click Manage form display and then click Add group

  • Select Fieldset
  • Click Save and continue

Click Save and continue

  • Click Create group
  • Repeat the process and create 3 more Fieldset groups

Repeat the process and create 3 more Fieldset groups

  • Rearrange the items according to the image using the cross handles
  • Click Save

Click Save

  • Click Manage display > Add group

Click Manage display and then click Add group

  • This time, select Tabs
  • Click Save and continue

Click Save and continue

  • Change the direction to Horizontal
  • Click Create Group

You can add an ID and CSS classes to the container to ease the styling process.

You can add an ID and CSS classes to the container to ease the styling process

  • Click Add group
  • Select Tab and give it a proper label
  • Click Save and continue

Click Save and continue

  • Select Default state OPEN (for the first tab, the other tabs will have Default state CLOSE)
  • Click Create group
  • Repeat the process with the other three tabs

Repeat the process with the other three tabs

  • Hide the image labels
  • Rearrange the items, take care of the indentation
  • Click Save

Click Save

Step #4. Create Content

  • Click Content > Add Content > Vertebrate

The form fields are now grouped in four fieldsets. This is very practical for editors.

  • Fill out the form, upload images and click Save

Fill out the form, upload images and click Save

Take a look at the node. All content is grouped in horizontal tabs. Users will definitely have a better reading experience.

Take a look at the node, all content is grouped in horizontal tabs

Users will definitely have a better reading experience

Please, leave us your comments below. Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Aug 14 2019
Aug 14

Today we will learn how to migrate addresses into Drupal. We are going to use the field provided by the Address module which depends on the third-party library commerceguys/addressing. When migrating addresses you need to be careful with the data that Drupal expects. The address components can change per country. The way to store those components also varies per country. These and other important considerations will be explained. Let’s get started.

Example address field process mapping.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD address whose machine name is ud_migrations_address. The migration to execute is udm_address. Notice that this migration writes to a content type called UD Address and one field: field_ud_address. This content type and field will be created when the module is installed. They will also be removed when the module is uninstalled. The demo module itself depends on the following modules: address and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created and deleted.
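
For reference, the enforced dependency looks roughly like this inside a config file. The file name and values below are an illustrative sketch following the demo module's naming, not a verbatim copy of its code:

# config/install/node.type.ud_address.yml (illustrative sketch)
langcode: en
status: true
dependencies:
  enforced:
    module:
      - ud_migrations_address
name: 'UD Address'
type: ud_address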

The recommended way to install the Address module is using composer: composer require drupal/address. This will grab the Drupal module and the commerceguys/addressing library that it depends on. If your Drupal site is not composer-based, an alternative is to use the Ludwig module. Read this article if you want to learn more about this option. In the example, it is assumed that the module and its dependency were obtained via composer. Also, keep an eye on the Composer Support in Core Initiative as they make progress.

Source and destination sections

The example will migrate three addresses from the following countries: Nicaragua, Germany, and the United States of America (USA). This makes it possible to show how different countries expect different address data. As usual, for any migration you need to understand the source. The following code snippet shows how the source and destination sections are configured:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'Michele'
      last_name: 'Metts'
      company: 'Agaric LLC'
      city: 'Boston'
      state: 'MA'
      zip: '02111'
      country: 'US'
    - unique_id: 2
      first_name: 'Stefan'
      last_name: 'Freudenberg'
      company: 'Agaric GmbH'
      city: 'Hamburg'
      state: ''
      zip: '21073'
      country: 'DE'
    - unique_id: 3
      first_name: 'Benjamin'
      last_name: 'Melançon'
      company: 'Agaric SA'
      city: 'Managua'
      state: 'Managua'
      zip: ''
      country: 'NI'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_address

Note that not every address component is set for all addresses. For example, the Nicaraguan address does not contain a ZIP code, and the German address does not contain a state. Also, the Nicaraguan state is fully spelled out: Managua. On the contrary, the USA state is a two-letter abbreviation: MA for Massachusetts. One more thing that might not be apparent is that the USA ZIP code belongs to the state of Massachusetts. All of this is important because the module validates addresses. The destination is the custom ud_address content type created by the module.

Available subfields

The Address field has 13 subfields available. They can be found in the schema() method of the AddressItem class. Fields are not required to have a one-to-one mapping between their schema and the form widgets used for entering content. This is particularly true for addresses because input elements, labels, and validations change dynamically based on the selected country. The following is a reference list of all subfields for addresses:

  1. langcode for language code.
  2. country_code for country.
  3. administrative_area for administrative area (e.g., state or province).
  4. locality for locality (e.g. city).
  5. dependent_locality for dependent locality (e.g. neighbourhood).
  6. postal_code for postal or ZIP code.
  7. sorting_code for sorting code.
  8. address_line1 for address line 1.
  9. address_line2 for address line 2.
  10. organization for company.
  11. given_name for first name.
  12. additional_name for middle name.
  13. family_name for last name.

Properly describing an address is not trivial. For example, there are discussions to add a third address line component. Check this issue if you need this functionality or would like to participate in the discussion.

Address subfield mappings

In the example, only 9 out of the 13 subfields will be mapped. The following code snippet shows how to do the processing of the address field:

field_ud_address/given_name: first_name
field_ud_address/family_name: last_name
field_ud_address/organization: company
field_ud_address/address_line1:
  plugin: default_value
  default_value: 'It is a secret ;)'
field_ud_address/address_line2:
  plugin: default_value
  default_value: 'Do not tell anyone :)'
field_ud_address/locality: city
field_ud_address/administrative_area: state
field_ud_address/postal_code: zip
field_ud_address/country_code: country

The mapping is relatively simple. You specify a value for each subfield. The tricky part is knowing the name of the subfield and the value to store in it. The format of an address component can change among countries. The easiest way to see which components are expected for each country is to create a node for a content type that has an address field. With this example, you can go to /node/add/ud_address and try it yourself. For simplicity’s sake, let’s consider only 3 countries:

  • For USA, city, state, and ZIP code are all required. And for state, you have a specific list from which you need to select.
  • For Germany, the company is moved above first and last name. The ZIP code label changes to Postal code and it is required. The city is also required. It is not possible to set a state.
  • For Nicaragua, the Postal code is optional. The State label changes to Department. It is required and offers a predefined list to choose from. The city is also required.

Pay very close attention. The available subfields will depend on the country. Also, the form labels change per country or language settings. They do not necessarily match the subfield names. Moreover, the values that you see on the screen might not match what is stored in the database. For example, a Nicaraguan address will store the full department name like Managua. On the other hand, a USA address will only store a two-letter code for the state like MA for Massachusetts.

Something else that is not apparent even from the user interface is data validation. For example, let’s consider that you have a USA address and select Massachusetts as the state. Entering the ZIP code 55111 will produce the following error: Zip code field is not in the right format. At first glance, the format is correct: a five-digit code. The real problem is that the Address module is validating whether that ZIP code is valid for the selected state. It is not valid for Massachusetts. 55111 is a ZIP code for the state of Minnesota, which makes the validation fail. Unfortunately, the error message does not indicate that. Nine-digit ZIP codes are accepted as long as they belong to the state that is selected.

Finding expected values

Values for the same subfield can vary per country. How can you find out which value to use? There are a few ways, but they all require varying levels of technical knowledge or access to resources:

  • You can inspect the source code of the address field widget. When the country and state components are rendered as select input fields (dropdowns), you can have a look at the value attribute for the option that you want to select. This will contain the two-letter code for countries, the two-letter abbreviations for USA states, and the fully spelled string for Nicaraguan departments.
  • You can use the Devel module. Create a node containing an address. Then use the devel tab of the node to inspect how the values are stored. It is not recommended to have the devel module in a production site. In fact, do not deploy the code even if the module is not enabled. This approach should only be used in a local development environment. Make sure no module or configuration is committed to the repo nor deployed.
  • You can inspect the database. Look for the records in a table named node__field_[field_machine_name], if migrating nodes. First create some example nodes via the user interface and then query the table. You will see how Drupal stores the values in the database, as shown in the sketch below.
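
For instance, a query along these lines would show the stored subfield values. The table and column names follow Drupal's field storage naming convention and assume this example's field_ud_address field on nodes:

-- Illustrative query for the field_ud_address field table.
SELECT
  entity_id,
  field_ud_address_country_code,
  field_ud_address_administrative_area,
  field_ud_address_locality,
  field_ud_address_postal_code
FROM node__field_ud_address;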

If you know a better way, please share it in the comments.

The commerceguys addressing library

With version 8 came many changes in the way Drupal is developed. There is now an intentional effort to integrate with the greater PHP ecosystem. This involves using existing libraries and frameworks, like Symfony, but also making code written for Drupal available as external libraries that can be used by other projects. commerceguys/addressing is one example of such an external library, and the Address module makes use of it.

Explaining how the library works or where it fetches its data is beyond the scope of this article. Refer to the library documentation for more details on the topic. We are only going to point out some things that are relevant for the migration. For example, the ZIP code validation happens in the validatePostalCode() method of the AddressFormatConstraintValidator class. There is no need to know this for a migration project, but the key thing to remember is that the migration can be affected by third-party libraries outside of Drupal core or contributed modules. Another example is the value for the state subfield. The Address module expects a subdivision as listed in one of the files in the resources/subdivision directory.

Does the validation really affect the migration? We have already mentioned that the Migrate API bypasses Form API validations, and that is true for address fields as well. You can migrate a USA address with state Florida and ZIP code 55111. Both are invalid because you need to use the two-letter state code FL and a valid ZIP code within the state. Notwithstanding, the migration will not fail in this case. In fact, if you visit the migrated node, you will see that Drupal happily shows the address with the data that you entered. The problem arrives when you need to use the address. If you try to edit the node, you will see that the state is not preselected. And if you try to save the node after selecting Florida, you will get the validation error for the ZIP code.

These validation issues might be hard to track because no error will be thrown by the migration. The recommendation is to migrate a sample combination of countries and address components. Then, manually check whether editing a node shows the migrated data for all the subfields. Also check that the address passes Form API validations upon saving. This manual testing can save you a lot of time and money down the road. After all, if you have an ecommerce site, you do not want to be shipping your products to wrong or invalid addresses. ;-)

Technical note: The commerceguys/addressing library follows ISO standards. Particularly, ISO 3166 for country and state codes. It also uses CLDR and Google's address data. The dataset is stored as part of the library’s code in JSON format.

Migrating countries and zone fields

The Address module offers two more field types: Country and Zone. Both have only one subfield, value, which is selected by default. For country, you store the two-letter country code. For zone, you store a serialized version of a Zone object.
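
A minimal process mapping for these field types could look like the following sketch. The field names field_ud_country and field_ud_zone and the source column names are hypothetical; they are not part of the example module:

# Both field types expose a single 'value' property.
field_ud_country/value: country
# The source value here would need to be a serialized Zone object.
field_ud_zone/value: zone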

What did you learn in today’s blog post? Have you migrated address before? Did you know the full list of subcomponents available? Did you know that data expectations change per country? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 14 2019
Aug 14

It’s easy and enjoyable to create marketing campaigns, drive leads, and tell your brand’s story to the world if your website is on the right CMS. Drupal 8’s benefits will definitely impress any marketer. So let’s take a closer look at the greatness of Drupal 8 for marketers, see what makes it so valuable, and name a few useful modules.

The benefits of Drupal 8 for marketers

Drupal 8 has cutting-edge marketing features built into the core and a myriad of contributed modules helpful in every aspect of your successful marketing. 

Easy integration with marketing automation tools 

Marketers love the various marketing automation and CRM tools. They effectively streamline their workflows, as well as give them valuable analytics. 

Thanks to its built-in support for RESTful web services, Drupal 8 integrates with marketing tools - or any others - at the snap of a finger.

Drupal 8 modules to integrate marketing tools


Multilingual Drupal 8 campaigns

Marketers can hold more powerful and convincing campaigns in their users’ native languages. Without a doubt, Drupal 8 is a great choice for multilingual websites.

A hundred languages are supported out of the box, including those with the RTL text direction. The four built-in multilingual modules make it easy to add languages to your site and translate everything — interface, configuration, and content. Most interface translations are already prepared by the community.

Admin interfaces to add translations are very handy and editor-friendly. As D8’s third-party integration capabilities are among its key benefits, it’s also easy to integrate any translation software.

Drupal 8 modules for translation tool integration


There are also contributed Drupal 8 modules providing multilingual features for every aspect of a multilingual website.

Some Drupal 8 multilingual modules


Multilingual Drupal 8 website example

 

Multi-channel marketing in Drupal 8

Marketers can engage customers with their Drupal 8 campaigns on their preferred devices. D8 websites are ready to share their data with any web or mobile application.

This is thanks to the API-first architecture — one of D8’s key benefits. The core now has 5 robust modules that effectively expose your Drupal entities as a RESTful web API. In the Drupal 8.7 release, the JSON:API module joined this “team” to make multi-channel experiences even more ambitious.

Marketers will appreciate the “create once, publish everywhere” philosophy adopted by the API-first Drupal 8. It significantly increases their reach with minimum publishing expenses. 

Quick and handy content creation with advanced features

Content is the heart and soul of marketing campaigns. A new level of its creation is among the greatest benefits prepared by Drupal 8 for marketers. Editorial workflows are both very user-friendly and advanced in their capabilities. 

At least a few of Drupal 8 content creation benefits

  • You can make edits on the fly with the inline editing feature.
  • The handy drag-and-drop CKEditor is Drupal 8’s default WYSIWYG editor.
  • Drupal 8 has a Media Library that lets you save and reuse videos, audio, images, and other media.
  • You can add media directly into articles or news, including remote videos from YouTube or Vimeo via the oEmbed feature in Drupal 8.
  • Configurable editorial workflows are available to your marketing team with the new Content Moderation and Workflows modules.
  • The Views in D8 core lets you add highly configurable content collections to your pages. 
  • It’s easy to create beautiful slideshows and carousels with the help of contributed tools such as Views Slideshow, Juicebox, jCarousel, Owl Carousel, and so on.

Media Library in Drupal 8

Smart content personalization

Individualized, or targeted content delivery helps marketers reach their audience. You just give the right offer at the right moment to the right person. Drupal 8 offers awesome opportunities to marketers in this field. 

Some great Drupal 8 personalization modules

  • Smart Content allows you to offer a different display based on browser conditions.
  • Smart Content Segments helps you manage groups of these conditions in a handy UI.
  • Acquia Lift Connector unites your content and customer in a drag-and-drop interface so marketers can effectively manage your campaigns in real-time using behavioral factors.
  • Cloudwords for Multilingual Drupal is your assistant in multilingual campaigns with the features of workflow automation and project management.

Acquia Lift Connector Drupal 8 personalization module

Social media campaigns

Marketers know for sure that social networks are amazing campaign boosters. Your social media pages and your website can be an invincible marketing tandem. With D8, integrating them is a breeze.

Marketers will be amazed by the myriad of contributed Drupal 8 modules that help them to:

  • auto-publish content to social networks
  • invite users to join your social media pages
  • add their icons to the website
  • add sharing buttons
  • analyze the statistics 
  • embed feeds 

There are many Drupal 8 social integration modules to choose from.

Social integration modules in Drupal 8

Give your marketing a boost with Drupal 8!

These have been just a few glances at the greatness of Drupal 8 for marketers. Contact our Drupal team to discuss the other ways D8 can help your marketing campaigns and your business.


Our Drupal developers are ready to:

  • build you a Drupal 8 website from scratch
  • migrate your Drupal 7 website to Drupal 8
  • update your minor Drupal 8 version (since some features mentioned above only start from Drupal 8.7)
  • install the needed modules and configure them properly based on your needs
  • customize the existing solutions to add the desired marketing features to your site

Enjoy the benefits of Drupal 8 for marketers!

Aug 13 2019
Aug 13

A great place to start a conversation about decoupled Drupal is evaluating Contenta CMS. While Contenta CMS is not necessarily something you must have to build a decoupled Drupal 8 site, it is designed for exactly that. You can relatively easily create a vanilla Drupal site that supports a decoupled application, but Contenta has already solved many of the related problems.

So what is Contenta CMS exactly? 

Contenta CMS is an API-first Drupal 8 distribution intended for building fully decoupled Drupal sites. It was built by the developers of many of the modules you may end up using as you build your decoupled site, following their defined best practices to address typical needs and challenges. It is meant to be 100% decoupled, and only uses Drupal 8 as an administrative backend with no user-facing HTML content. This is something you could change if you desired to do so.

At first, it seemed like a magical, mysterious system. However, Contenta uses core and well-known contributed modules, with only a single custom module that provides a few customizations. With only a few small tweaks, you could conceivably maintain a Contenta CMS base system for your decoupled project just as you would any Drupal 8 project.

Features

The main difference between Contenta CMS and a site you would conceivably build yourself lies in the only custom module the distribution comes with, Contenta Enhancements. This module primarily does a few things:

  • creates a new, streamlined administration toolbar for a decoupled site,
  • provides some additional local tasks menu items giving easy access to necessary configuration,
  • alters the node view to display JSON instead of HTML, allowing for easy viewing of node structure, and
  • disables certain routes not needed or necessarily useful for a decoupled site.

Because this is a Drupal module, you’re perfectly fine installing it in your own clean Drupal 8 install to try to replicate Contenta along with the contributed modules used.

Key Modules

The following modules make up the underlying Contenta architecture.

Consumers

Consumers is a utility module that allows other modules to define users of the API and then further allows administrators to manage access control for each consumer. 

Decoupled Router

As the module description states, "Decoupled Router provides an endpoint that will help you resolve path aliases and redirects for entity related routes. This is especially useful for decoupled applications where the editor wants full control over the paths available in the consumer application (for instance a React app)."

To further understand the purpose of Decoupled Router, read the great post (and series) by the module creator.

JSON:API (now part of Drupal 8.7 Core)

JSON:API is a plug-and-play module that fully implements the JSON:API Specification and, without any configuration, exposes your Drupal entities via a REST API.
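
For example, once the module is enabled, a request like the following returns article nodes as a JSON:API document. The URL assumes the default /jsonapi path prefix and an article content type:

# Fetch article nodes as a JSON:API document.
curl -H 'Accept: application/vnd.api+json' \
  'https://example.com/jsonapi/node/article'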

JSON:API Extras

JSON:API Extras extends the JSON:API module by allowing customizations and management of API endpoints. The JSON:API module does not offer any configuration options.

JSON-RPC

While JSON:API exposes Drupal entities via a REST API, JSON-RPC exposes Drupal administrative actions that would otherwise not be easily accessible or handled using the REST API.

OpenAPI

OpenAPI is a utility module that makes your web services discoverable using the OpenAPI standard, and thus neatly documented for testing and development, when used together with OpenAPI UI and something like OpenAPI UI Redoc or another similar module.

OpenAPI UI

OpenAPI UI provides a user interface within Drupal for displaying an OpenAPI standards-based API explorer for your Drupal site.

OpenAPI UI Redoc

This module provides a plugin that implements Redoc for use with OpenAPI UI, displayed from within Drupal. It is akin to GraphiQL in the GraphQL ecosystem.

OpenAPI UI using Redoc in Contenta

SimpleOauth

SimpleOauth is an implementation of the OAuth 2.0 Authorization Framework RFC for Drupal, allowing you to use OAuth 2.0 for user authentication.

Subrequests

The Subrequests module allows aggregated requests and responses to be sent to Drupal. For example, instead of sending a request for a node and then another for the node’s associated taxonomy terms, it allows you to send a single request that returns all the data you need. The goal is to make decoupled applications more efficient by reducing requests. An excellent article on the motivation for this module can be found on Lullabot’s blog.
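
As a rough sketch, a blueprint like the one below bundles two reads into a single POST to the module's endpoint. Treat the endpoint path and payload keys as assumptions to verify against the module's documentation; the URIs are placeholders:

POST /subrequests?_format=json

[
  {
    "requestId": "articles",
    "uri": "/jsonapi/node/article",
    "action": "view"
  },
  {
    "requestId": "tags",
    "uri": "/jsonapi/taxonomy_term/tags",
    "action": "view"
  }
]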

Installing Contenta

To get started, head over to the Contenta install page. After installing, run drush runserver 127.0.0.1:8888 to start a local server, then visit 127.0.0.1:8888 to view the site.

preview of contenta home screen

Upon logging in, you will find a familiar interface with some slightly different links available. All but the “API” link lead to common Drupal administration pages, and the “Advanced” link exposes the default Drupal administration toolbar.

preview of contenta admin screen

You should be now ready to begin building your decoupled site. In the following posts in this series, I will explore doing just that.

Conclusion

By adding and configuring the above modules on a new or existing Drupal 8 site (I can’t stress enough that you must install modules using Composer) and configuring CORS (if needed), you should be ready to begin developing your decoupled application using the technology of your choosing. At this point, I wish you happy trails!
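
For reference, CORS is configured in your site's services.yml. A minimal sketch, assuming a hypothetical frontend at https://app.example.com, might look like this:

# sites/default/services.yml
parameters:
  cors.config:
    enabled: true
    allowedOrigins: ['https://app.example.com']
    allowedMethods: ['GET', 'POST', 'PATCH', 'DELETE', 'OPTIONS']
    allowedHeaders: ['Content-Type', 'Authorization']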

P.S. The folks that brought you Contenta CMS also created ContentaJS, “a nodejs server that proxies to Contenta CMS and holds custom code”. It’s certainly not necessary to begin your decoupled journey, but it does provide some handy features that I think could be extremely useful.
 

Aug 13 2019
Aug 13

Our Family Wizard Home page

Website Redesign: A Professional Face for a Market Leader

When parents divorce or separate, the child often becomes the unwilling intermediary for communication. Our Family Wizard (OFW) is a mobile application for co-parenting exes that facilitates and tracks communication, helps coordinate child duties and stores important information. The app includes a shared calendar, messaging component (with a “read” stamp), school and medical information and an expense log. 

People are usually referred to the site by judges, attorneys and mediators during separation or divorce proceedings. Often a judge or mediator requires that the parents use the application to document any child-related communication. 

Our Family Wizard Mother and Child

Our Family Wizard is the market leader in their vertical, but they felt their website did not exhibit the gravitas, credibility and professionalism that it should. TEN7 had been working with Avirat (the parent company of OFW) since 2015, supporting the existing OFW site, and they hired us to completely make over their website in both content and form. We went through a full discovery process to ensure we thoroughly understood the needs and desires of the client before we launched into design and content strategy for the new site.

The site redesign focused on the following goals:

Focus the Site to Drive Signups

The entire purpose of the ourfamilywizard.com website is to get the site visitor to sign up for the service and download the app. The new site has been streamlined to focus on two clearly defined calls to action: learn more and get started/sign up. The new design also makes use of a persistent “sticky bar” on the homepage with these two prompts. 

Redesign the Site to Make it More Modern and Professional 

In describing the desired site look and feel, the client mentioned pharmaceutical sites. Pharma sites tend to have a very clean design, large images and clear messaging. But more than that, pharma sites serve two audiences: professionals (doctors) and users (patients). Our Family Wizard also needs to market to two distinct user groups with distinct needs. 

Whereas the old site had a very “default site” feel, the new site feels more alive. Our designer Eva Lovisa paid attention to details big and small—from adding bigger photos and hero images to creating a brand library of icons. She created a bold color scheme with a lot of blues, which served two purposes: blues tend to be calming (for the stressed-out parents) and dark navy blues tend to have a regal, legal feel (for the professional visitors). She implemented a varied text style palette, choosing more distinctive fonts and creating things like quote and bullet styles to add more visual interest to what could have become a boring text-heavy site.

We gave a lot of consideration to the home page flow. Whereas the old home page felt like a mishmash of information, the new homepage is composed of clean, modular “stripes” of information. This solution is more than just good-looking; the stripes are created using the Paragraphs module in Drupal. Paragraphs functionality puts power in the hands of page editors to create paragraph types for common scenarios (image to the left of text, pull quotes, slideshows, etc.) that can be easily moved and edited.

“Instead of one giant HTML palette to work from, [with Paragraphs] we can work in pre-made chunks that look good. We have a lot more flexibility. We have a few content writers on staff, and the pages they’ve been able to create—the whole look and feel compared to the way they were doing it before—the difference is night and day.”
—Jai Kissoon, Avirat CEO

The mobile app was being redesigned at the same time as the website, so both will have the same identity. However, the website was also designed to be responsive, so it looks great on mobile devices.

Consolidate Information and Make It More Searchable

As part of the redesign, we migrated the Drupal 7 site to Drupal 8. A site redesign or migration is a great time to take stock of the organization and usability of your content items. Although this is primarily a signup site, it also houses a library of helpful content and resources for co-parents. With a long-running site like OFW, there can be an overabundance of categories, taxonomies and tags for the hundreds of pieces of content, and this hinders content usefulness and searchability. In addition, content was scattered in multiple areas around the site. 

We thought about which tags and topics we wanted, which URL structures we needed to change, and made decisions about single or multiple tagging. Each blog article had copies in multiple languages, so we had to properly tag and move those over.

We consolidated the content categories down into a manageable set, which we hope will last them for many years. We sorted and grouped related information to make it more easily navigable. For example, content about using the app and site, regional resource directories and other evergreen content was moved to a new Knowledge Center, found only on the website. Additional articles are found in the site blog. 

“Working with TEN7 has been a good fit. Things have been straightforward and easy. Les [Lim, Senior Developer] is a wizard in his own right. It’s his attitude, more than anything—his ability to help us get to resolution we want, which is sometimes in conflict with what’s easiest. We’re very happy.”
—Jai Kissoon, Avirat CEO 

The new site launched in May of 2018.

Aug 13 2019
Aug 13

Back in April, BigCommerce, in partnership with Acro Media, announced the release of the BigCommerce for Drupal module. This module effectively bridges the gap between the BigCommerce SaaS ecommerce platform and the Drupal open source content management system. It allows Drupal to be used as the frontend customer experience engine for a headless BigCommerce ecommerce store.

For BigCommerce, this integration provides a new and exciting way to utilize their platform for creating innovative, content-rich commerce experiences that were not possible via BigCommerce alone.

For Drupal, this integration extends the options its users and site-builders have for adding ecommerce functionality into a Drupal site. The flexibility of Drupal combined with the stability and ease-of-use of BigCommerce opens up new possibilities for Drupal that didn’t previously exist.

Since the announcement, BigCommerce and Acro Media have continued to educate and promote this exciting new headless commerce option. A new post on the BigCommerce blog published last week, titled Leverage Headless Commerce To Transform Your User Experience with Drupal Ecommerce (link below), is a recent addition to this information campaign. The BigCommerce team are experts in what they do, and Acro Media is an expert in open source integrations and Drupal. They asked if we could provide an introduction for their readers to really explain what Drupal is and where it fits into the headless commerce mix. This, of course, was an opportunity not to be missed, and so our teams buckled down together once again to provide readers with the best information possible.

So without further explanation, click the link below to learn how you can leverage headless commerce to transform your user experience with Drupal.

Read the full post on the BigCommerce blog

Additional resources:

Aug 13 2019
Aug 13

When the open-source Accelerated Mobile Pages (AMP) project was launched in October 2015, Google AMP was often compared to Facebook's Instant Articles. Nonetheless, both tech giants share a common goal – to make web pages load faster. While AMP pages can be reached with a web URL, Facebook’s Instant Articles aimed only at easing the pain for their app users. Teaming up with some powerful launch partners in the publishing and technology sectors, Google AMP aimed to impact the future of content distribution on mobile devices.

Fast forward to today, and Google AMP is one of the hottest things on the internet. With over 25 million website domains that have published over 4 billion AMP pages, it did not take long for the project to become a huge success. Comprising two main features - speed and support for monetization of objects - AMP's implications are far-reaching for enterprise businesses, marketers, ecommerce and every other big and small organization. With great features, and given its origin as a Google initiative, it is no surprise that AMP pages get featured more prominently in Google SERPs.

What is AMP?

With the rapid surge in mobile users, merely providing a website-like user experience does not cut it. Today’s mobile users come with a smaller attention span and varied internet speeds. Businesses can cater to each of these challenges with a fast-loading, lightweight and app-like website built with Google AMP.

AMP is an open-source framework that simplifies the HTML, streamlines CSS rules, restricts use of Javascript (can use AMP’s component library instead) and delivers pages via a Google AMP cache (a proxy-based Content Delivery Network).
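
To give an idea of what that looks like, here is a trimmed sketch of an AMP HTML page. The spec additionally requires the AMP boilerplate style block, omitted here for brevity:

<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <!-- The AMP runtime, served from the AMP CDN. -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <link rel="canonical" href="https://example.com/article.html">
  </head>
  <body>
    <h1>Hello, AMP</h1>
  </body>
</html>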

Why AMP?

Impacting the technical architecture of digital assets, Google's open source initiative aims to provide streamlined web pages to mobile browsers and other apps.

It is Fast, like Really Fast

Google AMP pages load about twice as fast as comparable mobile pages, with latency as little as one-tenth. Because AMP is intended to provide the fastest experience for mobile users, customers can access content faster, and they are more likely to stay on the page to make a purchase or enquire about your service, because they know it won't take long.

An Organic Boost

Eligibility for the AMP carousel that sits above the other search results on the Google SERP can produce a substantial increase in organic results and traffic - a major boost for the visibility of an organization. Though not responsible for increasing page authority or domain authority, Google AMP plays a key role in sending far more traffic your way.

ROI

The fact that AMP leverages, rather than disrupts, the existing web infrastructure of a website makes the cost of adopting AMP much lower than competing technologies. In return, Google AMP enables a better user experience, which translates to better conversion rates on mobile devices.

Drupal & AMP

With better user engagement, higher dwell time and easy navigation between content, businesses are bound to drive more traffic with AMP-friendly pages and increase their revenue. The AMP module is especially useful for marketers, as it is a great addition to their Drupal SEO efforts.

AMP produces HTML that makes the web a faster place. Implementing the AMP module in Drupal is really simple. Just download, enable and configure!

Before you begin with the integration of AMP with Drupal, you need the AMP module. It mainly handles the conversion of regular Drupal HTML pages to AMP-compliant pages, and it comes with two main companion components:

AMP Theme: I'm sure you have come across AMP HTML and its standards - the ones responsible for making your content look effective and perform well on mobile. The Drupal AMP theme produces the markup required by these standards for websites looking to perform well in the mobile world. The AMP theme also allows the creation of custom-made AMP pages.

AMP PHP Library: The Drupal AMP PHP Library handles the final corrections, converting raw HTML into AMP HTML. The theme side ships with the AMP base theme and the ExAMPle sub-theme; users can create their own AMP sub-theme from scratch, or modify the default ExAMPle sub-theme for their specific requirements.

How to setup AMP with Drupal?

Before you integrate AMP with Drupal, you need to understand that AMP does not replace your entire website. Instead, at its essence, the AMP module provides a view mode for content types, which is displayed when the browser asks for an AMP version.

Download the AMP Module

With your local prepped up, type the following terminal command:

drush dl amp amptheme composer_manager

This command will download the AMP module, the AMP theme and the Composer Manager module (in case you do not have Composer Manager already).

If you have been a user of Drupal 8, you are probably familiar with Composer and its function as a package manager for PHP that installs dependencies for a project. Composer is used to install the PHP library that converts raw HTML into AMP HTML, and it will help get that library working with Drupal.

However, as the AMP module does not explicitly require Composer Manager for a dependency, alternate workflows can make use of module Composer files without using Composer Manager.

Next, enable the items that are required to get started:

drush en composer_manager amptheme ampsubtheme_example

Before enabling the AMP module itself, an AMP sub-theme needs to be enabled. The default configuration for the AMP module sets the AMP Theme to ‘ExAMPle subtheme.’

How to Enable AMP Module?

The AMP module for Drupal can be enabled using Drush. Once the module is enabled, Composer Manager will take care of downloading the other AMP libraries and their dependencies.

drush en amp

Configuration

Once everything is installed and enabled, AMP needs to be configured in the web interface before Drupal AMP pages can be displayed. First up, you need to decide which content types should have an AMP version; you might not need it for all of them. Enable a particular content type by clicking the “Enable AMP in Custom Display Settings” link. On the next page, open the “Custom Display Settings” fieldset, check the AMP box, then click Save.


Setting an AMP Theme

Once the AMP module and content type are configured, it is time to select a theme for AMP pages and configure it. The view modes and field formatters of the Drupal AMP module take care of the main content of the page. The Drupal AMP theme, on the other hand, changes the markup outside the main content area of the page.

The Drupal AMP themes also enable you to create custom styles for your AMP pages. On the main AMP config page, make sure that the AMP theme setting is set to the ExAMPle Subtheme or to the custom AMP sub-theme that you created.


Aug 13 2019
Aug 13

We need to nudge governments to start funding and fixing accessibility issues in the Open Source projects that are being used to build digital experiences. Most governments are required by law to build accessible websites and applications. Drupal’s commitment to accessibility is why Drupal is used by a lot of governments to create ambitious digital experiences.

Governments have complex budgeting systems and policies, which can make it difficult for them to contribute to Open Source. At the same time, there are many consulting agencies specializing in Drupal for government, and maybe these organizations need to consider fixing accessibility issues on behalf of their clients.

Accessibility is one of the key selling points of Drupal to governments.

If an agency started contributing, funding, and fixing accessibility issues in Drupal core and Open Source, they’d be showing their government clients that they are indeed experts who understand the importance of getting involved in the Open Source community.

So I have started this blog post with a direct ask for governments to pay to fix accessibility issues without a full explanation as to why. It helps to step back and look at the bigger context: “Why should governments fix accessibility issues in Drupal Core?”

Governments are using Drupal

This summer’s DrupalGovCon in Washington, DC was the largest Drupal event on the East Coast of the United States. The conference was completely free to attend with 1500 registrants. There were dozens of sponsors promoting their Drupal expertise and services. My presentation, Webform for Government, included a section about accessibility. There were also four sessions dedicated to accessibility.

Besides presenting at DrupalGovCon, I also volunteered a few hours to help with Webform-related contribution sprints focusing on improving Drupal’s Form API’s accessibility. I felt that having new contributors help fix accessibility issues would be a rewarding first experience into the world of contributing to Drupal core.

Fixing accessibility issues in Drupal

At the Webform contribution sprint, Lera Atwater (leraa), a first-time contributor, started reviewing Issue #2951317: FAPI #radios form elements do not have an accessible required attribute. She reviewed the most recent patch and provided some feedback. I was able to re-roll the patch a few times to address some new remaining issues and add some basic test coverage. The fact that Lera and I focused on one issue helped move it forward. Still, our solution needs to be reviewed by Andrew Macpherson (andrewmacpherson) or Mike Gifford (mgifford), Drupal’s accessibility maintainers and then Alex Bronstein (effulgentsia) or Tim Plunkett (tim.plunkett), Drupal’s Form API maintainers. Getting an accessibility-related patch committed is a lot of communication and work.

This experience made me ask…

How can we streamline the process for getting accessibility patches committed to Drupal core?

Streamlining the process of fixing accessibility issues

The short answer is we can’t streamline the process of documenting, fixing, and reviewing accessibility issues. These steps are required and needed to ensure the quality of code being committed to Drupal core. What we might be able to do is strive for more efficiency in how we manage accessibility-related issues and the steps required to fix them.

While working on this one accessibility issue for free in my spare time, I began to wonder...

What would happen if a paid developer collected and worked on multiple accessibility issues for a few weeks and managed to move these issues forward towards a resolution collectively?

First off, I can tell from experience that most Form API and accessibility-related issues in Drupal core, as well as other Open Source projects, are very similar. Most accessibility issues have something to do with fixing or adding ARIA (Accessible Rich Internet Applications) attributes or keyboard access. A developer should be able to familiarize themselves with the overarching accessibility requirements and fixes needed to address the multiple accessibility issues in Drupal core.
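
As a simple illustration (not taken from the actual core patch), this kind of fix often boils down to a one-attribute change:

<!-- Before: the required state may not be announced by assistive technology. -->
<input type="radio" name="color" required>

<!-- After: aria-required exposes the state to screen readers. -->
<input type="radio" name="color" required aria-required="true">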

Second, developers tend to work better on focused tasks. Most developers contribute to Open Source in their spare time completing small tasks. Paying a developer to commit and focus on fixing accessibility issues as part of their job is going to yield better results.

Finally, having multiple tickets queued for core maintainers is a more efficient use of Andrew, Mike, Alex, and Tim’s time. Blocks of similar tickets can pass through the core review process more quickly. Also, everyone shares the reward of saying we fixed these accessibility issues.

Governments should pay to fix accessibility issues

I’d like to nudge people or organizations to get involved. In the last month’s Webform 8.x-5.3 release, I settled on the below direct message within the Webform module’s UI.

My conclusion is that we need to directly ask people to get involved, and directly ask organizations to contribute financially (a.k.a. give money). I am admittedly uncomfortable asking people for money because I think to myself, “What if they say no?”

No one should say no to fixing accessibility issues.

The Drupal community cares about accessibility, governments have to care about accessibility, and governments rely on Drupal. Governments should pay to fix accessibility issues.

Talking Drupal and the U.S. Government

Before DrupalGovCon, the Talking Drupal podcast spoke with Abby Bowman about Drupal for Gov. They discussed the U.S. government’s usage of and contribution to Drupal and Open Source. From Abby, I learned two interesting things about the U.S. government’s commitment to Open Source.

First, the U.S. government contributes code to Open Source via Code.gov. Second, the U.S. government requires all websites to be accessible, but there is no central department or team ensuring that all government websites are accessible. All of the U.S. government’s websites would immediately benefit from having a team dedicated to finding and fixing accessibility issues.

If you want to hear about my experience at DrupalGovCon, check out Talking Drupal #221 - Live from GovCon.

How can government and organizations start paying to fix accessibility issues?

The word “pay” is liberally used throughout this post to emphasize the need for governments and organizations to budget for and commit to spending resources on fixing accessibility issues. Either a government-related project needs to get someone on their team to fix accessibility issues, or it needs to nudge (a.k.a. pay) a vendor or outside developer to fix them.

We have to keep asking and experimenting with how we ask organizations to contribute.

Companies that work for governments should pay to fix accessibility issues

In an ideal world, governments should pay to fix accessibility issues. Realistically, some agencies in government can’t directly contribute to Open Source. As stated earlier, any outside vendor who works for the government can contribute to Open Source. Saying that “We care about accessibility and fix accessibility issues within Drupal” is a great slide to include in a pitch deck for a government project.

Maybe governments can mandate that vendors are contributors to the Open Source projects that are being used by a government project.

What is realistically possible?

Realistically, we need to fix the accessibility issues in Drupal and Open Source projects. Everyone in the world should have equal access to digital information and services. Every government needs to ensure that the software they are relying on is accessible.

In Drupal and Open Source, we ask individuals, organizations, and governments to get involved. Nudging governments to fix accessibility issues in Drupal and Open Source is just a very direct ask to fix a very specific problem that affects everyone.

There are two immediate ways for governments to get involved in fixing accessibility issues. Either governments dedicate resources to address the problem or they push their vendors to start addressing accessibility issues. In the Open Source community, we need to further embrace and encourage this type of contribution.

Embracing and encouraging governments and organizations contributing to Open Source

In the Drupal community, we always acknowledge the individuals contributing to Open Source by listing maintainers and contributors on project pages and even in the software’s MAINTAINERS.txt. We do recognize organizations supporting a Drupal project, but maybe we need to do more. In this day and age, we put corporation names on stadiums. Open Source has reached such a scale that the organizations and governments that contribute to it are as important as the individuals. Notably, in the case of enterprise-focused software like Drupal, where organizations are the target consumer, we need to figure out how to get these organizations involved and adequately acknowledged.

Acknowledging that we are already doing a lot

The Drupal community has accomplished something pretty amazing. We have one of the most accessible and secure Open Source Content Manager Systems on the market. We care about accessibility and security and work hard to fix and improve them. As a community, we need to always strive to grow and evolve. Nudging governments to get more involved in fixing accessibility issues will help make our software more accessible to everyone.


Aug 13 2019
Aug 13

Today we will learn how to migrate dates into Drupal. Depending on your field type and configuration, there are various possible combinations. You can store a single date or a date range. You can store only the date component or also include the time. You might have timezones to take into account. Importing the node creation date requires a slightly different configuration. In addition to the examples, a list of things to consider when migrating dates is also presented.

Example syntax for date migrations.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD date whose machine name is ud_migrations_date. The migration to execute is udm_date. Notice that this migration writes to a content type called UD Date and to three fields: field_ud_date, field_ud_date_range, and field_ud_datetime. This content type and fields will be created when the module is installed. They will also be removed when the module is uninstalled. The module itself depends on the following modules provided by Drupal core: datetime, datetime_range, and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created.

PHP date format characters

To migrate dates, you need to be familiar with the format characters of the date PHP function. Basically, you need to find a pattern that matches the date format you need to migrate to and from. For example, January 1, 2019 is described by the F j, Y pattern.

As mentioned in the previous post, you need to pay close attention to how you create the pattern. Upper and lowercase letters represent different things, like Y and y for the year with four digits versus two digits, respectively. Some date components have subtle variations, like d and j for the day with or without leading zeros, respectively. Also, take into account white spaces and date component separators. If you need to include a literal letter like T, it has to be escaped as \T. If the pattern is wrong, an error will be raised, and the migration will fail.

Date format conversions

For date conversions, you use the format_date plugin. You specify a from_format based on your source and a to_format based on what Drupal expects. In both cases, you will use the PHP date function's format characters to assemble the required patterns. Optionally, you can define the from_timezone and to_timezone configurations if conversions are needed. Just like any other migration, you need to understand your source format. The following code snippet shows the source and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      node_title: 'Date example 1'
      node_creation_date: 'January 1, 2019 19:15:30'
      src_date: '2019/12/1'
      src_date_end: '2019/12/31'
      src_datetime: '2019/12/24 19:15:30'
destination:
  plugin: 'entity:node'
  default_bundle: ud_date

Node creation time migration

The node creation time is migrated using the created entity property. The source column that contains the data is node_creation_date. An example value is January 1, 2019 19:15:30. Drupal expects a UNIX timestamp like 1546370130. The following snippet shows how to do the transformation:

created:
  plugin: format_date
  source: node_creation_date
  from_format: 'F j, Y H:i:s'
  to_format: 'U'
  from_timezone: 'UTC'
  to_timezone: 'UTC'

Following the documentation, F j, Y H:i:s is the from_format and U is the to_format. In the example, it is assumed that the source is provided in UTC. UNIX timestamps are expressed in UTC as well. Therefore, the from_timezone and to_timezone are both set to that value. Even though they are the same, it is important to specify both configuration keys. Otherwise, the from timezone might be picked from your server’s configuration. Refer to the article on user migrations for more details on how to migrate when UNIX timestamps are expected.

Date only migration

The Datetime module provided by Drupal core offers two storage options. You can store the date only, or you can choose to store the date and time. First, let’s consider a date only field. The source column that contains the data is src_date. An example value is '2019/12/1'. Drupal expects date only fields to store data in Y-m-d format like '2019-12-01'. No timezones are involved in migrating this field. The following snippet shows how to do the transformation.

field_ud_date/value:
  plugin: format_date
  source: src_date
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

Date range migration

The Date Range module provided by Drupal core allows you to have a start and an end date in a single field. The src_date and src_date_end source columns contain the start and end date, respectively. This migration is very similar to date only fields. The difference is that you need to import an extra subfield to store the end date. The following snippet shows how to do the transformation:

field_ud_date_range/value: '@field_ud_date/value'
field_ud_date_range/end_value:
  plugin: format_date
  source: src_date_end
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

The value subfield stores the start date. The source column used in the example is the same used for the field_ud_date field. Drupal uses the same format internally for date only and date range fields. Considering these two things, it is possible to reuse the field_ud_date mapping to set the start date of the field_ud_date_range field. To do it, you type the name of the previously mapped field in quotes (') and precede it with an at sign (@). Details on this syntax can be found in the blog post about the migrate process pipeline. One important detail is that when field_ud_date was mapped, the value subfield was specified: field_ud_date/value. Because of this, when reusing that mapping, you must also specify the subfield: '@field_ud_date/value'. The end_value subfield stores the end date. The mapping is similar to field_ud_date except that the source column is src_date_end.

Note: The Date Range module does not come enabled by default. To be able to use it in the example, it is set as a dependency of the demo migration module.

Datetime migration

A date and time field stores its value in Y-m-d\TH:i:s format. Note it does not include a timezone. Instead, UTC is assumed by default. In the example, the source column that contains the data is src_datetime. An example value is 2019/12/24 19:15:30. Let’s assume that all dates are provided with a timezone value of America/Managua. The following snippet shows how to do the transformation:

field_ud_datetime/value:
  plugin: format_date
  source: src_datetime
  from_format: 'Y/m/j H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  from_timezone: 'America/Managua'
  to_timezone: 'UTC'

If you need the timezone to be dynamic, things get a bit harder. The from_timezone and to_timezone settings expect a literal value. It is not possible to read a source column to set these configurations. An alternative is that your source column includes timezone information like 2019/12/24 19:15:30 -07:00. In that case, you would need to tweak the from_format to include the timezone component and leave out the from_timezone configuration.
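
To illustrate, here is a minimal sketch of that alternative. It assumes a hypothetical src_datetime_offset source column with values like 2019/12/24 19:15:30 -07:00. The P format character matches a UTC offset such as -07:00, so from_timezone can be omitted:

field_ud_datetime/value:
  plugin: format_date
  # Hypothetical column that embeds its own UTC offset in the value.
  source: src_datetime_offset
  from_format: 'Y/m/j H:i:s P'
  to_format: 'Y-m-d\TH:i:s'
  to_timezone: 'UTC'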

Things to consider

Date migrations can be tricky because they can be affected by things outside of the Migrate API. Here is a non-exhaustive list of things to consider:

  • For date and time fields, the transformation might be affected by your server’s timezone if you do not manually set the from_timezone configuration.
  • People might see the date and time according to the preferences in their user profile. That is, two users might see a different value for the same migrated field if their preferred timezones are not the same.
  • For date only fields, the user might see a time depending on the format used to display them. A list of available formats can be found at /admin/config/regional/date-time.
  • A field can always be configured to be presented in a specific timezone. This would override the site’s timezone and the user’s preferred timezone.

What did you learn in today’s blog post? Did you know that entity properties and date fields expect different destination formats? Did you know how to do timezone conversions? What challenges have you found when migrating dates and times? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 13 2019
Aug 13

Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to HTML for direct service. As part of a recent competition at SCDUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing, and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker and for setup with Netlify, but they don’t quite line up with each other (I followed the Docker instructions, then the Netlify ones, but had to chart my own course to get the site from the first project linked to the repo in the second). And since I’m getting used to using Docksal, when I had to fall back and do a bit of it myself, I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and setup an account, and site from the template. There is a button in those directions to trigger the Netlify setup, I’ve added one here as well (but if this one fails, check to see if they updated theirs):

Deploy to Netlify

Log in with GitHub or a similar service, and let it create a repo for your project.

Follow Netlify’s directions for setting up DNS so you can have the domain you want, and HTTPS (through Let’s Encrypt). It took a couple of hours for that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to Github (or whatever service you used) and clone the repository to your local machine. There is a generated README on that project, but the directions aren’t 100% correct if you aren’t cloning onto a machine with a working PHP environment. This is when I switched over to docksal, and ran the following series of commands:

fin init
fin composer install
fin drush tome:install
fin drush uli

Then log into your local site using the domain from docksal and the link from drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment.

fin drush tome:static
git add .
git commit -m "Adding sample content"
git push

…now we wait while Netlify notices and builds the site…

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let’s try to get Umami onto this site. Go back to the terminal and have Tome reset everything (it’ll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile…and wait cause this takes a while…

Now just re-export the content and push it to your repo.

fin drush tome:static
git add .
git commit -m "Converting to Umami"
git push

And wait again, cause this also takes a few minutes…

The Umami home page on my subdomain hosted at Netlify.

That really was all that was involved for a simple site, you can see my repository on Github if you want to see all of what was generated along the way.

The whole process is pretty straightforward, but there are a few things that it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow. Umami takes several minutes to deploy since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in that template is clear enough to follow if you are familiar with composer projects:

command = "composer install && ./vendor/bin/drush tome:install -y && ./vendor/bin/drush tome:static -l $DEPLOY_PRIME_URL" 

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify’s branching workflow for managing dev, test, and production, it also protects your work that way.

My directions above load a standard Docksal container, which includes MySQL, because that’s quick and easy; but Tome falls back to using an SQLite database since you can be more confident it is there. Again this is reliable but slow. If I were going to do this on a more complete project I’d want a smaller Docksal setup or to switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or complex configuration of large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, it works well to keep a totally temporary instance of the site that can be rapidly deployed to a device, have its content updated, pushed out to production, and then be deleted locally until needed again.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for that, and Tome has a module that claims to connect to that service, but I wasn’t able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

Aug 12 2019
Aug 12

Drupal has pretty good multilingual support out of the box. It's also fairly easy to create new entities and just add translation support through the annotation. These things are well documented elsewhere and a quick search will reveal how to do that. That is not what this post is about. This post is about the UX around selecting which fields are translatable.

On the Content Language page at http://example.com/admin/config/regional/content-language you can select which fields on your nodes, entities and various other translatable elements will be available on non-default language edit pages. The section at the top is the list of types of translatable things. Checking these boxen will reveal the related section. You can then go down to that section and start selecting fields to translate, save the form and they become available. All nice and easy.

I came into the current project late and this is my first exposure to this area of Drupal. We have a few content types and a lot of entities. I was ticking the box for the entity I wanted to add, jumping to the end of the form and saving it. When the form came back though it was not selected. I could not figure out why. It wasn't until a co-worker used the form differently to me that the issue was resolved. Greg ticked the entity, scrolled down the page and found it, ticked some of the checkboxen in the entity itself and then saved the page. The checkbox was still ticked.

The UX on this is pretty good once you know how it works. It could be fixed fairly easily with a system message pointing out that your checkbox was not saved because none of the items it exposed were selected.

I feel a patch coming on…

Aug 12 2019
Aug 12
A special bird flying in space has the spotlight while lots of identical birds sit on the ground (lack of diversity)

At Drupalcon Seattle, I spoke about some of the challenges Open Source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need to drive meaningful change.

Aug 12 2019
Aug 12

Like many companies in our technology-enabled, globally connected environment, Promet Source operates with clients and team members all over the world. This reality creates a challenge for communications. The truth is, the more we put into our interactions, the more we get out of them.

I’ve been working with distributed teams for a long time. Even though I got very accustomed to joining video calls, until recently, I had opted to not turn my camera on. I guess it started a while back when I had my first virtual interactions with teammates. Probably due to shyness or my lack of knowledge of virtual communications, I tended to avoid the camera component. That's changed.

Eye-Opening Experience

Lately I’ve started using more face-to-face open communications with clients, collaborators and internally. It brings a higher level of empathy, honesty and receptiveness to the conversations. It’s been like going from a gray-scale image to a sudden, colorful world, and has provided an important step toward building trust and strengthening ties.

A couple weeks ago, I was on a call with a collaborator and a team member with whom I’ve been working for more than two months.

That day, I turned on my camera, and the dynamic quickly changed. We started having a candid conversation between the three of us. 

As we were chatting, my co-worker felt encouraged and also turned on his camera. We were able to see each other's faces for the first time in more than two months of working together. It made a difference.

Next, our collaborator followed suit and turned his camera on. The conversation was instantly raised up to another level. Once one of us opened up, others felt empowered to do the same. It was like a chain reaction or “Domino Effect.”

A New Dimension

We could comment about our surroundings, our clothes, our hair, what was going on in our lives and in our parts of the world!  We were able to get talking and build rapport so much more easily.

The collaboration on the call became alive and we got more out of it than if we had not had the advantage of video.

Looking back on this conversation, it would be easy to say that it was video that made the difference, but that was only one aspect. It was empathy that drove the emotional connection. 

The cameras helped. We were also willing to open ourselves up to a more honest dialog, sharing something personal, becoming available and responsive to each other. 

The Key: Trust

Trust your teammates. Trust your clients. Trust your collaborators. Trust that there is value in what you have to share.

Here’s what I’ve concluded are the keys to successful interactions even when working across multiple time zones.

Lead by Example

Be confident and share honestly. Let other people see you and hear you. Let your emotions shine through your expressions (facial expressions, expressions through the tone of your voice, the words you choose, etc.) 

Open up and people will trust you, and they will be more likely to open up too.  The Domino Effect can be very exciting.

Leverage Human Interaction

Promet Source is a leading practitioner of human-centered design. We know what it means to design for humans and we facilitate human-centered design workshops all over the country to enhance effectiveness and outcomes.

Just as we consistently emphasize that we are designing for humans, we are careful to not lose sight of the fact that we are designing by humans.

Strengthen Teams through Sharing

Too often, the left-brain, technology-driven environment in which we operate ignores the powerful impact of the human element in all of our engagements. Even when separated by borders and time zones, efforts to connect on a personal level pay off in ways that are often unanticipated.

Have you found this to be the case? Share your thoughts in the comment section below on why and how connecting on a human level can drive better outcomes.

Sharing your thoughts and experiences can go a long way toward a greater sense of connection and community in our dispersed, digital world.
 

Aug 12 2019
Aug 12
Image of the Rossetta Stone

In Mastering Drupal 8 Multilingual: Part 1 of 3, we focused on planning for Drupal 8 multilingual and its impact on a project's timeline and budget.

In Part 2 (below), we cover everything you need to know to have a functioning multilingual site with no custom code. Part 3 of the series covers more advanced techniques for site builders and front-end developers.

Aug 12 2019
Aug 12

Sean is a strong believer in the open source community at large, and that working collaboratively is best for creating awesome projects. His community work extends into maintaining and building the BADCamp website build, as well as helping to maintain Docksal, a tool used for managing development environments.

Aug 11 2019
Aug 11

Today we complete the user migration example. In the previous post, we covered how to migrate email, timezone, username, password, and status. This time, we cover creation date, roles, and profile pictures. The source, destination, and dependencies configurations were explained already. Therefore, we are jumping straight to the process transformations in this entry.

Example field mapping for user migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD users whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. Particularly, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Migrating user creation date

Have a look at the previous post for details on the source values. For reference, the user creation time is provided by the member_since column, and one of the values is April 4, 2014. The following snippet shows how the various user date related properties are set:

created:
  plugin: format_date
  source: member_since
  from_format: 'F j, Y'
  to_format: 'U'
changed: '@created'
access: '@created'
login: '@created'

The created entity property stores a UNIX timestamp of when the user was added to Drupal. The value itself is an integer representing the number of seconds since the epoch. For example, 280299600 represents Sun, 19 Nov 1978 05:00:00 GMT. Kudos to the readers who knew this is Drupal's default expire HTTP header. Bonus points if you knew it was chosen in honor of someone’s birthdate. ;-)

Back to the migration, you need to transform the provided date from Month day, year format to a UNIX timestamp. To do this, you use the format_date plugin. The from_format is set to F j, Y which means your source date consists of:

  • The full textual representation of a month: April.
  • Followed by a space character.
  • Followed by the day of the month without leading zeros: 4.
  • Followed by a comma and another space character.
  • Followed by the full numeric representation of a year using four digits: 2014

If the value of from_format does not make sense, you are not alone. It is actually assembled from format characters of the date PHP function. When you need to specify the from and to formats, you basically need to look at the documentation and assemble a string that matches the desired date format. You need to pay close attention because upper and lowercase letters represent different things, like Y and y for the year with four digits versus two digits, respectively. Some date components have subtle variations, like d and j for the day with or without leading zeros, respectively. Also, take into account white spaces and date component separators. To finish the plugin configuration, you need to set the to_format configuration to something that produces a UNIX timestamp. If you look again at the documentation, you will see that U does the job.

The changed, access, and login entity properties are also dates in UNIX timestamp format. changed indicates when the user account was last updated. access indicates when the user last accessed the site. login indicates when the user last logged in. For brevity, the same value assigned to created is also assigned to these three entity properties. The at sign (@) means copy the value of a previous mapping in the process pipeline. If needed, each property can be set to a different value or left unassigned. None is actually required.

Migrating user roles

For reference, the roles are provided by the user_roles column, and one of the values is forum moderator, forum admin. It is a comma separated list of roles from the legacy system which need to be mapped to Drupal roles. It is possible that the user_roles column is not provided at all in the source. The following snippet shows how the roles are set:

roles:
  - plugin: skip_on_empty
    method: process
    source: user_roles
  - plugin: explode
    delimiter: ','
  - plugin: callback
    callable: trim
  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    default_value: null

First, the skip_on_empty plugin is used to skip the processing of the roles if the source column is missing. Then, the explode plugin is used to break the list into an array of strings representing the roles. Next, the callback plugin invokes the trim PHP function to remove any leading or trailing whitespace from the role names. Finally, the static_map plugin is used to manually map values from the legacy system to Drupal roles. All of these plugins have been explained previously. Refer to other articles in the series or the plugin documentation for details on how to use and configure them.

There are some things that are worth mentioning about migrating roles using this particular process pipeline. If the comma separated list includes spaces before or after the role name, you need to trim the value because the static map will perform an equality check. Having extraneous space characters will produce a mismatch.

Also, you do not need to map the anonymous or authenticated roles. Drupal users are assumed to be authenticated and cannot be anonymous. Any other role needs to be mapped manually to its machine name. You can find the machine name of any role in its edit page. In the example, only two out of four roles are mapped. Any role that is not found in the static map will be assigned the value null as indicated in the default_value configuration. After processing, the null value will be ignored, and no role will be assigned. But you could use this feature to assign a default role in case the static map does not produce a match, as shown in the sketch below.
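
As a hedged sketch of that last idea, and assuming the destination site defines a hypothetical editor role, the final step of the chain could be configured to fall back to it:

  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    # Hypothetical fallback role assigned when no mapping matches.
    default_value: editor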

Migrating profile pictures

For reference, the profile picture is provided by the user_photo column, and one of the values is P01. This value corresponds to the unique identifier of one record in the udm_user_pictures file migration, which is part of the same demo module.  It is important to note that the user_picture field is not a user entity property. The field is created by the standard installation profile and attached to the user entity. You can find its configuration in the “Manage fields” tab of the “Account settings” configuration page at /admin/config/people/accounts. The following snippet shows how profile pictures are set:

user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo

Image fields are entity references. Their target_id property needs to be an integer number containing the file id (fid) of the image. This can be obtained using the migration_lookup plugin. Details on how to configure it can be found in this article. You could simply use user_picture as your field mapping because target_id is the default subfield and could be omitted. Also note that the alt subfield is not mapped. If present, its value will be used for the alternative text of the image. But if it is not specified, like in this example, Drupal will automatically generate an alternative text out of the username. An example value would be: Profile picture for user michele.
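
If your source did provide alternative text, a minimal sketch, assuming a hypothetical photo_alt source column, could map both subfields explicitly:

user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo
# Hypothetical column holding the alternative text for the image.
user_picture/alt: photo_alt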

Technical note: The user entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the User class defining the entity. Note that more properties can be available up in the class hierarchy.

And with that, we wrap up the user migration example. We covered how to migrate a user’s mail, timezone, username, password, status, creation date, roles, and profile picture. Along the way, we presented various process plugins that had not been used previously in the series. We showed a couple of examples of process plugin chaining to make sure the migrated data is valid and in the format expected by Drupal.

What did you learn in today’s blog post? Did you know how to process dates for user entity properties? Have you migrated user roles before? Did you know how to import profile pictures? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series is made possible thanks to these generous sponsors. Contact us if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 10 2019
Aug 10

Today we are going to learn how to migrate users into Drupal. The example code will be explained in two blog posts. In this one, we cover the migration of email, timezone, username, password, and status. In the next one, we will cover creation date, roles, and profile pictures. Several techniques will be implemented to ensure that the migrated data is valid. For example, making sure that usernames are not duplicated.

Although the example is standalone, we will build on many of the concepts that had already been covered in the series. For instance, a file migration is included to import images used as profile pictures. This topic has been explained in detail in a previous post, and the example code is pretty similar. Therefore, no explanation is provided about the file migration to keep the focus on the user migration. Feel free to read other posts in the series if you need a refresher.

Example field mapping for user migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD users whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. Particularly, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Understanding the source

It is very important to understand the format of your source data. This will guide the transformation process required to produce the expected destination format. For this example, it is assumed that the legacy system from which users are being imported did not have unique usernames. Emails were used to uniquely identify users, but that is not desired in the new Drupal site. Instead, a username will be created from a public_name source column. Special measures will be taken to prevent duplication as Drupal usernames must be unique. Two more things to consider. First, source passwords are provided in plain text (never do this!). Second, some elements might be missing in the source like roles and profile picture. The following snippet shows a sample record for the source section:

source:
  plugin: embedded_data
  data_rows:
    - legacy_id: 101
      public_name: 'Michele'
      user_email: '[email protected]'
      timezone: 'America/New_York'
      user_password: 'totally insecure password 1'
      user_status: 'active'
      member_since: 'January 1, 2011'
      user_roles: 'forum moderator, forum admin'
      user_photo: 'P01'
  ids:
    legacy_id:
      type: integer

Configuring the destination and dependencies

The destination section specifies that user is the target entity. When that is the case, you can set an optional md5_passwords configuration. If it is set to true, the system will take an MD5 hashed password and convert it to the encryption algorithm that Drupal uses. For more information on password migrations, refer to these articles for basic and advanced use cases. To migrate the profile pictures, a separate migration is created. The dependency of user on file is added explicitly. Refer to these articles for more information on migrating images and files and on setting dependencies. The following code snippet shows how the destination and dependencies are set:

destination:
  plugin: 'entity:user'
  md5_passwords: true
migration_dependencies:
  required:
    - udm_user_pictures
  optional: []

Processing the fields

The interesting part of a user migration is the field mapping. The specific transformation will depend on your source, but some arguably complex cases will be addressed in the example. Let’s start with the basics: verbatim copies from source to destination. The following snippet shows three mappings:

mail: user_email
init: user_email
timezone: user_timezone

The mail, init, and timezone entity properties are copied directly from the source. Both mail and init are email addresses. The difference is that mail stores the current email, while init stores the one used when the account was first created. The former might change if the user updates their profile, while the latter will never change. The timezone needs to be a string taken from a specific set of values. Refer to this page for a list of supported timezones.

name:
  - plugin: machine_name
    source: public_name
  - plugin: make_unique_entity_field
    entity_type: user
    field: name
    postfix: _

The name entity property stores the username. This has to be unique in the system. If the source data contained a unique value for each record, it could be used to set the username. None of the unique source columns (e.g., legacy_id) is suitable to be used as the username. Therefore, extra processing is needed. The machine_name plugin converts the public_name source column into a transliterated string with some restrictions: any character that is not a number or letter will be converted to an underscore. The transformed value is sent to the make_unique_entity_field plugin. This plugin makes sure its input value is not repeated in the whole system for a particular entity field. In this example, the username will be unique. The plugin is configured indicating which entity type and field (property) you want to check. If an equal value already exists, a new one is created appending what you define as postfix plus a number. In this example, there are two records with public_name set to Benjamin. Eventually, the usernames produced by running the process plugins chain will be: benjamin and benjamin_1.

process:
  pass:
    plugin: callback
    callable: md5
    source: user_password
destination:
  plugin: 'entity:user'
  md5_passwords: true

The pass entity property stores the user’s password. In this example, the source provides the passwords in plain text. Needless to say, that is a terrible idea. But let’s work with it for now. Drupal uses portable PHP password hashes implemented by PhpassHashedPassword. Understanding the details of how Drupal converts one algorithm to another will be left as an exercise for the curious reader. In this example, we are going to take advantage of a feature provided by the migrate API to automatically convert MD5 hashes to the algorithm used by Drupal. The callback plugin is configured to use the md5 PHP function to convert the plain text password into a hashed version. The last part of the puzzle is to set, in the destination section, the md5_passwords configuration to true. This will take care of converting the already md5-hashed password to the value expected by Drupal.

Note: MD5-hash passwords are insecure. In the example, the password is encrypted with MD5 as an intermediate step only. Drupal uses other algorithms to store passwords securely.

status:
  plugin: static_map
  source: user_status
  map:
    inactive: 0
    active: 1

The status entity property stores whether a user is active or blocked from the system. The source user_status values are strings, but Drupal stores this data as a boolean. A value of zero (0) indicates that the user is blocked while a value of one (1) indicates that it is active. The static_map plugin is used to manually map the values from source to destination. This plugin expects a map configuration containing an array of key-value mappings. The value from the source is on the left. The value expected by Drupal is on the right.

Technical note: Booleans are true or false values. Even though Drupal treats the status property as a boolean, it is internally stored as a tiny int in the database. That is why the numbers zero or one are used in the example. For this particular case, using a number or a boolean value on the right side of the mapping produces the same result.

In the next blog post, we will continue with the user migration. Particularly, we will explain how to migrate the user creation time, roles, and profile pictures.

What did you learn in today’s blog post? Have you migrated user passwords before, either in plain text or hashed? Did you know how to prevent duplicates for values that need to be unique in the system? Were you aware of the plugin that allows you to manually map values from source to destination? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Migrating users into Drupal - Part 2

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 09 2019
Aug 09

At Promet Source, conversations with clients and among co-workers tend to revolve around various aspects of compliance, user experience, site navigation, and design clarity. We need a common nomenclature for referring to interface elements, but that leads to the question of who makes this stuff up and what makes these terms stick?
 
I asked that recently, during an afternoon of back-to-back meetings. In separate contexts, “cookies,” “breadcrumbs,” and “hamburgers” were all mentioned as they pertain to the sites we are building for clients. But I got to wondering: what is it about the evolving Web lexicon that seems inordinately slanted towards tasty snacks?
 

One Theory

As we all know, devs and designers work very hard, with incredible focus for long hours at a stretch. Are we trying to inject some fun language that evokes touch, taste, and smell into a web that can feel rather flat sometimes when we are in the trenches?


I couldn’t help but wonder about a potentially unifying theme to cookies, breadcrumbs and the buns that provide the top and bottom horizontal lines of the increasingly ubiquitous hamburger icon. That sparked my curiosity and a bit of research.

Data/Cookie Jar

Let’s start with cookies -- a term that refers to the extraction and storage of user data such as logins, previous searches, activity on a site, and items in a shopping cart.  Almost all Websites use and store cookies on Web browsers.

a stack of chocolate chip cookies

Generally speaking, cookies are designed to inform better and more personalized Web experiences, but they do, of course, give rise to all sorts of privacy and security concerns. 
 
Potential cookie constraints for Websites developed in the United States for a U.S. audience are moving in an uncertain direction. Up to this point, it’s essentially been the Wild West, with few restrictions governing their usage. 
 
In the European Union, it’s a different story. Assorted rules and regulations, collectively known as the “Cookie Law,” have been in place for nearly a decade -- forbidding the tracking of users’ Web activity without their consent. 
 
As is the case with U.S.-based Websites that need to ensure accessibility, compliance with the Cookie Law can be complicated -- requiring rewriting and reconfiguration of code, followed by careful testing to ensure that the site’s code, server and the user’s browser are aligned to prevent cookies from tracking user behavior and collecting information. And another issue that accessibility and cookies have in common: there’s more at stake than compliance. To an increasing degree, users avoid engaging with Websites when they believe that their activity is being tracked by the use of cookies and there’s no question that overall levels of trust appear to be on the decline as privacy concerns increase. This is among the reasons why many websites are starting to give users the option of just saying no to cookies and still allowing them access to the site.

Connecting the Crumbs

Considerably newer to the Web lexicon than cookies, a breadcrumb or breadcrumb trail is a navigational aid in user interfaces designed to help users track their own activity within programs or websites, providing them with a sense of place within the bigger picture of the site. 
 
Breadcrumbs can take different forms. Generally speaking, a breadcrumb trail tracks the order of each page viewed, as a horizontal list below the top headers. This provides a guide for the user to navigate back to any point where they’ve previously been on the site. Think about Grimms’ story of Hansel and Gretel.
 
Breadcrumbs can be very helpful on complex, content-heavy sites. Who among us hasn’t found themselves frustrated in an attempt to navigate back to a page that seems to have temporarily disappeared?

On the Table

Unlike cookies, which for better or for worse, are stored behind the scenes and consumed in a manner that’s usually not known to the user, a breadcrumb trail is out in the open -- right upfront for the user to see and follow. Breadcrumbs are designed solely to enhance the user experience, functioning as a reverse GPS on complex Websites.  
 
As more and more users come to count on breadcrumbs as a navigational aid, we can expect that the demand for them will increase. At the same time, we can expect that usage of cookies will come under increased scrutiny along with a trend toward escalation of privacy concerns and a growing skittishness about how personal information is being shared. At Promet, we consider breadcrumbs to be a must-have on any site.

Time for Some Protein

As for the third item in our list of tasty Web terms, the hamburger is essentially all good. This three-line icon that’s started to appear at the top of screens serves as a mini-portal to additional options or pages.

Actual hamburger on the left. A web hamburger icon on the right.

What’s not to love about this feature that takes up so little space on the screen, but opens the door to a trove of additional navigation or features for apps and Websites? Fact is, UX/UI trends are constantly evolving, and users vary widely in the pace at which they pick up what’s new and next. The hamburger icon has a lot going for it and it’s not going away.

 

Meet the Search Sandwich

There’s a new item on the table, and we were just introduced to it by one of our UX-savvy clients. As far as I know, it doesn’t have an official name yet, so we affectionately refer to it as the “search sandwich.” It’s an evolved hamburger combined with a search icon to indicate to users that both the navigation menu and the search bar can be accessed from this icon. It looks a bit like a ham sandwich with an olive on top and might make an appearance on a website soon. Stay tuned.
 
So there you have it: key factors in our Web design world -- possibly a reflection of a desire to take our high-tech conversations down a notch, with these playful metaphors for elements that we must all learn to identify, whether as a designer, a developer, or just a web user. They remind us that the Web is a rapidly evolving environment of UI/UX trends -- created and consumed by humans.
 
Interested in serving up a tasty web experience? Contact us today


Aug 09 2019
Aug 09

Today we continue the conversation about migration dependencies with a hierarchical taxonomy terms example. Along the way, we will present the process and syntax for migrating into multivalue fields. The example consists of two separate migrations: one to import taxonomy terms accounting for term hierarchy, and another to import nodes with a multivalue taxonomy term field. Following this approach, any node and taxonomy term created by the migration process will be removed from the system upon rollback.

Syntax for multivalue field migration.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD multivalue taxonomy terms whose machine name is ud_migrations_multivalue_terms. The two migrations to execute are udm_dependencies_multivalue_term and udm_dependencies_multivalue_node. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. Particularly, a Tags (tags) taxonomy vocabulary, an Article (article) content type, and a Tags (field_tags) field that accepts multiple values. The words in parentheses represent the machine name of each element.

Migrating taxonomy terms and their hierarchy

The example data for the taxonomy terms migration is fruits and fruit varieties. Each row will contain the name and description of the fruit. Additionally, it is possible to define a parent term to establish hierarchy. For example, “Red grape” is a child of “Grape”. Note that no numerical identifier is provided. Instead, the value of the name is used as a string identifier for the migration. If term names could change over time, it is recommended to have another column that does not change (e.g., an autoincrementing number). The following snippet shows how the source section is configured:

source:
  plugin: embedded_data
  data_rows:
    - fruit_name: 'Grape'
      fruit_description: 'Eat fresh or prepare some jelly.'
    - fruit_name: 'Red grape'
      fruit_description: 'Sweet grape'
      fruit_parent: 'Grape'
    - fruit_name: 'Pear'
      fruit_description: 'Eat fresh or prepare a jam.'
  ids:
    fruit_name:
      type: string

The destination is quite short. The target entity is set to taxonomy terms. Additionally, you indicate which vocabulary to migrate into. If you have terms that would be stored in different vocabularies, you can use the vid property in the process section to assign the target vocabulary; a sketch of that alternative follows the snippet below. If you write to a single one, the default_bundle key in the destination can be used instead. The following snippet shows how the destination section is configured:

destination:
  plugin: 'entity:taxonomy_term'
  default_bundle: tags
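
For reference, a minimal sketch of the multi-vocabulary alternative mentioned above, assuming a hypothetical source_vocabulary column whose values are already valid vocabulary machine names, could set the vid property in the process section instead:

process:
  # Hypothetical column containing the target vocabulary machine name.
  vid: source_vocabulary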

For the process section, three entity properties are set: name, description, and parent. The first two are strings copied directly from the source. In the case of parent, it is an entity reference to another taxonomy term. It stores the taxonomy term id (tid) of the parent term. To assign its value, the migration_lookup plugin is configured similarly to the previous example. The difference is that, in this case, the migration to reference is the same one being defined. This introduces an important consideration: parent terms should be migrated before their children. This way, they can be found by the look up operation. Also note that the look up value is the term name itself, because that is what this migration set as the unique identifier in the source section. The following snippet shows how the process section is configured:

process:
  name: fruit_name
  description: fruit_description
  parent:
    plugin: migration_lookup
    migration: udm_dependencies_multivalue_term
    source: fruit_parent

Technical note: The taxonomy term entity contains other properties you can write to. For a list of available options check the baseFieldDefinitions() method of the Term class defining the entity. Note that more properties can be available up in the class hierarchy.

Migrating multivalue taxonomy terms fields

The next step is to create a node migration that can write to a multivalue taxonomy term field. To stay on point, only one more field will be set: the title, which is required by the node entity. Read this change record for more information on how the Migrate API processes Entity API validation. The following snippet shows how the source section is configured for the node migration:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      thoughtful_title: 'Amazing recipe'
      fruit_list: 'Green apple, Banana, Pear'
    - unique_id: 2
      thoughtful_title: 'Fruit-less recipe'
  ids:
    unique_id:
      type: integer

The fruit_list column contains a comma separated list of taxonomy terms to apply. Note that the values match the unique identifiers of the taxonomy term migration. If you had used numbers as migration identifiers there, you would have to use those numbers in this migration to refer to the terms. An example of that was presented in the previous post. Also note that there is one record that has no terms associated. This will be considered during the field mapping. The following snippet shows how the process section is configured for the node migration:

process:
  title: thoughtful_title
  field_tags:
    - plugin: skip_on_empty
      source: fruit_list
      method: process
      message: 'No fruit_list listed.'
    - plugin: explode
      delimiter: ','
    - plugin: migration_lookup
      migration: udm_dependencies_multivalue_term

The title of the node is a verbatim copy of the thoughtful_title column. The Tags field, mapped using its machine name field_tags, uses three chained process plugins. The skip_on_empty plugin reads the value of the fruit_list column and skips the processing of this field if no value is provided. This is done to accommodate the fact that some records in the source do not specify tags. Note that the method configuration key is set to process. This indicates that only this field should be skipped, not the entire record. Ultimately, tags are optional in this context and nodes should still be imported even if no tag is associated.

The explode plugin allows you to break a string into an array, using a delimiter to determine where to make the cut. Later, this array is passed to the migration_lookup plugin, specifying the term migration as the one to use for the lookup operation. Again, the taxonomy term names are used here because they are the unique identifiers of the term migration. Note that neither of these plugins has a source configuration. This is because when process plugins are chained, the result of one plugin is sent as source to be transformed by the next one in line. The end result is an array of taxonomy term ids that will be assigned to field_tags. The migration_lookup plugin is able to process both single values and arrays.

The last part of the migration specifies the destination section and any migration dependencies. Refer to this article for more details on setting migration dependencies. The following snippet shows how both are configured for the node migration:

destination:
  plugin: 'entity:node'
  default_bundle: article
migration_dependencies:
  required:
    - udm_dependencies_multivalue_term
  optional: []

More syntactic sugar

One way to set multivalue fields in Drupal migrations is to assign an array as the value. Another option is to set each value manually using field deltas. Deltas are integer numbers starting at zero (0) and incrementing by one (1) for each element of a multivalue field. Although you could set any delta in the Migrate API, consider the field definition in Drupal: limits may have been set on the number of values a field can hold. You can specify deltas and subfields at the same time. The full syntax is field_name/field_delta/subfield. The following example shows the syntax for a multivalue image field:

process:
  field_photos/0/target_id: source_fid_first
  field_photos/0/alt: source_alt_first
  field_photos/1/target_id: source_fid_second
  field_photos/1/alt: source_alt_second
  field_photos/2/target_id: source_fid_third
  field_photos/2/alt: source_alt_third

Manually setting a multivalue field is less flexible and more error-prone. In today’s example, we showed how to accommodate the list of terms not being provided. Imagine having to do that for each delta and subfield combination. Still, the functionality is there in case you need it. In the end, Drupal offers more syntactic sugar so you can write shorter field mappings. Additionally, there are various process plugins that can handle arrays for setting multivalue fields.
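
For contrast, the array-based approach can also handle subfields without spelling out deltas. The following is a hedged sketch using the core sub_process plugin, assuming a hypothetical source column source_photos that contains an array of associative arrays with fid and alt_text keys:

process:
  field_photos:
    plugin: sub_process
    source: source_photos
    process:
      target_id: fid
      alt: alt_text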

Note: There are other ways to migrate multivalue fields. For example, when using the entity_generate plugin provided by Migrate Plus, there is no need to create a separate taxonomy term migration. This plugin is able to create the terms on the fly while running the import process. The caveat is that terms created this way are not deleted upon rollback.
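
As a hedged sketch only (configuration keys as documented by Migrate Plus; verify them against your version of the module), the field mapping could look like this, reusing the fruit_list column from this example:

process:
  field_tags:
    - plugin: skip_on_empty
      method: process
      source: fruit_list
    - plugin: explode
      delimiter: ','
    - plugin: entity_generate
      entity_type: taxonomy_term
      value_key: name
      bundle_key: vid
      bundle: tags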

What did you learn in today’s blog post? Have you ever done a taxonomy term migration before? Were you aware of how to migrate hierarchical entities? Did you know you can manually import multivalue fields using deltas? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 09 2019
Aug 09

One of Drupal’s biggest strengths is its data modeling capabilities. You can break the information that you need to store into individual fields and group them in content types. You can also take advantage of default behavior provided by entities like nodes, users, taxonomy terms, files, etc. Once the data has been modeled and saved into the system, Drupal will keep track of the relationship between them. Today we will learn about migration dependencies in Drupal.

As we have seen throughout the series, the Migrate API can be used to write to different entities. One restriction though is that each migration definition can only target one type of entity at a time. Sometimes, a piece of content has references to other elements. For example, a node that includes entity reference fields to users, taxonomy terms, and images. The recommended way to get them into Drupal is writing one migration definition for each. Then, you specify the relationships that exist among them.

Snippet of migration dependency definition

Breaking up migrations

When you break up your migration project into multiple, smaller migrations, they are easier to manage and you have more control over the process pipeline. Depending on how you write them, you can rest assured that imported data is properly deleted if you ever have to roll back the migration. You can also enforce that certain elements exist in the system before others that depend on them are created. In today’s example, we are going to leverage the example from the previous post to demonstrate this. The portraits imported in the file migration will be used in the image field of nodes of type article.

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD migration dependencies introduction whose machine name is ud_migrations_dependencies_intro. Last time, the udm_dependencies_intro_image migration was imported. This time, udm_dependencies_intro_node will be executed. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

Writing the source and destination definition

To keep things simple, the example will only write the node title and assign the image field. A constant will be provided to create the alternative text for the images. The following snippet shows how the source section is configured:

source:
  constants:
    PHOTO_DESCRIPTION_PREFIX: 'Photo of'
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
    - unique_id: 2
      name: 'David Valdez'
      photo_file: 'P03'
    - unique_id: 3
      name: 'Clayton Dewey'
      photo_file: 'P04'
  ids:
    unique_id:
      type: integer

Remember that in this migration you want to use files that have already been imported. Therefore, no URLs to the image files are provided. Instead, you need a reference to the other migration. Particularly, you need a reference to the unique identifiers for each element of the file migration. In the process section, this value will be used to look up the portrait that will be assigned to the image field.

The destination section is quite short. You only specify that the target is a node entity and the content type is article. Remember that you need to use the machine name of the content type. If you need a refresher on how this is set up, have a look at the articles in the series. It is recommended to read them in order as some examples expand on topics that had been previously covered. The following snippet shows how the destination section is configured:

destination:
  plugin: 'entity:node'
  default_bundle: article

Using previously imported files in image fields

To be able to reuse the previously imported files, the migration_lookup plugin is used. Additionally, an alternative text for the image is created using the concat plugin. The following snippet shows how the process section is configured:

process:
  title: name
  field_image/target_id:
    plugin: migration_lookup
    migration: udm_dependencies_intro_image
    source: photo_file
  field_image/alt:
    plugin: concat
    source:
      - constants/PHOTO_DESCRIPTION_PREFIX
      - name
    delimiter: ' '

In Drupal, files and images are entity reference fields. That means they only store a pointer to the file, not the file itself. The pointer is an integer number representing the file ID (fid) inside Drupal. The migration_lookup plugin allows you to query the file migration so imported elements can be reused in the node migration.

The migration option indicates which migration to query, specified by its migration id. Additionally, you indicate which columns in your source match the unique identifiers of the migration to query. In this case, the values of the photo_file column in udm_dependencies_intro_node match those of the photo_id column in udm_dependencies_intro_image. If a match is found, this plugin will return the file ID, which can be directly assigned to the target_id of the image field. That is how the relationship between the two migrations is established.

Note: The migration_lookup plugin allows you to query more than one migration at a time. Refer to the documentation for details on how to set that up and why you would do it. It also offers additional configuration options.

As a good accessibility practice, an alternative text is set for the image using the alt subfield. Other than that, only the node title is set. And with that, you have two migrations connected to each other. If you were to roll back both of them, no file or node would remain in the system.

Being explicit about migration dependencies

The node migration depends on the file migration. That is, the files need to be migrated before they can be used as images for the nodes. In fact, in the provided example, if you were to import the nodes before the files, the migration would fail and no node would be created. You can be explicit about migration dependencies. To do it, add a new configuration option to the node migration that lists which migrations it depends on. The following snippet shows how this is configured:

migration_dependencies:
  required:
    - udm_dependencies_intro_image
  optional: []

The migration_dependencies key goes at the root level of the YAML definition file. It accepts two configuration options: required and optional. Both accept an array of migration ids. The required migrations are hard prerequisites. They need to be executed in advance or the system will refuse to import the current one. The optional migrations do not have to be executed in advance. But if you were to execute multiple migrations at a time, the system will run them in the order suggested by the dependency hierarchy. Learn more about migration dependencies in this article. Also, check this comment on Drupal.org in case you have problems where the system reports that certain dependencies are not met.

Now that the dependency among migrations has been explicitly established, you have two options: either import each migration manually in the expected order, or import the parent migration using the --execute-dependencies flag. When you do that, the system will take care of determining the order in which all migrations need to be imported. The following two snippets will produce the same result for the demo module:

$ drush migrate:import udm_dependencies_intro_image
$ drush migrate:import udm_dependencies_intro_node
$ drush migrate:import udm_dependencies_intro_node --execute-dependencies

In this example, there are only two migrations, but you can have as many as needed. For example, a node with references to users, taxonomy terms, paragraphs, etc. Also note that the parent entity does not have to be a node. Users, taxonomy terms, and paragraphs are all fieldable entities. They can contain references the same way nodes do. In future entries, we will talk again about migration dependencies and provide more examples.

Tagging migrations

The core Migrate API offers another mechanism to execute multiple migrations at a time: you can tag them. To do that, you add a migration_tags key at the root level of the YAML definition file. Its value is an array of arbitrary tag names to assign to the migration. Once set, you run them using the migrate import command with the --tag flag. You can also roll back migrations per tag. The first snippet shows how to set the tags and the second how to execute them:

migration_tags:
  - UD Articles
  - UD Example
$ drush migrate:import --tag='UD Articles,UD Example'
$ drush migrate:rollback --tag='UD Articles,UD Example'

It is important to note that tags and dependencies are different concepts, although both allow you to run multiple migrations at a time. It is possible that a migration definition file contains both, either, or neither. The tag system is used extensively in Drupal core for migrations related to upgrading from previous versions of Drupal. For example, you might want to run all migrations tagged ‘Drupal 7’ if you are coming from that version. It is possible to specify more than one tag when running the migrate import command, separating each with a comma (,).

Note: The Migrate Plus module offers migration groups to organize migrations similarly to how tags work. This will be covered in a future entry. Just keep in mind that tags are provided out of the box by the Migrate API, while migration groups depend on a contributed module.
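
For illustration only (the group name udm_example is hypothetical), assigning a migration to a Migrate Plus group is a single key at the root of the definition file:

migration_group: udm_example

With Migrate Tools installed, the whole group can then be imported with drush migrate:import --group=udm_example.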

What did you learn in today’s blog post? Have you used the migration_lookup plugin to query imported elements from a separate migration? Did you know you can set required and optional dependencies? Have you used tags to organize your migrations? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 08 2019
Aug 08

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past week. You'll find that the meetings, while also providing updates of completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide insights on, we encourage you to get involved.

Drupal 9 Readiness (08/05/19)

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. Site developers who want to stay in the know to keep up-to-date for the easiest Drupal 9 upgrade of their sites are also welcome.

  • Usually happens every other Monday at 18:00 UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public Drupal 9 Readiness Agenda anyone can add to.
  • Transcript will be exported and posted to the agenda issue.

Symfony 4/5 compatibility

The issue, Allow Symfony 4 to be installed in Drupal 8, has had a lot of work put into it by Michael Lutz.

Contrib deprecation testing on drupal.org

Gábor Hojtsy grabbed the data, did some analysis, and posted his findings on preparing for Drupal 9. The main takeaway of the post: stop using drupal_set_message() now.

Examples module Drupal 9 compatibility

Andrey Postnikov posted about the Kharkov code sprint, where 23 issues were addressed. Follow the Examples for Developers project for more information. There are still a bunch of issues to review there if anyone is interested!

Module Upgrader Drupal 9 compatibility

Deprecation cleanup status - blockers to Drupal 9 branch opening

Per Drupal core's own deprecation testing results, there are currently 13 child issues open; most of them need reviews.

Twig 2 upgrade guide and automation and other frontend deprecations tooling

Semantic versioning for contrib projects

Now that we've come to a path forward for how we plan on supporting semver in core (core key using semver + composer.json for advanced features), the Drupal Association is planning on auditing our infrastructure to start implementing semver.

New Drupal 8.8 deprecations that may need to be backported

  • Ryan Aslett and Gábor Hojtsy did some analysis on deprecations in contrib that are new in Drupal 8.8. They found 17% of all contrib deprecated API use now is from either code deprecated in 8.7 or 8.8. Ryan Aslett looked at the toplist of those deprecations and categorized them based on whether the replacements are also introduced in 8.8 or earlier.
  • We're not backporting APIs, as that carries far too much risk of breaking semver. If a contrib maintainer has usages of code that was deprecated in 8.8.x and they want their module to be 9.x compatible on the day that 9.0 comes out, they can:
    • do internal version checking,
    • open another branch,
    • wait until 9.1 to be 100% compatible with all supported versions, or
    • drop support for 8.7.x.

Renaming core modules, eg. actions

  • There's a meta about renames.
  • Renaming modules can have impacts on other modules that declare a dependency on those modules. 
  • We need some way to prove that a rename doesn't break contrib modules.

Migration Initiative Meeting (08/08/19)

This meeting:

  • Usually happens every Thursday and alternates between 14:00 and 21:00 UTC.
  • Is for core migrate maintainers and developers and anybody else in the community with an interest in migrations.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public migration meeting agenda anyone can add to.
  • Transcript will be exported and posted to the agenda issue.
  • For anonymous comments, start with a bust in silhouette emoji. To take a comment or thread off the record, start with a no entry sign emoji.

Some issues need review:

  1. Add test of D7 term localized source plugin
  2. Migrate D7 synchronized fields
  3. Ensure language is not Null in translation source queries 
  4. Language specific term (i18n_taxonomy) should not rely on entity translation in D7 taxonomy term migration
  5. Migrate D6 and D7 node revision translations to D8 
  6. Migrate D7 i18n taxonomy term language 
  7. Use the lock service for migration locks 
  8. Undeprecate Drupal\comment\Plugin\migrate\source\d6\Comment::prepareComment() and mark as internal
  9. Create Migration Lookup service 
  10. Validate Migration State should load test fixture
  11. Boolean Field On and Off Label not Migrating
  12. Assert plural labels exist on migrate upgrade form
  13. Migrate UI - review help text
Aug 08 2019
Aug 08
Text on the Rosetta Stone

The web is constantly growing, evolving and—thankfully—growing more accessible and inclusive.

It is becoming expected that a user can interact with a website solely via keyboard or have the option to browse in their native language. There are many ways to serve the needs of non-native-language users, but one of the more robust is Drupal Multilingual.

Unlike 3rd party translation plugins like Google Translate or browser translation tools, Drupal's suite of core multilingual tools allows you to write accurate and accessible translated content in the same manner as you write your default-language content. With no limit on the number of languages, settings for right-to-left content, and the ability to translate any and all of your content, Drupal 8 can create a true multi-language experience like never before.

There is, however, a bit of planning and work involved.

Hopefully, this blog series will help smooth the path to truly inclusive content by highlighting some project management, design, site building, and development gotchas, as well as providing some tips and tricks to make the multilingual experience better for everyone. Part one will help you decide if you need multilingual as well as provide some tips on how to plan and budget for it.

Aug 08 2019
Aug 08

For most Drupal projects, patches are inevitable. It’s how we, in the Drupal community, share code. If that scares you, don’t worry: the community is working hard to move to a pull/merge request workflow. Due to the collaborative nature of Drupal as a thriving open source community and the ever-growing ecosystem of contrib modules, patches are the ever-evolving glue that can hold a site together.

Before Drupal 8, you may have seen projects use Drush make, which is a Drupal-specific solution. As part of the “get off the island” movement, Drupal adopted the existing PHP dependency manager Composer. Composer does a decent job of alleviating the headaches of managing several sites with different dependencies. However, out of the box, Composer will revert patched core files and contrib modules, and it is for that reason that the composer-patches project was created. In this blog post, we are going to review how to set up composer-patches for a Composer-managed project and how to specify locally or remotely hosted patches.

The setup

In your favorite command line tool, you will want to add the composer-patches project:

composer require cweagans/composer-patches:~1.0 --update-with-dependencies

With this small change, your project is now set up for success because composer can manage your patches. 

Local patches

Sometimes you will find that you need to patch contrib or core specifically for your project, and therefore the patch exists locally. Composer-patches can apply that patch for you; we just need to tell it where it is. Let’s look at an example project that has a core patch applied and saved locally in the project root directory ‘patches/core-invalid-config-structures.patch’:
    ...
    "extra": {
      "patches": {
        "drupal/core": {
          "Core Invalid config structures": "patches/core-invalid-config-structures.patch"
        }
      }
    }

In your composer.json, you will want to add an “extra” section if it doesn’t already exist. Composer-patches will take the packages listed in “patches” and try to apply any listed patches. In our example above, the package we are patching is “drupal/core”. Patches are declared as follows:

“Patch description”: “path to patch file”

This information will be printed on the command line while Composer tries to update the package, which makes it important to summarize the patch’s purpose well. If you would like to see what this looks like in the wild, take a look at our distribution Rain, which leverages a couple of contrib patches.

After manually updating composer.json, it is always a good idea to run composer validate to confirm the JSON syntax is right. If you get the green success message, run composer update drupal/[projectname], e.g. composer update drupal/core, to have the patch applied.

You will know that the patch is applied based on the output:

patch output

As you can see, the package being patched is removed, added back, and then the patch is applied.

Note: Sometimes I feel like I have to give Composer a nudge. Always feel comfortable deleting /core, /vendor, or /modules/contrib, but if you delete composer.lock, know that your dependencies could update based on your constraints. Composer.json tracks our package dependencies at certain version constraints, while composer.lock is the recipe of computed versions based on those constraints. I have found myself running the following:

rm -rf core && rm -rf modules/contrib && rm -rf vendor
composer install

Remote Patches

When possible, we should open issues on Drupal.org and post patches there. That way, the community can work together to solve a problem, and usually you’ll get a more reliable, lasting solution. Think about it this way: would you rather have only you or your team review a critical patch to your project, or hundreds of developers?

To make composer-patches grab a remote patch make the following changes:
    ...
    "extra": {
      "patches": {
        "drupal/core": {
          "#2925890-10: Invalid config structures": "https://www.drupal.org/files/issues/2018-09-26/2925890-10.patch"
        }
      }
    }

The only change here is that rather than the path to the local patch, we have substituted in the URL of the patch. This will produce a similar success message when applied correctly:

remote patches

Tips 

So far, I’ve shown you how to get going with the composer-patches project, but there are a lot of settings/plugins that can elevate your project. A feature I turn on for almost all sites is exiting on patch failure, because it is a big deal when a patch fails. If you too want to turn this feature on, add the following line to the “extra” section of your composer.json:

"composer-exit-on-patch-failure": true,

I have also found it helpful to add a link back to the original issue in the composer.json patch declaration. Imagine working on a release when one of your patches fails, but the only reference you have to the issue is the patch file URL. It is times like these that a link to the issue can make your day. If we made the same change to our earlier example, it would look like the following:

 "drupal/core": {
          "#2925890-10: Invalid config structures (https://www.drupal.org/project/drupal/issues/2925890)" : "https://www.drupal.org/files/issues/2018-09-26/2925890-10.patch"
        }

Conclusion

Composer-patches is a critical package for any Drupal project managed by Composer. In this blog I showed you how to get started with the project and some of the tips and tricks I’ve learned along the way. How does your team use composer-patches? Do you have a favorite setting that I didn’t mention? Feel free to drop a comment and share what works for you and your team.

Aug 08 2019
Aug 08

If you have a local business — a restaurant, a bar, a dental clinic, a flower delivery service, a lawyer's office, and so on — you will benefit immensely from a strong online presence. In this post, we will discuss why Drupal 8 is a great choice to build a local business website. Read on to see how numerous Drupal 8’s benefits will play in favor of your local business.

Some stats about why your local business needs a website

Local businesses once relied on word-of-mouth marketing. But the new digital era has changed the game. Customers widely use local Google search to find places, services, or products, trust online customer reviews, and so on.

So consider these stats about how things are going for local businesses in the digital world:

  • Users rely on search engines for finding local information. According to Google’s study, 4 in 5 people do so.
  • Local searches are also very goal-oriented. The same Google study says that 50% of users who performed a local search on their smartphones visited a store within the next 24 hours.
  • Mobile local searches are growing fast. According to Statista, mobile local searches are forecast to reach 141.9 billion in 2019, compared to 66.5 billion in 2014. At the same time, desktop searches are even slightly dropping (62.3 and 66.5 billion, respectively).
  • Smartphone shoppers love local search. Statista also reports that 82% of smartphone shoppers in the US have used their device for a local search with the “near me” keyword as of July 2018.
  • Customers read reviews for local businesses. According to the study by BrightLocal, 86% of people do so.

Reasons to build a local business website on Drupal 8

SEO-friendliness with plenty of useful modules

First of all, local businesses shouldn’t miss their unique opportunity to make the most of SEO. Google has special approaches to local search. When users search by adding a city name or the “near me” keyword, Google lists the best results near the top of the SERPs in a variety of rich ways. Among them:

  • the Knowledge Panel
  • locations on the Google Map
  • carousels with images, news, reviews, etc.

Moreover, users are able to get all the necessary information, like your business hours, contacts, reviews, directions, and appointment booking, without even clicking through to your website (zero-click SERPs).

Local Google search on a mobile phone

How to get to these results? Among the recommendations are:

  • provide detailed information in Google My Business listing
  • have an optimized Knowledge Graph for your website
  • optimize content using local keywords
  • optimize content so it fits Google’s rich snippets
  • and, of course, follow overall general best SEO practices 

Here is where the Drupal 8 CMS can be a very helpful assistant. In addition to being SEO-friendly out of the box, Drupal 8 has a wealth of useful SEO modules for various purposes, such as Metatag, Pathauto, Redirect, Simple XML Sitemap, and many more.

Content easy to manage

Unique and relevant content, regularly updated and optimized with local keywords, is one of your most important local SEO secrets. The richer the content is, the richer it looks in Google local search results. In addition, trimming your content to fit Google’s rich snippets is the key to optimizing your website for voice search.

In Drupal 8, it is easy to create, edit, and present content in attractive ways. Drupal 8 offers you:

  • quick edits directly on the page
  • the Media Library to easily enrich your content with images, videos, and audios
  • handy content previews
  • drag-and-drop page layouts with Layout Builder
  • Drupal Views grids, slideshows, and carousels for the attractive content presentation
  • content revision history
  • mobile-friendly admin interfaces
  • content moderation workflows

and much more.

Media Library in Drupal 8

Mobile optimization out of box

In addition to the above mobile statistics, here is more: the mobile share of organic search engine visits reached 59% in 2019, up from 27% in 2013. So your successful local business absolutely needs a mobile-friendly website.

Here is where Drupal 8 wins totally. It has been built around a mobile-first approach. The CMS features built-in modules for creating responsive web design: Responsive Image and Breakpoint.

The responsive web design technique allows your website pages to adapt to any user’s screen by showing a different layout. The page elements resize, change their position, or disappear to provide the smoothest viewing experiences for everyone. 

Multi-language to attract more guests

Let’s suppose you run a local business in your country in your local language. Consider adding English as an international language, or another language based on your tourist audience. See how you can attract your city’s guests as they turn to Google search.

Drupal 8 is the best option for multilingual websites and allows you to easily add as many languages as you wish. It supports a hundred of them out of the box, with interface translations included.

Thanks to the Drupal 8 Multilingual Initiative (D8MI), Drupal 8 has four powerful modules responsible for every aspect of translation.

High accessibility standards

According to the CDC (Centers for Disease Control and Prevention), 26% (1 in 4) of adults in the US have some form of disability. This is a quarter of your potential customers. Moreover, they are the ones who may need your local services more than others, such as local delivery.

To be accessible to all users without barriers, your website should adhere to accessibility standards. Drupal 8 has a focus on them and offers advanced accessibility features. These include the use of WAI-ARIA attributes, accessible inline form errors, aural alerts, obligatory ALT text for images, and much more.

Presence in multiple channels

Local businesses often benefit from a digital presence in multiple channels. Imagine, for example, a pizza delivery mobile app connected to your website.

Drupal 8 offers amazing opportunities to exchange your website’s data with third-party applications. It has five powerful built-in modules for creating REST APIs and sharing Drupal data in JSON, XML, or other formats needed by the apps.
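
As a rough sketch (module machine names taken from Drupal 8.7 core; verify against your version), the relevant web services modules could be enabled with Drush like this:

$ drush en rest serialization hal basic_auth jsonapi -y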

Easy social media integration

It’s no longer possible to successfully manage a business without a social media presence. Drupal 8 allows third-party integration with any systems, and social networks are not an exception. 

It is incredibly easy to add social media icons to your website pages, provide social share buttons for your content, embed social media feeds, and much more. Social media modules in Drupal 8 are very numerous and useful. 

Among them, Easy Social, AddToAny Share Buttons, Social media share, Social Media Links Block and Field, and many more. In addition, there are network-specific modules like Video Embed Instagram, Pinterest Hover button, Facebook Album, and plenty of others.

Social media posts can also be embedded in your content using the Drupal 8 core Media module as a basis. 

Build a local business website on Drupal 8 with us!

The above reasons to build a local business website on Drupal 8 are just a few of a thousand. Contact our Drupal team and let’s discuss in more detail how we can help your local business flourish!

Aug 08 2019
Aug 08

Agiledrop is highlighting active Drupal community members through a series of interviews. Now you get a chance to learn more about the people behind Drupal projects.

In our latest interview, Ricardo Amaro of Acquia reveals how his discovery of Drupal has enabled him to work on projects he enjoys and that make a meaningful impact. Read on to learn more about his contributions and what the Drupal community in Portugal is like. 

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

My name is Ricardo Amaro. I live with my wife and 2 kids in Lisbon, Portugal. I’ve been working for Acquia since 2011 and was recently promoted to Principal Site Reliability Engineer, where we deal with all the challenges of helping ~55k Drupal production sites grow every day.

I’ve been contributing in several ways to the Drupal community, and sometimes that effort goes beyond it. An example of that is co-authoring the “Seeking SRE” book (O’Reilly) with my chapter about Machine Learning for SRE, since that main idea came out of a presentation I did at DrupalCon Vienna 2017 explaining how automation and machine learning could help increase reliability on Drupal sites.

Other projects I’ve initiated in the past within the Drupal community include:

On the local front, I founded the Portuguese Drupal Association 8 years ago and I am its current elected president. That same year we organized our first DrupalCamp Lisbon 2011. Nowadays we organize DrupalDays and Camps all over the country and meet regularly on Telegram and video conferences. Last year we organized DrupalDevDays Lisbon 2018, which had a really good turnout for the entire community.

My main drivers are a passion for Free Software and Digital Rights. That started back in the 90’s when I found myself struggling with the proprietary/closed software available at the time; installing Linux/Slackware in 1994 was an enlightening moment for my own question, “isn’t there a better option?”. But I only switched all my machines to Linux in 2004, and that’s what I’ve used up to now. I believe the GNU/Free Software ecosystem, where Drupal was able to grow, is fragile and needs to be nourished by all of us.

I have a degree in Arts and a second one in Computer Science & Engineering, and I’m now taking a master’s in Enterprise Information Systems.

Before Acquia, I worked both in the public sector and in the private sector in Portugal, applying Agile techniques and encouraging the DevOps culture. I’ve managed teams, development projects and operations also in South Africa and around Europe. 

2. When did you first come across Drupal? What convinced you to stay, the software or the community, and why?

I came across Drupal in 2008, when searching for an open source CMS in order to create some media publishing sites for the company I was working for at that time. My role as IT Director was not easy, since the company was struggling with funding, so Drupal 6 was an amazing tool that enabled us to grow several of the sites and, in particular, create a self-service area on our main classified advertisement sites.

I found the Drupal Portuguese community at that time struggling to have a legal entity and to be able to grow and organize events inside the country. Portugal has always been mostly monopolized by large corporations like Microsoft and Oracle, while Free software has always been seen as “experimental” solutions, at best.

I took upon myself the commitment to bring the local Drupal community the pride and success they all deserve. I’ve grown a friendship for each and every person in our community and now I couldn't imagine myself without them, as I couldn't imagine myself without Drupal.

3. What impact has Drupal made on you? Is there a particular moment you remember?

Putting it simply: Drupal changed my life! Drupal brought justification to my values and aspirations. I honestly couldn’t have imagined, in a world that is more and more inclined to monopolistic visions, being able to exercise and contribute to the Free Software community and make a living out of it.

The particular moment I felt this most strongly was around 2011, when some decision makers from one of those large corporations asked me if I could bring my Drupal presentation to them, because they wanted to know what this Drupal thing was all about. So I organized a few of my usual slides and took them with me.

This was in a very fancy villa in one of the most expensive areas near Lisbon. I did my pitch, and by the end they seemed very impressed with what Drupal had to offer for free: so many powerful features, so much commitment. Naturally, one of their questions was how they could make their proprietary software, which had started on a descending curve, embark on this positive wave of growth. My obvious answer was “release your code as open source”. They looked at me in disbelief, of course, and still invited me for a boat ride, which I declined politely.

I went back home and from time to time thought about that episode until it started to look like a mirage in the past. To my surprise, in the most recent years, that same corporation has started releasing open source code, created community projects and apparently changed their minds… 

4. How do you explain what Drupal is to other, non-Drupal people?

Drupal lets you turn big ideas into digital realities. An innovative web platform for creating engaging digital websites and experiences. Drupal is the world's most popular enterprise-class web content management system. It’s developed by more than 46,000 people that are part of the 1.3 million users registered on drupal.org.

Last year we had about 1,000 companies with 8,000 code contributions and this is reflected in millions of websites with 12% market share, plus an annual growth of 51%. If these people still had some more time I would present them the Drupal Pitch Deck. :)

5. How did you see Drupal evolving over the years? What do you think the future will bring?

From my perspective, Drupal has always been growing and even making positive bonds with other Free Software initiatives out there. One of the most interesting moments happened last year at Drupal Europe 2018 (11-14 Sept), where the founders of RocketChat and Nextcloud met and ended up announcing a partnership on the 17th of September…

We should follow that example and support more interaction and collaboration with other projects in our ecosystem. For starters, we should make an effort to use tools like RocketChat (see https://drupalchat.me) and grow awareness that companies like Slack have zero, or even less, to do with our values; we don’t gain anything by crossing our arms and letting people be driven there. The future is open, the future is community and inclusion.

6. What are some of the contributions to open source code or to the community that you are most proud of?

For sure the ongoing effort that I put into the Portuguese Drupal Association to keep people motivated, things organized, and events happening is the first one. The highlight of this was DrupalDevDays Lisbon 2018. The second one was DrupalCI, which had a major impact on Drupal 8’s final release.

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor. 

I’m most excited about containers and the power behind them. That is only possible because there is the GNU/Linux operating system supporting them. Kubernetes in particular is also of interest, since it follows the reasoning of auto-scalability that we need for distributed systems. Drupal is flying into the future already with its headless/decoupled capabilities. I’m seeing containers even being applied to support machine learning algorithms and neural networks.

Another thing that I’m particularly interested in is investigating better ways to make communities grow and ensure that they have the necessary tools to make that happen.  

My personal endeavor is, in the end, to see my kids grow in a healthy environment, rich in possibilities, and for that I need to keep information available for them and help the Free Software ecosystem stay alive. After all, what else is there that can guarantee our future human independence from “blackboxed” technology? If you can’t see, study or change the source, what role is left for you? 

 Drupal DevDays Lisbon 2018

Aug 08 2019
Aug 08

Back in early 2010, Jason Grigsby pointed out that simply setting a percentage width on images was not enough; you needed to resize these images as well for a better user experience. He pointed out that if you served correctly sized images on the original responsive demo site, more than 75% of the weight of those images could be shaved off on smaller screens.

Ever since, the debate on responsive images has evolved around the question of the best way to render perfect, responsive images without any hassle.

We all know how Drupal 7 does a great job of handling responsive images with its contributed modules. With Drupal 8, things are even better!

Responsive Images in Drupal 8

The Responsive Image module in Drupal 8 provides an image formatter that maps breakpoints to image styles in order to render flawless responsive images using the picture tag.

When we observe how Drupal 8 handles responsive images when compared to Drupal 7, some of the features to be noted are:

Drupal 7 relies on the contributed Picture module, which in the latest version of the CMS is known as Responsive Image. In Drupal 8, the Responsive Image and Breakpoint modules are part of core.

The Problem

One of the major problems with images in web development is that browsers do not know what size an image will render at, relative to the viewport of different screens, until the CSS and JavaScript are loaded.

However, the browser does know about the environment in which the images render, including the size of the viewport and the resolution of the screen.

The Solution 

As mentioned in previous sections, responsive images use the picture element, which has sizes and srcset attributes that play a major role in letting the browser choose the best image based on the image style selections.

So Drupal 8 has done a great job by providing the Responsive Image module in core. With it, smaller images are downloaded for devices with lower screen resolutions, resulting in better website load times and improved performance.

Steps to reproduce

  1. Enable the Responsive Image and Breakpoint modules.
  2. Set up the breakpoints for your project’s theme.
  3. Set up the image styles for responsive images.
  4. Create a responsive image style for your theme.
  5. Assign the responsive image style to an image field.

Enable Responsive images and breakpoint module

Since it is part of Drupal 8 core, no extra module is required. All you have to do is enable the Responsive Image module; the Breakpoint module is installed with the standard profile. If it is not, enable the Breakpoint module as well.

To enable the module, go to Admin -> Extend, select the module, and install it.

extend page

Setup the breakpoints for your project's theme
 

breakpoints

Setting up the theme’s breakpoints is the most important part of making your site responsive.


If you are using a core theme like Bartik, Seven, Umami, or Claro, you will already have the breakpoints file and you don’t have to create any new ones.

However, if you are using a custom theme for your project, it is important that you define the breakpoints in "yourthemename.breakpoints.yml", which lives in your theme directory, usually "/themes/custom/yourthemename".

Each breakpoint assigns images to a media query. For example, images rendered on mobile might be smaller, i.e. have a width of less than 768px, while medium screens will have a width between 768px and 1024px.


Each breakpoint will have: 

label: A human-readable label for the breakpoint.
mediaQuery: The media query defining the viewport within which the images are rendered.
weight: Used for the order of display.
multipliers: A measure of the viewport’s device resolution; normally 1x is used for standard displays and 2x for retina displays.
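
For a custom theme, a minimal sketch of such a file could look like this (theme name and breakpoint values are hypothetical; adjust the media queries to your design):

# yourthemename.breakpoints.yml
yourthemename.mobile:
  label: Mobile
  mediaQuery: '(max-width: 767px)'
  weight: 0
  multipliers:
    - 1x
yourthemename.medium:
  label: Medium
  mediaQuery: '(min-width: 768px) and (max-width: 1023px)'
  weight: 1
  multipliers:
    - 1x
    - 2x
yourthemename.large:
  label: Large
  mediaQuery: '(min-width: 1024px)'
  weight: 2
  multipliers:
    - 1x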

Setting up the image styles for responsive images

Now we have to create an image style for each of the breakpoints. You can configure your own Drupal 8 image styles at Admin -> Config -> Media -> Image styles.

Click ‘Add image style’. Give a valid name to your image style and use the scale and crop effect, which will provide cropped images. If the images appear stretched, add multiple image styles for the different viewports.

add image style

Creating a responsive image style for your theme 

This is where you provide the multiple image style options to the browser and let the browser choose the best out of the lot. 

responsive image style


To create a new responsive Drupal 8 image style, navigate to:
Home -> Admin -> Config -> Media -> Responsive image styles, and click on ‘Add responsive image style’.

Give a valid name to your responsive image style, select the breakpoint group (choose your theme), and assign the image styles to the breakpoints listed.

There are multiple options for the image style configurations

  • Choose single image style: select the one image style that will be rendered on that particular screen size.
  • Choose multiple image styles: select several image styles and also specify the viewport width for each one.

Lastly, there is an option to select a fallback image style. The fallback image style should only appear on the site if an error occurs.

fallback responsive image

Assigning the responsive image style to an image field 

  • Once all the configurations are done, move to the image field to assign the responsive image style.
  • To do that, go to the field’s ‘Manage display’ settings and select the responsive image style that we created.
  • Add content and see the results on the page with the responsive image style applied.

Final Results

The image at a minimum width of 1024px (for large devices).

The image at a minimum width of 768px (for medium devices).

The image at a maximum width of 767px (for small devices).

Aug 08 2019
Aug 08

With Dries’ latest announcement of the launch of Drupal 9 in 2020, enterprises urgently need to upgrade from Drupal 7 and 8 to version 9.

Drupal 7 and 8 will reach their end of life in November 2021, and those who wish to stick to previous versions might face security challenges.

Eager but unsure what the process would be like? This comprehensive guide aims to simplify the entire Drupal migration process for easy implementation.

Getting Started with the Migration Process

When a site is upgraded to Drupal 7, the old database is upgraded in place to the Drupal 7 structure. However, a different approach is followed when the site is upgraded from Drupal 7 to Drupal 8: content and configuration are migrated into a new site.

Upgrading D7 to D8

Step 1: Take back-up of your website

Start the migration process by making a local copy of your website. As making changes to the live site is not recommended, it is best practice to keep all data safe by taking a backup locally on your machine.

Step 2: Install fresh new site

Install a new Drupal 8 site by downloading the latest version of Drupal 8 from drupal.org.

Drupal 8.7 is the latest release.

Install the latest release of Drupal 8, along with its dependencies, using Composer.
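
For example, a common Composer-based approach at the time of writing uses the drupal-composer project template (the directory name my_site is a placeholder):

composer create-project drupal-composer/drupal-project:8.x-dev my_site --no-interaction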

Step 3: Prepare your Drupal 8 website for the migration

Setup a local Drupal 8 website on your machine as a destination website for the migration process.

Step 4: Verifying the modules are in core and enabled

Ensure the Migrate, Migrate Drupal, and Migrate Drupal UI modules are enabled on your Drupal 8 site. This can be done by navigating to the ‘Extend’ tab of your website and ensuring all of the above modules are present in core.

Now, check the three modules and click the ‘Install’ button at the bottom of the page.


Step 5: Upgrade your website

Go to your website and append /upgrade to the website address (www.<yourwebsitename>.com/upgrade), then follow the instructions and click the ‘Continue’ button.


Step 6: Enter website details

On clicking ‘Continue’, a screen comes up that asks for the website credentials, database location, and other details.


Step 7: Start the migration

If the database credentials for your source database are correct, the upgrade review page will appear in the Migrate UI. It will show a summary of the upgrade status for all installed modules on the old site.

As a site builder, you should carefully review the modules that will not be upgraded and evaluate whether your Drupal 8 site will work without them.

Then click the ‘Perform Upgrade’ button.

Tip: Don’t proceed with the actual upgrade without first installing the missing Drupal 8 modules.

Tip: If you get ID conflict warnings

If you manually create a node on the Drupal 8 site before upgrading and the source Drupal 6/7 site has a node with the same ID, the migration system will overwrite the node that was manually created in Drupal 8.

If conflicting IDs are detected, a warning will be shown. You can either ignore it and risk losing data, or abort and take an alternative approach.

Depending on the size and types of content/configuration on the source site, the upgrade may take a very long time. Once the process is finished, you are directed to the site’s front page with messages summarizing the results.

Upgrading D8 to D9

When it comes to migrating from Drupal 8 to Drupal 9, the process is quite simple. As D9 is an extended version of D8, it is much easier to upgrade. Read the complete guide to the Drupal 8 to Drupal 9 upgrade to understand the whole process.

Alternate Method: Migration using Drush Command

Upgrading to Drupal 8 using Drush is useful when migrating complex sites, as it allows you to run migrations one by one and to roll them back.

If you are using Composer to build your Drupal 8 site, then you may already have Drush installed. However, if not, then you can install Drush from command line as follows:

composer require drush/drush

To migrate using Drush, you need to download and enable the contributed modules Migrate Upgrade, Migrate Plus, and Migrate Tools.
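
If you manage the site with Composer, a sketch of getting these modules in place could look like this (adjust version constraints as needed):

composer require drupal/migrate_upgrade drupal/migrate_plus drupal/migrate_tools
drush en migrate_upgrade migrate_plus migrate_tools -y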

Ensure Drush is up to date (with the command “drush --version”).

Now it’s time to start the migration with the following Drush command:

“drush migrate-upgrade --legacy-db-url=mysql://user:password@server/db --legacy-root=http://example.com --configure-only”

Replace the following values with your own in the above command:

  • ‘user’ is the username of the source database
  • ‘password’ is the source database user’s password
  • ‘server’ is the source database server
  • ‘db’ is the source database

Now check your migration status (with the command “drush migrate-status”)

Import the data with the command “drush migrate-import --all”.

After a successful migration, go to Structure -> Migrations to check the status of the migration.


Click the ‘List migrations’ button next to the migration group ‘import from drupal 7’ to view all of the migrated data.

 


 

After clicking it, all the upgraded data will be visible. Click the ‘Execute’ button and the data will be imported.


Once you click the ‘Execute’ button, you will be redirected to a page with several options.


The ‘Import’ button imports all previously unprocessed records from the source into destination Drupal objects.

With this, we come to the end of our Drupal migration process. If the above steps are followed carefully, a website can be easily migrated to the latest version.

Srijan has more than 35 Acquia certified Drupal experts with expertise in migrating projects to newer versions of Drupal. Contact us to seamlessly get started with the latest Drupal version.

 

Aug 08 2019
Aug 08

We have already covered two of many ways to migrate images into Drupal. One example allows you to set the image subfields manually. The other example uses a process plugin that accomplishes the same result using plugin configuration options. Although these are valid ways to migrate images, they have an important limitation: the files and images are not removed from the system upon rollback. In the previous blog post, we talked more about this topic. Today, we are going to perform an image migration that will clean up after itself when it is rolled back. Note that in Drupal images are a special case of files. Even though the example migrates images, the same approach can be used to import any type of file. This migration will also serve as the basis for the explanation of migration dependencies in the next blog post.

Code snippet for file entity migration

File entity migrate destination

All the examples so far have been about creating nodes. The Migrate API is a full ETL framework able to write to different destinations. In the case of Drupal, the target can be other content entities like files, users, taxonomy terms, comments, etc. Writing to content entities is straightforward. For example, to migrate into files, the destination section is configured like this:

destination:
  plugin: 'entity:file'

You use a plugin whose name is entity: followed by the machine name of your target entity. Other possible values are user, taxonomy_term, and comment. Remember that each migration definition file can only write to one destination.
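
For example, a migration writing to users would set the destination like this (a sketch; the source and process sections are omitted):

destination:
  plugin: 'entity:user'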

Source section definition

The source of a migration is independent of its destination. The following code snippet shows the source definition for the image migration example:

source:
  constants:
    SOURCE_DOMAIN: 'https://agaric.coop'
    DRUPAL_FILE_DIRECTORY: 'public://portrait/'
  plugin: embedded_data
  data_rows:
    - photo_id: 'P01'
      photo_url: 'sites/default/files/2018-12/micky-cropped.jpg'
    - photo_id: 'P02'
      photo_url: ''
    - photo_id: 'P03'
      photo_url: 'sites/default/files/pictures/picture-94-1480090110.jpg'
    - photo_id: 'P04'
      photo_url: 'sites/default/files/2019-01/clayton-profile-medium.jpeg'
  ids:
    photo_id:
      type: string

Note that the source contains relative paths to the images. Eventually, we will need an absolute path to them. Therefore, the SOURCE_DOMAIN constant is created to assemble the absolute path in the process pipeline. Also, note that one of the rows contains an empty photo_url. No file can be created without a proper URL. We will accommodate this in the process section. An alternative could be to filter out invalid data in a source clean-up operation before executing the migration.

Another important thing to note is that the row identifier photo_id is of type string. You need to explicitly tell the system the name and type of the identifiers you want to use. The configuration for this varies slightly from one source plugin to another. For the embedded_data plugin, you do it using the ids configuration key. It is possible to have more than one source column as an identifier. For example, if the combination of two columns (e.g. name and date of birth) is required to uniquely identify each element (e.g. person) in the source.
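
A sketch of such a composite identifier, using the hypothetical name and date_of_birth columns, could look like this:

ids:
  name:
    type: string
  date_of_birth:
    type: string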

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD migration dependencies introduction whose machine name is ud_migrations_dependencies_intro. The migration to run is udm_dependencies_intro_image. Refer to this article to learn where the module should be placed.

Process section definition

The fields to map in the process section will depend on the target. For files and images, only one entity property is required: uri. It has to be set as a reference to the file using stream wrappers. In this example, the public stream (public://) is used to store the images in a location that is publicly accessible by any visitor to the site. If the file was already in the system and we knew the path, the whole process section for this migration could be reduced to two lines:

process:
  uri: source_column_file_uri

That is rarely the case though. Fortunately, there are many process plugins that allow you to transform the available data. When combined with constants and pseudofields, you can come up with creative solutions to produce the format expected by your destination.

Skipping invalid records

The source for this migration contains one record that lacks the URL to the photo. No image can be imported without a valid path. Let’s account for this. In the same step, a pseudofield will be created to extract the name of the file out of its path.

psf_destination_filename:
  - plugin: callback
    callable: basename
    source: photo_url
  - plugin: skip_on_empty
    method: row
    message: 'Cannot import empty image filename.'

The psf_destination_filename pseudofield uses the callback plugin to derive the filename from the relative path to the image. This is accomplished using the basename PHP function. Also, taking advantage of plugin chaining, the system is instructed to skip processing the row if no filename could be obtained, for example, because an empty source value was provided. This is done by the skip_on_empty plugin, which is also configured to log a message indicating what happened. In this case, the message is hardcoded. You can make it dynamic to include the ID of the row that was skipped using other process plugins. This is left as an exercise to the curious reader. Feel free to share your answer in the comments below.

Tip: To read the messages log during any migration, execute the following Drush command: drush migrate:messages [migration-id].

Creating the destination URI

The next step is to create the location where the file is going to be saved in the system. For this, the psf_destination_full_path pseudofield is used to concatenate the value of a constant defined in the source and the file name obtained in the previous step. As explained before, order is important when using pseudofields as part of the migrate process pipeline. The following snippet shows how to do it:

psf_destination_full_path:
  - plugin: concat
    source:
      - constants/DRUPAL_FILE_DIRECTORY
      - '@psf_destination_filename'
  - plugin: urlencode

The end result of this operation would be something like public://portrait/micky-cropped.jpg. The URI specifies that the image should be stored inside a portrait subdirectory inside Drupal’s public file system. Copying files to specific subdirectories is not required, but it helps with file organization. Also, some hosting providers might impose limitations on the number of files per directory. Specifying subdirectories for your file migrations is a recommended practice.

Also note that after the URI is created, it gets encoded using the urlencode plugin. This will replace special characters with an equivalent percent-encoded sequence. For example, é and ç will be converted to %C3%A9 and %C3%A7 respectively. Space characters will be changed to %20. The end result is an equivalent URI that can be used inside Drupal, as part of an email, or via another medium. Always encode URIs when working with Drupal migrations.

Creating the source URI

The next step is to assemble an absolute path for the source image. For this, you concatenate the domain stored in a source constant and the image's relative path stored in a source column. The following snippet shows how to do it:

psf_source_image_path:
  - plugin: concat
    delimiter: '/'
    source:
      - constants/SOURCE_DOMAIN
      - photo_url
  - plugin: urlencode

The end result of this operation will be something like https://agaric.coop/sites/default/files/2018-12/micky-cropped.jpg. Note that the concat and urlencode plugins are used just like in the previous step. A subtle difference is that a delimiter is specified in the concatenation step. This is because, contrary to the DRUPAL_FILE_DIRECTORY constant, the SOURCE_DOMAIN constant does not end with a slash (/). This was done intentionally to highlight two things: first, it is important to understand your source data; second, you can transform it as needed by using various process plugins.

Copying the image file to Drupal

Only two tasks remain to complete this image migration: download the image and assign the uri property of the file entity. Luckily, both steps can be accomplished at the same time using the file_copy plugin. The following snippet shows how to do it:

uri:
  plugin: file_copy
  source:
    - '@psf_source_image_path'
    - '@psf_destination_full_path'
  file_exists: 'rename'
  move: FALSE

The source configuration of the file_copy plugin expects an array of two values: the URI to copy the file from and the URI to copy the file to. Optionally, you can specify what happens if a file with the same name exists in the destination directory. In this case, we are instructing the system to rename the file to prevent name clashes. This is done by appending the string _X to the filename, before the file extension. The X is a number starting with zero (0) that keeps incrementing until the filename is unique. The move flag is also optional. If set to TRUE, it tells the system that the file should be moved instead of copied. As you can guess, Drupal does not have access to the file system on the remote server. The configuration option is shown for completeness, but it does not have any effect in this example.
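
For instance, if on a re-import you would rather reuse a file that is already in place than create renamed copies, the plugin accepts other values for file_exists. A sketch, assuming a Drupal version where the 'use existing' option is supported:

uri:
  plugin: file_copy
  source:
    - '@psf_source_image_path'
    - '@psf_destination_full_path'
  file_exists: 'use existing'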

In addition to downloading the image and placing it inside Drupal’s file system, the file_copy plugin also returns the destination URI. That is why this plugin can be used to assign the uri destination property. And that’s it, you have successfully imported images into Drupal! Clever use of the process pipeline, isn’t it? ;-)

One important thing to note is that an image’s alternative text, title, width, and height are not associated with the file entity. That information is actually stored in a field of type image. This will be illustrated in the next article. To reiterate, the same approach used to migrate images can be used to migrate any file type.

Technical note: The file entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the File class defining the entity. Note that more properties can be available higher up in the class hierarchy. Also, this entity does not have multiple bundles like the node entity does.
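
As an illustration, the following sketch also maps the filename and marks the file as permanent. The status and uid mappings are assumptions added for demonstration purposes; they are not part of the example module:

process:
  # Pseudofields (psf_*) defined as shown earlier in this post.
  uri:
    plugin: file_copy
    source:
      - '@psf_source_image_path'
      - '@psf_destination_full_path'
  filename: '@psf_destination_filename'
  # A status of 1 marks the file permanent so it is not garbage collected.
  status:
    plugin: default_value
    default_value: 1
  # Assign ownership to user 1 (illustrative assumption).
  uid:
    plugin: default_value
    default_value: 1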

What did you learn in today’s blog post? Had you created file migrations before? If so, had you followed a different approach? Did you know that you can do complex data transformations using process plugins? Did you know you can skip the processing of a row if the required data is not available? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Aug 07 2019
Aug 07

Decoupled Drupal 8 and GatsbyJS webinar

How did the City of Sandy Springs, GA improve information system efficiency with a unified platform? Join our webinar to see how we built this city on decoupled Drupal 8, GatsbyJS, and Netlify.

We'll explore how a “build-your-own” software approach gives Sandy Springs the formula for faster site speed and the ability to publish messages across multiple content channels — including new digital signage.

What You'll Learn

  • The City of Sandy Springs’ challenges and goals before adopting Drupal 8

  • How Sandy Springs manages multichannel publishing across the website, social media, and a network of digital signage devices

  • Benefits gained from Drupal 8 and GatsbyJS, including a fast, reliable site, lower hosting costs, and ease of development for their team

Speakers

Jason Green, Visual Communications Manager at City of Sandy Springs, and Mediacurrent Director of Front End Development Zack Hawkins share an inside look at the project.

Registration

Follow the City of Sandy Springs on the path to government digital innovation. Save your seat today!

Aug 07 2019
Aug 07

Lately, you can often hear that Drupal 9 is coming. Drupal 8.7 was released in May and Drupal 8.8 is planned for December 2019. At the same time, D9 is becoming a more and more hotly discussed topic in the Drupal world.

Drupal 9’s arrival perfectly fits one of a thousand memes inspired by the Game of Thrones quote — “Brace yourself, winter is coming.” But is there a need to brace yourself because of D9? Well, it is promised to arrive easily and smoothly. Still, some important preparations are needed. Let’s review them in this post.

Drupal 9 is coming in June 2020

The year of the D9 release became known back in September 2018. Drupal creator Dries Buytaert announced it at Drupal Europe in Darmstadt. Later on, in December, the exact date was announced — Drupal 9 is coming on June 3, 2020!

What will happen to Drupal 7 and Drupal 8? Both D7 and D8 will reach their end-of-life in November 2021. This means the end of official support and no more functional or security updates. Some companies will offer extended commercial support, but it’s way better to keep up with the times and upgrade. All the development ideas and innovations will be focused on “the great nine.”

The Drupal creator explained the planned release and end-of-life dates. In a nutshell, D8’s major dependency is the Symfony 3 framework, which reaches end-of-life in November 2021. Drupal 9 will ship with Symfony 4/5. So the Drupal team has to end-of-life Drupal 8 at that time, but they want to give website owners and developers enough time to prepare for Drupal 9 — hence the June 2020 release decision.

Well, according to this timing, you need to be on Drupal 9 by November 2021. In the meantime, it is necessary to prepare.

Preparations for the coming of Drupal 9

1. How to prepare for Drupal 9 if you are on Drupal 8

Hearing that Drupal 9 is coming, many D8 website owners might wonder: “Hey, we have just had an epic upgrade from Drupal 7 to Drupal 8, and here we go again!”

Keep calm — everything is on the right track. Your upgrade from Drupal 8 to Drupal 9 should be instantaneous. D9 will look like the latest version of D8, but without deprecated code and with third-party dependencies updated (Symfony 4/5, Twig 2, and so on).

Dries Buytaert's quote: we are building Drupal 9 in Drupal 8

There are two rules of thumb regarding the Drupal 9 preparations:

1) Using the latest versions of everything

To have a quick upgrade from Drupal 8 to Drupal 9, you need to stick to the newest versions of the core, modules, and themes. According to Gábor Hojtsy, Drupal Initiative Coordinator, you are gradually becoming a D9 user by keeping your D8 website up-to-date.

Gabor Hojtsy's quote: you become a Drupal 9 user by keeping up to date with Drupal 8.

“The great eight” has adopted a continuous innovation model, which means a new minor version every half a year. Our Drupal team is ready to help you with regular and smooth updates.

2) Getting rid of deprecated code

It is also necessary to keep your website free of deprecated code. Deprecated code refers to APIs and functions that have newer alternatives and are marked as deprecated, or obsolete.

Any module that does not use deprecated code will just continue working in Drupal 9, Dries said.

Dries Buytaert's quote: without deprecated code websites will be ready for Drupal 9

How to discover deprecated code? Here are a few tools that check everything, including custom modules:

  • The command-line tool Drupal Check, which scans your code for deprecations
  • The Upgrade Status contributed module, which offers a graphical interface to check your modules and themes and get a summary

Many deprecations are very easy to replace. You can always rely on our development team to perform a thorough check and clean up deprecations.

2. How to prepare for Drupal 9 if you are on Drupal 7

The best way to prepare for the coming of Drupal 9 is to upgrade to Drupal 8 now. Even if this might sound like a marketing mantra to you, it has very practical and pragmatic grounds.

There are plenty of reasons to upgrade and no reason to skip Drupal 8. These are the words from Dries Buytaert's presentation.

Dries Buytaert's presentation: there are many reasons to upgrade to Drupal 8 now

You will enjoy a wealth of Drupal 8’s business benefits all the time before 2021. And when Drupal 9 arrives, you will just snap your fingers and move ahead to it!

Gabor Hojtsy's quote: skipping Drupal 8 does not bring benefits

Don’t worry, despite the immense difference between D7 and D8, the 7-to-8 upgrades are getting easier every day. Developers have studied the D7-to-D8 upgrade path well. In addition, very helpful migration modules have recently reached stability in D8 core.

Your upgrade to Drupal 8 will depend on your website’s custom functionality and overall complexity. In any case, our Drupal developers will take care of making it smooth. 

So make up your mind and upgrade now — welcome to the innovative path that will lead you further to “the great 9.”

Plan for Drupal 9 with us!

Yes, Drupal 9 is coming with a sure step. No matter which version of Drupal you are using now, we can help you make the right Drupal 9 preparation plan — and fulfill it, of course. Just contact our Drupal experts!

Aug 07 2019
Aug 07

Website Refresh: The Only Thing Missing is a Purring Sound

The Animal Humane Society (AHS), in Minneapolis, Minnesota is the leading animal welfare organization in the Upper Midwest, helping 25,000 dogs, cats and critters in need find loving homes each year, while providing a vast array of services to the community, from low-cost spay and neuter services to dog training to rescuing animals from neglectful and abusive situations. 

TEN7 has been working with AHS since 2008, making piecemeal updates to their website and finding creative solutions for desired changes with a limited budget. In 2016, the Animal Humane Society wanted to reimagine the animalhumanesociety.org website as not just an adoption source, but a resource, an authority, and an advocate for all things related to companion animals and the community that loves them. 

One of the main goals was to include even more information to support pet owners and animal lovers, including more photos, videos and shareable content. Other goals were to integrate the separate Kindest Cut website (a low-cost spay and neuter clinic) into the main site, and improve functionality of the Lost and Found bulletin boards.

“We wanted the user experience on the site to match the user experience when people come to the shelter. That it would be colorful and emotional and warm and inviting, and that it would give people that same wonderful feeling that they have when they walk in the door at the [shelter] and see the puppies and kittens.”—Paul Sorensen, Director of Brand and Communications, Animal Humane Society

To give AHS the increased functionality they desired (like the enhanced image and video capabilities), we embarked on building a complex Drupal 8 site from scratch. It was more than just a one-and-done update, however. Over a nine-year period, the site had evolved from a manually-updated custom CMS to a new Drupal 5 installation, and later Drupal 6. Additional functionality and one-off customizations to the codebase had created a great deal of technical debt, making the site difficult to maintain and support. 

Drupal 8 functionality allowed us to scrap some custom code, while in other cases we were able to replace custom code with contributed modules developed by the Drupal community. 

Integration with PetPoint (the animal information database) under Drupal 6 was challenging, requiring custom code from beginning to end. We were able to use Drupal 8’s built-in functionality to talk to PetPoint in a more standards-based way, which meant far less custom code.

As we were making these updates, we also followed best practices and implemented coding standards for the new site, which reduced the amount of technical debt created.

We launched the site in the summer of 2017, and although there were some hiccups, results were immediate: people LOVED the bold photos, video and shareable content. As a result of the site update, more Minnesotans are:

  • Visiting the website and staying longer. Traffic is up 8.5% from the previous year, and the average visit is over four minutes, up 8.6% from the previous year
  • Viewing animal profiles, with nearly 4 million views, leading to 10,751 animal adoptions
  • Sharing and responding to AHS content on social media, with double and triple-digit traffic increases on Twitter, Instagram, LinkedIn and Reddit
  • Donating online, with donations driven by site content up 18.2% from the previous year

We continue to support and collaborate with the Animal Humane Society, adding more functionality we couldn’t squeeze in during the big update, like setting up visitor accounts with the ability to “favorite” animals. And we still have to figure out how to make the site purr.

Aug 07 2019
Aug 07

We have presented several examples as part of this migration blog post series. They started very simple and have been increasing in complexity. Until now, we have been rather optimistic. Get the sample code, install any module dependency, enable the module that defines the migration, and execute it assuming everything works on the first try. But Drupal migrations often involve a bit of trial and error. At the very least, it is an iterative process. Today we are going to talk about what happens after import and rollback operations, how to recover from a failed migration, and some tips for writing definition files.

List of drush commands used in drupal migration workflows

Importing and rolling back migrations

When working on a migration project, it is common to write many migration definition files. Even if you were to have only one, it is very likely that your destination will require many field mappings. Running an import operation to get the data into Drupal is the first step. With so many moving parts, it is easy not to get the expected results on the first try. When that happens, you can run a rollback operation. This instructs the system to revert anything that was introduced when the migration was initially imported. After rolling back, you can make changes to the migration definition file and rebuild Drupal’s cache for the system to pick up your changes. Finally, you can do another import operation. Repeat this process until you get the results you expect. The following code snippet shows a basic Drupal migration workflow:

# 1) Run the migration.
$ drush migrate:import udm_subfields

# 2) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_subfields

# 3) Change the migration definition file.

# 4) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 5) Run the migration again
$ drush migrate:import udm_subfields

The example above assumes you are using Drush to run the migration commands, specifically the commands provided by Migrate Run or Migrate Tools. You pick one or the other, but not both, as the two modules provide the same commands; if both were enabled, they would conflict with each other and fail. Another thing to note is that the example uses Drush 9. There were major refactorings between versions 8 and 9, which included changes to the names of the commands. Finally, udm_subfields is the ID of the migration to run. You can find the full code in this article.

Tip: You can use Drush command aliases to write shorter commands. Type drush [command-name] --help for a list of the available aliases.

Technical note: To pick up changes to the definition file, you need to rebuild Drupal’s caches. This is the procedure to follow when creating the YAML files using Migrate API core features and placing them under the migrations directory. It is also possible to define migrations as configuration entities using the Migrate Plus module. In those cases, the YAML files follow a different naming convention and are placed under the config/install directory. For picking up changes, in this case, you need to sync the YAML definition using configuration management workflows. This will be covered in a future entry.
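
As a rough sketch of the two layouts, assuming a hypothetical module named ud_migrations_example that defines a migration with ID udm_example:

ud_migrations_example/
  ud_migrations_example.info.yml
  migrations/
    udm_example.yml                            # core Migrate API
  config/
    install/
      migrate_plus.migration.udm_example.yml   # Migrate Plus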

Stopping and resetting migrations

Sometimes, you do not get the expected results due to an oversight in setting a value. On other occasions, fatal PHP errors can occur when running the migration. The Migrate API might not be able to recover from such errors, for example, when using a non-existent PHP function with the callback plugin. Give it a try by modifying the example in this article. When these errors happen, the migration is left in a state where no import or rollback operations can be performed.
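
For instance, here is a minimal sketch, based on the image example above, of the kind of change that triggers such a fatal error; the callable below intentionally does not exist:

process:
  psf_broken:
    plugin: callback
    callable: function_that_does_not_exist
    source: photo_url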

You can check the state of any migration by running the drush migrate:status command. Ideally, you want them in Idle state. When something fails during import or rollback, you would get the Importing or Rolling back states. To get the migration back to Idle, you stop the migration and reset its status. The following snippet shows how to do it:

# 1) Run the migration.
$ drush migrate:import udm_process_intro

# 2) Some non recoverable error occurs. Check the status of the migration.
$ drush migrate:status udm_process_intro

# 3) Stop the migration.
$ drush migrate:stop udm_process_intro

# 4) Reset the status to idle.
$ drush migrate:reset-status udm_process_intro

# 5) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 6) Rollback migration because the expected results were not obtained.
$ drush migrate:rollback udm_process_intro

# 7) Change the migration definition file.

# 8) Rebuild caches for changes to be picked up.
$ drush cache:rebuild

# 9) Run the migration again.
$ drush migrate:import udm_process_intro

Tip: The errors thrown by the Migrate API might not provide enough information to determine what went wrong. An excellent way to familiarize yourself with the possible errors is by intentionally breaking working migrations. In the example repository of this series, there are many migrations you can modify. Try anything that comes to mind: not leaving a space after a colon (:) in a key-value assignment; not using proper indentation; using wrong subfield names; using invalid values in property assignments; etc. You might be surprised by how the Migrate API deals with such errors. Also, note that many other Drupal APIs are involved. For example, you might get a YAML file parse error or an Entity API save error. Once you have seen an error, it is usually faster to identify and fix its cause the next time it appears.

What happens when you rollback a Drupal migration?

In an ideal scenario, when a migration is rolled back, it cleans up after itself. That means it removes any entity that was created during the import operation: nodes, taxonomy terms, files, etc. Unfortunately, that is not always the case. It is very important to understand this when planning and executing migrations. For example, you might not want to leave behind taxonomy terms or files that are no longer in use. Whether a dependent entity is removed or not depends on how the plugins and entities involved work.

For example, when using the file_import or image_import plugins provided by Migrate File, the created files and images are not removed from the system upon rollback. When using the entity_generate plugin from Migrate Plus, the created entity also remains in the system after a rollback operation.

In the next blog post, we are going to start talking about migration dependencies. What happens with dependent migrations (e.g., files and paragraphs) when the migration for the host entity (e.g., node) is rolled back? In this case, the Migrate API will perform an entity delete operation on the node. When this happens, referenced files are kept in the system, but paragraphs are automatically deleted. For the curious, this behavior for paragraphs is actually determined by its module dependency: Entity Reference Revisions. We will talk more about paragraph migrations in future blog posts.

The moral of the story is that the behavior of the migration system might be affected by other Drupal APIs. In the case of rollback operations, make sure to read the documentation or test manually to find out when migrations clean up after themselves and when they do not.

Note: The focus of this section was content entity migrations. The general idea can be applied to configuration entities or any custom target of the ETL process.

Re-import or update migrations

We just mentioned that the Migrate API issues an entity delete action when rolling back a migration. This has another important side effect. Entity IDs (nid, uid, tid, fid, etc.) are going to change every time you roll back and import again. Depending on auto-generated IDs is generally not a good idea, but keep it in mind in case your workflow might be affected. For example, if you are running migrations in a content staging environment, references to the migrated entities can break if their IDs change. Also, if you were to manually update the migrated entities to clean up edge cases, those changes would be lost if you roll back and import again. Finally, keep in mind that test data might remain in the system, as described in the previous section, and could find its way to production environments.

An alternative to rolling back a migration is to not execute this operation at all. Instead, you run an import operation again using the update flag (with Migrate Tools, for example: drush migrate:import udm_subfields --update). This tells the system that in addition to migrating unprocessed items from the source, you also want to update items that were previously imported using their current values. To do this, the Migrate API relies on source identifiers and map tables. You might want to consider this option when your source changes over time, when you have a large number of records to import, or when you want to execute the same migration many times on a schedule.

Note: On import operations, the Migrate API issues an entity save action.

Tips for writing Drupal migrations

When working on migration projects, you might end up with many migration definition files. They can set dependencies on each other. Each file might contain a significant number of field mappings. There are many things you can do to make Drupal migrations more straightforward. For example, practicing with different migration scenarios and studying working examples. As a reference to help you in the process of migrating into Drupal, consider these tips:

  • Start from an existing migration. Look for an example online that does something close to what you need and modify it to your requirements.
  • Pay close attention to the syntax of the YAML file. An extraneous space or wrong indentation level can break the whole migration.
  • Read the documentation to know which source, process, and destination plugins are available. One might exist already that does exactly what you need.
  • Make sure to read the documentation for the specific plugins you are using. Many times a plugin offers optional configurations. Understand the tools at your disposal and find creative ways to combine them.
  • Look for contributed modules that might offer more plugins or upgrade paths from previous versions of Drupal. The Migrate ecosystem is vibrant, and lots of people are contributing to it.
  • When writing the migration pipeline, map one field at a time. Problems are easier to isolate if there is only one thing that could break at a time.
  • When mapping a field, work on one subfield at a time if possible. Some field types like images and addresses offer many subfields. Again, try to isolate errors by introducing individual changes each time.
  • Commit to your code repository any and every change that produces the right results. That way, you can go back in time and recover a partially working migration.
  • Learn about debugging migrations. We will talk about this topic in a future blog post.
  • Seek help from the community. Migrate maintainers and enthusiasts are very active and responsive in the #migrate channel of the Drupal Slack.
  • If you feel stuck, take a break from the computer and come back to it later. Resting can do wonders in finding solutions to hard problems.

What did you learn in today’s blog post? Did you know what happens upon importing and rolling back a migration? Did you know that in some cases, data might remain in the system even after rollback operations? Do you have a use case for running migrations with the update flag? Do you have any other advice on writing migrations? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with your colleagues.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.
