Aug 22 2019

The first part of this article described why and how the stakeholders of a project can contribute to Drupal. This developer-oriented article is a summary of the Drupal.org documentation for new code contributors. We will cover: how to work on the issue queue, how to publish a project, and how to approach this process with Drupal 9 in mind. 

Before we start, let’s just remember two things:

  • Contributing is easy and not only for senior developers. Even the smallest contribution matters; it is the sum of those contributions that makes the community and the codebase strong.
  • You are part of a community. A great way to get started with contribution is to find a mentor at a Drupal event, and there is always someone ready to help on the Drupal Slack #contribute channel.

The Drupal.org issue queue

While working on client projects, you might encounter core or contributed bugs or you may need a new feature.

Chances are that this issue has already been discussed in the issue queue, which covers Drupal core and all projects, including contributed ones. If your issue is not yet in the queue, use the issue summary template to file one. You might also check this page about creating good issues.

The main process for an issue is Active > Needs work > Needs review > Reviewed and tested by the community (RTBC) > Fixed > Closed.

When the issue is fixed or closed, the project maintainer will include the changes in the codebase (dev branch) and will most probably bundle it with several other closed issues into a release. Read about the semantic versioning model.
If you need the patch before the release, you can include it in your client project with Composer.
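
A common way to do this, assuming the cweagans/composer-patches plugin is installed, is to list the patch URL from the issue in your project's composer.json; the module name, issue number, and patch file name below are placeholders:

{
    "require": {
        "cweagans/composer-patches": "^1.6",
        "drupal/some_module": "^1.0"
    },
    "extra": {
        "patches": {
            "drupal/some_module": {
                "Issue #1234567: Short description of the fix": "https://www.drupal.org/files/issues/2019-08-22/some_module-fix-1234567-10.patch"
            }
        }
    }
}

After adding the entry, running composer update drupal/some_module typically reinstalls the package with the patch applied.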

If the issue has not been reviewed/tested yet, just make sure that you are using the right version of the Drupal core or contributed project.

Project Version

From there, the main tasks you can contribute to as a developer are:

  • When the issue needs review: manual or automated testing
  • When the issue needs work: re-roll or create a patch.

We summarize the process of creating a patch below.

You can also find novice issues and continue your reading with the Novice code contribution guide and the New contributor tasks: programmers.

Create a patch

Now that the codebase is on GitLab, there has been a proof of concept for fixing a trivial issue on Drupal.org using only the issue queue and GitLab.

For most cases, you still need to use the following flow.

Create the patch from the issue branch

The branch to use is the one indicated in the Version field on the issue.

Drupal core: https://www.drupal.org/project/drupal/git-instructions
Contributed project: click on the Version control tab

Version control tab

Then follow the Git instructions on the page to create the patch. The basic steps are listed below, with a worked example after the list:

  • Create a new branch: git checkout -b [patch_branch]

  • Make your changes then review them with git diff

  • Add your changes with git add

  • Output them as a patch file
    git diff [base_branch] > [description]-[issue-number]-[comment-number].patch
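
For instance, against a hypothetical contributed project (issue number, branch, and file names are placeholders), the whole flow could look like this:

# Clone the project and check out the branch from the Version field of the issue.
git clone --branch 8.x-1.x https://git.drupalcode.org/project/some_module.git
cd some_module

# Work on a dedicated branch for the issue.
git checkout -b 1234567-fix-undefined-index

# ...edit the code, then review your changes.
git diff

# Stage the changes and export them as a patch against the base branch.
git add .
git diff 8.x-1.x > fix-undefined-index-1234567-10.patch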

Submit the patch

Once your changes have been made, upload the patch to the issue, then add a comment that describes your solution and anything that will facilitate the review, such as screenshots or an interdiff.
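
If you are replacing an earlier patch, an interdiff (generated, for example, with the interdiff tool from patchutils) shows reviewers only what changed between the two versions; file names and comment numbers are placeholders:

# Compare the patch from comment #8 with the new patch from comment #10.
interdiff fix-undefined-index-1234567-8.patch fix-undefined-index-1234567-10.patch > interdiff-1234567-8-10.txt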

If there are unit tests available, request a test for your patch, then set the issue status to “Needs review”.

Needs review

Contribute a full project

Workflow

There are several ways to go, but the following procedure works well for us when working on client projects with a module (or theme) that could be contributed.

The first iteration is built directly in the client project repository, with a generic name, until testing of the related ticket is finished, so that:

  • reviewers can check all the code in one place and most likely they already have an environment running with exhaustive data for testing.
  • if the ticket is reopened, the process is simplified (there is no release to maintain or Drupal.org process involved).

Using a generic name still favours reusability among other client projects even if we decide not to contribute it on Drupal.org yet. We can then isolate it in a dedicated repository and include it via Composer.

Publish it on Drupal.org

1. Make sure that your project contains the following files

  • The composer.json file that contains the project metadata + system requirements (e.g. ext-json), third party libraries (e.g. solarium/solarium) and Drupal dependencies (e.g. drupal/search_api).
  • The [your_project].info.yml file with the Drupal dependencies, prefixed by their namespace (e.g. drupal:node or webform:webform_ui); see the sketch after this list.
  • A readme with the use case, the dependencies, related projects (and how this one differs), and a roadmap if the project is not feature complete.
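
As a minimal sketch (the project name and dependencies are placeholders), the info file of a module depending on the core Node module and the Webform UI submodule could look like this:

# your_project.info.yml
name: 'Your Project'
type: module
description: 'Short description of the use case.'
package: Custom
core: 8.x
dependencies:
  - drupal:node
  - webform:webform_ui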

2. Fix and check coding standards

  • Autofix with phpcbf: phpcbf --standard=Drupal --extensions=php,module,inc,install,test,profile,theme,css,info,txt,md /path/to/project
  • Check what needs to be fixed manually: phpcs --standard=Drupal --extensions=php,module,inc,install,test,profile,theme,css,info,txt,md /path/to/project (a setup sketch for these tools follows this list)
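
Both commands rely on the Drupal coding standards shipped with the Coder package. A typical setup on a Composer-based project looks roughly like this:

# Install Coder (which pulls in PHP_CodeSniffer) as a development dependency.
composer require --dev drupal/coder

# Register the Drupal and DrupalPractice standards with PHP_CodeSniffer.
./vendor/bin/phpcs --config-set installed_paths vendor/drupal/coder/coder_sniffer

# Confirm the standards are available.
./vendor/bin/phpcs -i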

3. Create the project page

Most likely, it will be a Module or a Theme: https://www.drupal.org/project/add

Give it a name and a machine name.
Add the key points: project classification, description (the project readme is a good start) and organization. Also make sure to add other maintainers if you are working in a team.

4. Create a first, unversioned, dev release

  • Create a dev branch (e.g. 8.x-1.x) and push the code
  • Follow instructions on the Version control tab.
  • Not mandatory, but it is recommended to create a clean installation with the latest release of Drupal so we are sure about two things: all the Composer-based dependencies are resolved properly and there are no side effects caused by the client project (see the sketch after this list).
  • At this stage, other developers might include your code in the project with composer require drupal/your_project:1.x-dev
  • On your client project, as there is no semantic version release yet (e.g. 8.1.0), it is a good habit to add the commit hash so you make sure that you keep control over the codebase that is actually used:  composer require drupal/your_project:1.x-dev#commit
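
A rough sketch of that verification step, assuming the drupal-composer/drupal-project template (any Composer-based Drupal template works):

# Spin up a throwaway Composer-based Drupal site.
composer create-project drupal-composer/drupal-project:8.x-dev test-site --stability dev --no-interaction

# Pull in the unversioned dev release, pinned to a specific commit.
cd test-site
composer require drupal/your_project:1.x-dev#commit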

5. Create intermediate versioned releases

When the development is ready, you might create alpha then beta releases.
Beta releases should not be subject to API changes.

  • Check out the latest dev branch commit: git checkout 8.x-1.x
  • Tag it: git tag 8.x-1.0-alpha1
  • Push it: git push origin tag 8.x-1.0-alpha1
  • Create the release and document it. This article from Matt Glaman provides great information about creating better release notes.

6. Create a stable versioned release

When there are no open bugs and the project is feature complete, a first stable release will tell site builders that the module is ready to be used (e.g. 8.1.0). Follow the same steps as 5.

7. Apply for permission to opt into security advisory coverage

Follow the security review process.

8. Create a new major release

If there are backward compatibility changes, creating a new major release (e.g. 8.2.x) will indicate to site builders and developers that there are BC breaks.

Read more with Best practices for creating and maintaining projects.

Be prepared for Drupal 9

TLDR; Drupal 9 should be released in June 2020. The first version will be about removing deprecated APIs and getting new versions of third-party libraries (Symfony 4 or 5, Twig 2, …). So, if a Drupal 8 project does not use deprecated code, it will work in Drupal 9 :)

Drupal 8.7 started the deprecation work and 8.8 will define the full deprecation list.

Code contribution is needed there as well: in core, with issues like deprecating legacy include files, or by helping contributed modules become Drupal 9 ready and fixing their Drupal 9 compatibility issues.

Codebase analysis tools

Several tools are here to help if you are contributing to issues or maintaining a project.

The Upgrade Status module scans the code of the contributed and custom projects you have installed, and reports any deprecated code that must be replaced before the next major version.

Drupal-check, a static analysis tool based on PHPStan, will check for correctness (e.g. using a class that doesn't exist), deprecation errors, and more.
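
For example, a typical drupal-check run against a single module looks like this (the path is a placeholder; the -d flag limits the report to deprecation errors):

# Install drupal-check as a development dependency.
composer require --dev mglaman/drupal-check

# Scan a module for deprecated code usage.
./vendor/bin/drupal-check -d web/modules/contrib/some_module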

Further reading: about Drupal 9, how to prepare for it, and an analysis of the top uses of deprecated code in Drupal contributed projects in May 2019.

Developer toolbox

Here is a brief recap of the tools that might help you while contributing.

Other ways to contribute than code

There are more ways to contribute, even if you are not a developer; the Drupal community is looking for contributions of all kinds.

Read more about contribution on the Getting Involved Guide.

Aug 22 2019

With Drupal 9 slated to be released in June 2020, the Drupal community has around 11 months to go. So, before you map out a transition plan, now is the time to discuss what to expect from Drupal 9.

Switch to Drupal 9

You must be wondering:

Is Drupal 9 a reasonable plan for you?

Is it easy to migrate from recent Drupal versions to the new one?

This blog post has all your questions answered.

Let’s see the major changes in Drupal 9

The latest version of Drupal is said to be built on Drupal 8, and the migration will be far easier this time.

  • Updated dependency versions so that they remain supported
  • Removal of deprecated code before release

The foremost update to be made in Drupal 9 is Symfony 4 or 5, and the team is working hard on its implementation.

Planning to Move to Drupal 9?

The release of Drupal 8.7 in 2019 added optional support for Twig 2, which has helped developers start testing their code against that version of Twig. Drupal 8.8 will virtually support the most recent version of Symfony. Ideally, the Drupal community would like to release Drupal 9 with support for Symfony 5, which is due to be released at the end of 2019.

Drupal 9

If you are already using Drupal 8, the best advice is to keep your site up to date. Drupal 9 is an updated version of Drupal 8 with updated third-party dependencies and deprecated code removed.

Ensure that you are not using any deprecated APIs or modules, and wherever possible use the most recent versions of dependencies. If you do that, your upgrade should not encounter any problems.
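
A well-known example of such a replacement is drupal_set_message(), deprecated in Drupal 8.5 and removed in Drupal 9, which is superseded by the messenger service:

// Deprecated since Drupal 8.5.0 and removed in Drupal 9.
drupal_set_message(t('Settings saved.'));

// Drupal 9 ready replacement using the messenger service.
\Drupal::messenger()->addMessage(t('Settings saved.'));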

Since Drupal 9 is being built within version 8, developers will have the choice to test their code and make updates before the release of Drupal 9. This is an outstanding update and was not possible with the previous versions of Drupal!

So, where are you in the Drupal journey?

Here are some scenarios to support your migration process:

Are you on Drupal 6?

You are way behind! We strongly suggest you move to Drupal 8 as soon as you can. Migration from 8 to 9 will be straightforward. Drupal 8 includes migration facilities for Drupal 6 which probably won’t be included in Drupal 9. There is a possibility that some contributed modules will be available, but it is better to be safe.

Are you on Drupal 7?

Support for both Drupal 7 and 8 will end by 2021. Since the release of Drupal 9 is set for June 2020, you should plan your upgrade and go live. If you have not migrated by then, your website may be vulnerable to security threats. Since a Drupal 7 to 9 migration is similar to a new development, you just need to consider the timeline involved.

Are you on Drupal 8?

Great work if you are already on Drupal 8! For you, moving to the next major version will take minimal effort.

The big difference between the last version of Drupal 8 and the first release of Drupal 9 is that deprecated code is removed. You just need to check that your themes, modules, and profiles don’t include any such code. So, there’s no need to worry about migrating your content at all!

Want to know more about Drupal and its migration process?

Aug 21 2019

Today we will learn how to migrate content from LibreOffice Calc and Microsoft Excel files into Drupal using the Migrate Spreadsheet module. We will give instructions on getting the module and its dependencies. Then, we will present how to configure the module for spreadsheets with or without a header row. There are two example migrations: images and paragraphs. Let’s get started.

Example configuration for Microsoft Excel and LibreOffice Calc migration.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The image migration uses a Microsoft Excel file as source. The paragraph migration uses a LibreOffice Calc file as source. The CSV migration is a backup in case the Google Sheet is not available. To execute the last one you would need the Migrate Source CSV module.

You can get the Migrate Spreadsheet module using composer: composer require drupal/migrate_spreadsheet:^1.0. This module depends on the PHPOffice/PhpSpreadsheet library and many PHP extensions, including ext-zip. Check this page for a full list of dependencies. If any required extension is missing, the installation will fail. If your Drupal site is not composer-based, you will not be able to use Migrate Spreadsheet unless you jump through a lot of hoops.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Microsoft Excel and LibreOffice Calc migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Understanding the source document and plugin configuration

In any migration project, understanding the source is very important. For Microsoft Excel and LibreOffice Calc migrations, the primary thing to consider is whether or not the file contains a row of headers. Also, a workbook (file) might contain several worksheets (tabs). You can only migrate from one worksheet at a time. The example documents have two worksheets: UD Example Sheet and Do not peek in here. We are going to be working with the first one.

The spreadsheet source plugin exposes seven configuration options. The values to use might change depending on the presence of a header row, but all of them apply for both types of document. Here is a summary of the available configurations:

  • file is required. It stores the path to the document to process. You can use a relative path from the Drupal root, an absolute path, or stream wrappers.
  • worksheet is required. It contains the name of the one worksheet to process.
  • header_row is optional. This number indicates which row contains the headers. Contrary to CSV migrations, the row number is not zero-based. So, set this value to 1 if headers are on the first row, 2 if they are on the second, and so on.
  • origin is optional and defaults to A2. It indicates which non-header cell contains the first value you want to import. It assumes a grid layout and you only need to indicate the position of the top-left cell value.
  • columns is optional. It is the list of columns you want to make available for the migration. In case of files with a header row, use those header values in this list. Otherwise, use the default title for columns: A, B, C, etc. If this setting is missing, the plugin will return all columns. This is not ideal, especially for very large files containing more columns than needed for the migration.
  • row_index_column is optional. This is a special column that contains the row number for each record. This can be used as unique identifier for the records in case your dataset does not provide a suitable value. Exposing this special column in the migration is up to you. If so, you can come up with any name as long as it does not conflict with header row names set in the columns configuration. Important: this is an autogenerated column, not any of the columns that come with your dataset.
  • keys is optional and, if not set, it defaults to the value of row_index_column. It contains an array of column names that uniquely identify each record. For files with a header row, you can use the values set in the columns configuration. Otherwise, use default column titles like A, B, C, etc. In both cases, you can use the row_index_column column if it was set. Each value in the array will contain database storage details for the column.

Note that nowhere in the plugin configuration you specify the file type. The same setup applies for both Microsoft Excel and LibreOffice Calc files. The library will take care of detecting and validating the proper type.

Migrating spreadsheet files with a header row

This example is for the paragraph migration and uses a LibreOffice Calc file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

book_id, book_title, Book author
B10, The definitive guide to Drupal 7, Benjamin Melançon et al.
B20, Understanding Drupal Views, Carlos Dinarte
B30, Understanding Drupal Migrations, Mauricio Dinarte
source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_book_paragraph.ods
  worksheet: 'UD Example Sheet'
  header_row: 1
  origin: A2
  columns:
    - book_id
    - book_title
    - 'Book author'
  row_index_column: 'Document Row Index'
  keys:
    book_id:
      type: string

The name of the plugin is spreadsheet. Then you use the file configuration to indicate the path to the file. In this case, it is relative to the Drupal root. The UD Example Sheet is set as the worksheet to process. Because the first row of the file contains the headers, header_row is set to 1 and origin to A2.

Then specify which columns to make available to the migration. In this case, we listed all of them, so this setting could have been left unassigned. However, it is better to get into the habit of being explicit about what you import. If the file were to change and more columns were added, you would not have to update the migration to prevent unneeded data from being fetched. The row_index_column is not actually used in the migration, but it is set to show all the configuration options in the example. Its values will be 1, 2, 3, etc. Finally, keys is set to the column that serves as the unique identifier for the records.

The rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows the process and destination sections for the LibreOffice Calc paragraph migration.

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

Migrating spreadsheet files without a header row

Now let’s consider an example of a spreadsheet file that does not have a header row. This example is for the image migration and uses a Microsoft Excel file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

P01, https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02, https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03, https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg
source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_photos.xlsx
  worksheet: 'UD Example Sheet'
  header_row: null
  origin: A1
  columns:
    - A
    - B
  row_index_column: null
  keys:
    A:
      type: string

The plugin, file, and worksheet configurations follow the same pattern as in the paragraph migration. The difference for files with no header row is reflected in the other parameters. header_row is set to null to indicate the lack of headers and origin is set to A1. Because there are no column names to use, you have to use the ones provided by the spreadsheet. In this case, we want to use the first two columns: A and B. Contrary to CSV migrations, the spreadsheet plugin does not allow you to define aliases for unnamed columns. That means that you would have to use A, B in the process section to refer to these columns.

row_index_column is set to null because it will not be used. And finally, in the keys section, we use the A column as the primary key. This might seem like an odd choice. Why use that value if you could use the row_index_column as the unique identifier for each row? If this were an isolated migration, that would be a valid option. But this migration is referenced from the node migration explained in the previous example. The lookup is made based on the values stored in the A column. If we used the index of the row as the unique identifier, we would have to update the other migration or the lookup would fail. In many cases, that is neither feasible nor desirable.

Except for the name of the columns, the rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process and destination section for the Microsoft Excel image migration.

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: B # This is the photo URL column.
destination:
  plugin: 'entity:file'

Refer to this entry to know how to run migrations that depend on others. In this case, you can execute them all by running: drush migrate:import --tag='UD Sheets Source'. And that is how you can use Microsoft Excel and LibreOffice Calc files as the source of your migrations. This example is very interesting because each of the migrations uses a different source type. The node migration explained in the previous post uses a Google Sheet. This is a great example of how powerful and flexible the Migrate API is.

What did you learn in today’s blog post? Have you migrated from Microsoft Excel and LibreOffice Calc files before? If so, what challenges have you found? Did you know the source plugin configuration is not dependent on the file type? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 21 2019

Today we will learn how to migrate content from Google Sheets into Drupal using the Migrate Google Sheets module. We will give instructions on how to publish them in JSON format to be consumed by the migration. Then, we will talk about some assumptions made by the module to allow easier plugin configurations. Finally, we will present the source plugin configuration for Google Sheets migrations. Let’s get started.

Example configuration for Google Sheets migration

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The last one is a backup in case the Google Sheet is not available. To execute it you would need the Migrate Source CSV module.

You can get the Migrate Google Sheets module and its dependency using composer: composer require 'drupal/migrate_google_sheets:^1.0'. It depends on Migrate Plus. Installing via composer will get you both modules. If your Drupal site is not composer-based, you can download them manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Google Sheets migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources. In the next article, two of the migrations will be explained. They read from Microsoft Excel and LibreOffice Calc files.

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from Google Sheets

In any migration project, understanding the source is very important. For Google Sheets, there are many details that need your attention. First, the module works on top of Migrate Plus and extends its JSON data parser. In fact, you have to publish your Google Sheet and consume it in JSON format. Second, you need to make the JSON export publicly available. Third, you must understand the JSON format provided by Google Sheets and the assumptions made by the module to configure your fields properly. Specific instructions for Google Sheets migrations will be provided. That being said, everything explained in the JSON migration example is applicable in this case too.

Publishing a Google Sheet in JSON format

Before starting the migration, you need the source from where you will extract the data. For this, create a Google Sheet document. The example will use this one:

https://docs.google.com/spreadsheets/d/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/edit#gid=0

The 1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk value is the workbook ID, which will be used later. Once you are done creating the document, you need to publish it so it can be consumed by the Migrate API. To do this, go to the File menu and then click on Publish to the web. A modal window will appear where you can configure the export. Note that it is possible to publish the Entire document or only some of the worksheets (tabs). The example document has two: UD Example Sheet and Do not peek in here. Make sure that all the worksheets that you need are published, or export the entire document. Unless multiple URLs are configured, a migration can only import from one worksheet at a time. If you fetch from multiple URLs they need to have homogeneous structures. When you click the Publish button, a new URL will be presented. In the example it is:

https://docs.google.com/spreadsheets/d/e/2PACX-1vTy2-CGzsoTBkmvYbolFh0UDWenwd9OCdel55j9Qa37g_earT1vA6y-6phC31Xkj8sTWF0o6mZTM90H/pubhtml

The previous URL will not be used. Publishing a document is a required step, but the URL that you get should be ignored. Note that you do not have to share the document. It is fine that the document is private to you as long as it is published. It is up to you if you want to make it available to Anyone with the link or Public on the web and potentially grant edit or comment access. The Share setting does not affect the migration. The final step is getting the JSON representation of the document. You need to assemble a URL with the following pattern:

http://spreadsheets.google.com/feeds/list/[workbook-id]/[worksheet-index]/public/values?alt=json

Replace [workbook-id] with the workbook ID mentioned at the beginning of this section, the one that is part of the regular document URL, not the published URL. The worksheet-index is an integer, starting at 1, that represents the order in which worksheets appear in the document. Use 1 for the first, 2 for the second, and so on. This means that changing the order of the worksheets will affect your migration. At the very least, you will have to update the path to reflect the new index. In the example migration, the UD Example Sheet worksheet will be used. It appears first in the document, so the worksheet index is 1. Therefore, the exported JSON will be available at the following URL:

http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json

Understanding the published Google Sheet JSON export

Take a moment to read the JSON export and try to understand its structure. It contains much more data than what you need. The records to be imported can be retrieved using this XPath expression: /feed/entry. You would normally have to assign this value to the item_selector configuration of the Migrate Plus’ JSON data parser. But, because the value is the same for all Google Sheets, the module takes care of this automatically. You do not have to set that configuration in the source section. As for the data cells, have a look at the following code snippet to see how they appear on the export:

{
  "feed": {
    "entry": [
      {
        "gsx$uniqueid": {
          "$t": "1"
        },
        "gsx$name": {
          "$t": "One Uno Un"
        },
        "gsx$photo-file": {
          "$t": "P01"
        },
        "gsx$bookref": {
          "$t": "B10"
        }
      }
    ]
  }
}

Tip: Firefox includes a built-in JSON document viewer which helps a lot in understanding the structure of the document. If your browser does not include a similar tool out of the box, look for one in their extensions repository. You can also use a file formatter to pretty print the JSON output.

The following is a list of headers as they appear in the Google Sheet compared to how they appear in the JSON export:

  • unique_id appears like gsx$uniqueid.
  • name appears like gsx$name.
  • photo-file appears like gsx$photo-file.
  • Book Ref appears like gsx$bookref.

So, the header names from the Google Sheet get transformed in the JSON export. They get a prefix of gsx$ and are converted to all lowercase letters with spaces and most special characters removed. On top of this, the actual cell value, the one you will eventually import, is in a $t property one level under the header name. Now, you would normally create a list of fields to migrate using XPath-like expressions as selectors. For example, for the Book Ref header, the selector would be gsx$bookref/$t. But that is not the way to configure the Google Sheets data parser. The module makes some assumptions to make the selectors clearer. The gsx$ prefix and /$t hierarchy are assumed. For the selector, you only need to use the transformed name. In this case: uniqueid, name, photo-file, and bookref.

Configuring the Migrate Google Sheets source plugin

With the JSON export of the Google Sheet and the list of transformed header names, you can proceed to configure the plugin. It will be very similar to configuring a remote JSON migration. The following code snippet shows source configuration for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: google_sheets
  urls: 'http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json'
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: uniqueid
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo-file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: bookref
  ids:
    src_unique_id:
      type: integer

You use the url plugin, the http fetcher, and the google_sheets parser. The latter is provided by the module. The urls configuration is set to the exported JSON link. The item_selector is not configured because the /feed/entry value is assumed. The fields are configured as in the JSON migration with the caveat of using the transformed header values for the selector. Finally, you need to set the ids key to a combination of fields that uniquely identify each record.

The rest of the migration is almost identical to the JSON example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process, destination, and dependencies section for the Google Sheets migration.

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_microsoft_excel_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_microsoft_excel_source_image
    - udm_libreoffice_calc_source_paragraph
  optional: []

Note that the node migration depends on an image and paragraph migration. They are already available in the example. One uses a Microsoft Excel file as the source while the other a LibreOffice Calc document. Both of these migrations will be explained in the next article. Refer to this entry to know how to run migrations that depend on others. For example, you can run: drush migrate:import --tag='UD Sheets Source'.

What did you learn in today’s blog post? Have you migrated from Google Sheets before? If so, what challenges have you found? Did you know the procedure to export a sheet in JSON format? Did you know that the Migrate Google Sheets module is an extension of Migrate Plus? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Aug 21 2019

Do you like people who are warm and friendly or cold and hostile? You’ve got it right! I’m comparing Interactive to Non-interactive (static) websites here. In this increasingly digital generation, it isn’t sufficient to place some content on your website and wait for it to work its magic. Providing a web User experience without interactivity is like opening a store filled with inventory without a salesperson to interact with. 
When you create an interactive website, you are forming a connection with your audience. It propels a two-way communication on a medium where you cannot directly interact with a user. Studies have proven that people are more likely to convert on, return to or recommend websites that are interactive. Drupal CMS offers a wide variety of interactive themes and modules that can be easily adapted to your website and further customized.

What is an interactive website?

Put simply, an interactive website is a website that communicates and allows for interaction with users. And by interaction, we don’t just mean allowing users to “click” and “scroll”. Offering users content that is amusing, collaborative and engaging is the essential objective of an interactive website. An interactive website design will not just display attractive content, it will exhibit interactive content. Content that will compel users to communicate and deeply engage with the website.
 

Interactive Website Designs Communicate & Engage with users

Why do you need one?

Today, all businesses in the digital market are racing to expand their audience. Most of them, however, forget that increasing traffic is simply not enough. Retaining and engaging users is what converts. Engaging your users should be your prime motive and for this you will first need an interactive business website. 

  • Drives more engagement. Interactive business websites can make your website less boring, thus garnering more action. 
  • Users will spend more time on a website that interacts with them. This increases your conversion rate, decreases bounce rate and can boost the SEO of your website.
  • Develops a more personalized user experience that can result in happy users. 
  • Engaged users are more likely to maintain a long-term relationship with websites.
  • Interactive website designs can create lasting effects in user’s minds. This improves your brand awareness and reach. 
  • Interactive websites encourage users to recommend your website and link back to it.
  • More conversions mean you have a better chance of making a sale!

How to make interactive websites? 

Creating an interactive website from scratch is easier and more effective as you envision and plan the customer journey from day one. Nevertheless, if you already have a website that you think is static or needs more interactive website features, it is never too late. The first step is to define your business objectives and then identify various touch points from where you can interact with your customers.
If budgets and timelines are constraints you could also look at HTML5 interactive website templates (not recommended if you need customizations).
There are various interactive website features that can increase user engagement, but you should pick the ones that suit your business goals. For example, if you sell financial services, having an interest calculator on your website can prove to be very useful. Nonetheless, the most essential interactive feature that you just cannot ignore is responsiveness. Users will respond to your website on various devices only when it looks and feels presentable.
So what kind of interactive website features or elements can you utilize for your benefit?

  • Social Media Applications

There is no denying that social media marketing can give you visibility like no other marketing program if done right. Provide your users with an option to like and share your content on social media platforms like LinkedIn, Twitter or Facebook. Or just to be able to follow your page. You can also display a live feed from your social media page to keep users updated.

  • Simple Interactive Tools

Offer your users simple interactive tools like quizzes, short games, math tools, tax calculators, etc. connected to your business objectives. Integrating simple software tools that provide your users with instant results has proven to boost user engagement.

  • Interactive Page Elements

You can enhance your page elements by adding something interesting and attractive to it. For example, colourful and dynamic hover-states on links or images, on-scroll or on-click loading/animation, navigation with clicks on image stories, and much more. Add videos or animations to say more about your business in an interactive way.

  • Forms and Feedback

Allowing users to get in touch with you via a contact form is a great way to connect with them. Not only does it let you increase your database of leads, it is a nice way of saying “We care”. Feedback forms let you identify your strengths and weaknesses via the best source – your audience!

  • Chat Widgets

What’s better than a live person chatting with you, answering all your questions about the products or services being offered?! That’s probably the highest level of interactivity you can offer in an interactive business website. If live chat sounds like too much commitment, you could also opt for Chat bots that can be configured to answer predictive questions.

  • User-generated Content

Letting users add their content to your website is a great way to improve interactivity. This can be done in the form of comments (in your blog/articles section), inviting them to write guest posts, letting them submit images, or even creating a small discussion forum.

  • Other interactive website Features

You can get creative with the interactive features you want for your audience but here’s a short list of commonly used interactive elements –

  • Google Maps makes you a more trustworthy brand and provides a great way to improve interaction, especially when the map is clickable.
  • Newsletters can keep your users coming back to your website for more updates.
  • Voting and showing users the results of previous polls helps increase engagement.
  • Search functionality saves users the pain of navigating through your website.
  • Ratings can be a quick and interactive method of getting instant feedback that can improve your products/services/work.
  • Slideshows offer a great way to engage users and can make them want to keep going to the next image.
Interactive Website Features and Elements

Drupal for Interactive Websites

When you build your website with Drupal, you will come across multiple options in the form of modules and features that can instantly turn your static website into an interactive one. With Drupal 8, responsiveness comes out of the box, which means that you don’t need any additional modules to make your Drupal website look great irrespective of the device. In addition, there are a variety of modules that encourage interactivity, like Search API, the Contact forms module, Social Media module, Slideshow module, Simplenews (for newsletters) and much more!

Aug 21 2019

JS frameworks have changed quite a lot in Drupal, especially with the API-first concept added to the scenario. It is only expected that developers are inclined towards learning more about JS and the related possibilities.

Recently, I was tasked with rendering blocks on a page while keeping individual components encapsulated on their own. JavaScript has the potential to make web pages dynamic and interactive without hindering the page speed, so I decided to opt for progressively decoupled blocks.

I came across this Drupal module Progressive Decoupled Blocks, which allows us to render blocks in a decoupled way and seemed a perfect fit for the situation.

Anatomy of Progressive Decoupled Blocks

The module is a JavaScript-framework-agnostic, progressive decoupling tool that allows custom blocks to be written in the JavaScript framework of one’s choice. What makes it unique is that one can do all this without needing to know any Drupal API.

It keeps individual components compact in their own directories, containing all the CSS, JS, and template assets necessary for them to work, and uses an info.yml file to declare these components and their framework dependencies to Drupal.

Did it work?

I decided to test the module in the new Umami demo profile, which comes out of the box with Drupal, with React and Angular blocks.

I used the Layout module and picked a page to place these blocks on. This is one of the parts I liked best about the module: every piece of block content that needs to be placed on the page can be placed without even visiting the block layout page and setting visibility.

Module Architecture

The module has a custom block derivative, which searches for all components that have been added and exposes them as blocks. This architecture makes it super easy to add a new block.

You can refer to this git for the block derivative - https://git.drupalcode.org/project/pdb/blob/8.x-1.x/src/Plugin/Derivative/PdbBlockDeriver.php

Refer to this git for component discovery - https://git.drupalcode.org/project/pdb/blob/8.x-1.x/src/ComponentDiscovery.php

Possibilities

For React JS there are 2 examples in this module.

The first uses a simple React createElement call, without JSX. This example renders simple text inside a React component. This is a very good starter example for anyone who wants to understand how to integrate a React component with Drupal in a progressive way.

The second example is a ToDo app, which allows you to add and remove todo items from a list. It makes use of local storage to store any item that has been added. This app creates React components, integrates these components into other components, and renders a fully functional DOM inside the Drupal DOM in a progressive way.

This example comes with a package.json whose dependencies need to be installed before it is functional. However, the component did not render perfectly on the Umami theme, so I made certain changes to make sure it renders correctly. The patch is attached at the end.

progressive decoupled block

I decided to extend this module and added a new component to render a banner block (comes OOB with Umami), and exposed this block as an API via JSON:API. As this was a very simple block, I decided to create this block without JSX. Also, I decided to generate the URL with static ID in the path. This can, however, be made dynamic, which I plan to do later in my implementation. I also decided to choose the same class names to save on styling. The classes can be found in the Umami theme profile.

(function ($, Drupal, drupalSettings) {
  Drupal.behaviors.pddemo = {
    attach: function (context, settings) {
      $.ajax(drupalSettings.pdb.reactbanner.url, {
        type: 'get',
        success: function (result) {
          var image = result.included[0].attributes.uri.url;
          var title = result.data.attributes.field_title;
          ReactDOM.render(React.createElement(
            'div',
            {
              className: 'block-type-banner-block',
              style: {backgroundImage: 'url(' + image + ')'}
            },
            React.createElement(
              'div',
              {className: 'block-inner'},
              React.createElement(
                'div',
                {className: 'summary'},
                React.createElement(
                  'div',
                  {className: 'field--name-field-title'},
                  title
                )
              )
            )
          ), document.getElementById('reactbanner'));
        },
        error: function (XMLHttpRequest, textStatus, errorThrown) {
          console.log(textStatus || errorThrown);
        },
      });
    }
  };
})(jQuery, Drupal, drupalSettings);


Angular JS

Angular JS comes with many more examples, both easy and complex. Things didn’t work as smoothly as they did with React. There were a couple of changes required to make it work.

The patch is attached at the end; you can refer to this gist for it.

You will have to install the Node modules first to make it work. Also, all the JS is written in TypeScript, which needs to be compiled to JS before it will work.

Conclusion

The module gives you a great kickstart to move any block, rendered by Drupal, to a progressively decoupled block, using React, Angular or Ember as JS framework. However, you may want to extend the module or create your own to render the blocks.

Aug 21 2019

Today we will learn how to migrate content from an XML file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. We will also talk about the difference between the two data parsers provided by the module. The example includes node, image, and paragraph migrations. Let’s get started.

Example configuration of XML source migration

Note: Migrate Plus has many more features. For example, it contains source plugins to import from JSON files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing XML files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD XML source migration, whose machine name is ud_migrations_xml_source. It comes with four migrations: udm_xml_source_paragraph, udm_xml_source_image, udm_xml_source_node_local, and udm_xml_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain XML migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from XML. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The Definitive Guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
</data>

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from an XML file

In any migration project, understanding the source is very important. For XML migrations, there are two major considerations. First, where in the XML tree hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. You use an XPath expression to select a set of nodes from the XML document. In this article, we use the term element when referring to an XML document node, to distinguish it from a Drupal node. Second, when you get to the set of elements that you want to import, what child elements are going to be made available to the migration. It is possible that each element contains more data than needed. In XML imports, you have to manually include the child elements that will be required for the migration. The following code snippet shows part of the local XML file relevant to the node migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  <udm_people>
    ...
  </udm_people>
  <udm_people>
    ...
  </udm_people>
</data>

The set of elements containing node data lies two levels deep in the hierarchy, starting with data at the root and then descending one level to udm_people. Each of these elements contains four children:

  • unique_id is the unique identifier for each element returned by the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local XML file for the node migration:


source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to xml. The urls configuration contains an array of file paths relative to the Drupal root. In the example we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

Technical note: Migrate Plus provides two data parser plugins for XML files. xml uses XMLReader while simple_xml uses SimpleXML. The parser to use is configured in the data_parser_plugin configuration. Also note that when you use the xml parser, the data_fetcher_plugin setting is ignored. More details below.

The item_selector configuration indicates where in the XML file lies the set of elements to be migrated. Its value is an XPath expression used to traverse the file hierarchy. In this case, the value is /data/udm_people. Verify that your expression is valid and that it selects the elements you intend to import. It is common for the expression to start with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the subtree specified by the item_selector configuration. In the example, the fields are direct children of the elements to migrate. Therefore, the XPath expression only includes the element name (e.g., unique_id). If you had nested elements, you could use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_xml_source_image
    - udm_xml_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not its label or selector. The configuration of the migration lookup plugin and the dependencies point to two XML migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from an XML file

Let’s consider an example where the elements to migrate have many levels of nesting. The following snippets show part of the local XML file and source plugin configuration for the paragraph migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The Definitive Guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
  <udm_book_paragraph>
    ...
  </udm_book_paragraph>
</data>
source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking /data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. In particular, the book_details element has two children: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Each level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to structure the XML file is to give each record two children: paragraph_id would contain the unique identifier for the record, and paragraph_data would contain a child element specifying the paragraph type plus an arbitrary number of extra child elements with the data to be migrated. In the process section, you would iterate over the children to map the paragraph fields. A rough sketch of such a source definition follows.
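The following sketch is illustrative only: the field names, selectors, and the paragraph_id/paragraph_data structure are assumptions for the hypothetical multi-type file described above, not part of the example module.

  # These keys belong under the 'source' section; names and selectors are hypothetical.
  fields:
    - name: src_paragraph_id
      label: 'Paragraph ID'
      selector: paragraph_id
    - name: src_paragraph_type
      label: 'Paragraph type'
      selector: paragraph_data/type
    - name: src_paragraph_body
      label: 'Paragraph body'
      selector: paragraph_data/body
  ids:
    src_paragraph_id:
      type: string

In the process pipeline, the src_paragraph_type value could then determine which paragraph bundle and fields get populated.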

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author
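The destination is not shown in the snippet above. As a hedged sketch, paragraph entities are commonly created through the destination plugin provided by the Entity Reference Revisions module; the ud_book_paragraph bundle name below is assumed from the field names in the example, not confirmed by it:

destination:
  # Destination plugin provided by the Entity Reference Revisions module.
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph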

Migrating images from an XML file

Let’s consider an example where the elements to migrate have more data than needed. The following snippets show part of the local XML file and source plugin configuration for the image migration:


<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
  <udm_photos>
    ...
  </udm_photos>
</data>
source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url
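The snippet only shows how the file name is extracted from the URL. As a hedged sketch of how that value could be used further down the pipeline, the following continuation of the process section combines it with a directory constant (assumed to be defined under the source constants; it is not part of the snippet above) and copies the remote file into place using core process plugins:

  psf_destination_full_path:
    # 'constants/DRUPAL_FILE_DIRECTORY' is a hypothetical source constant.
    plugin: concat
    delimiter: '/'
    source:
      - constants/DRUPAL_FILE_DIRECTORY
      - '@psf_destination_filename'
  uri:
    # Copies the remote file to the destination URI built above.
    plugin: file_copy
    source:
      - src_photo_url
      - '@psf_destination_full_path'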

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking /data/udm_photos as a starting point, the elements with image data have extra children that are not used in the migration. In particular, the photo_dimensions element has two children representing the width and height of the image. To ignore this subtree, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/width and photo_dimensions/height, respectively.

XML file location

Important: What is described in this section only applies when you use either (1) the xml data parser or (2) the simple_xml parser with the file data fetcher.

When using the file data fetcher plugin, you have several options to indicate the location of the XML files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/xml_files/example.xml.
  • Use an absolute path pointing to the XML location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/xml_files/example.xml.
  • Use a fully-qualified URL to any built-in wrapper like http, https, ftp, ftps, etc. For example, https://understanddrupal.com/xml-files/example.xml.
  • Use a custom stream wrapper.

Being able to use stream wrappers gives you many more options, such as reading files from Drupal's public or private file systems or from remote storage exposed by contributed modules.
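As a quick illustration, the urls configuration accepts any of these location styles, including stream wrappers; the paths below are hypothetical:

  urls:
    # Relative to the Drupal root.
    - modules/custom/my_module/xml_files/example.xml
    # Absolute path in the server file system.
    - /var/www/drupal/modules/custom/my_module/xml_files/example.xml
    # Fully-qualified URL using a built-in wrapper.
    - https://understanddrupal.com/xml-files/example.xml
    # Stream wrapper pointing to Drupal's private file system.
    - private://xml_files/example.xml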

Migrating remote XML files

Important: What is described in this section only applies when you use the http data fetcher plugin.

Migrate Plus provides another data fetcher plugin named http. Under the hood, it uses the Guzzle HTTP Client library. You can use it to fetch files using any protocol supported by curl like http, https, ftp, ftps, sftp, etc. In a future blog post we will explain this data fetcher in more detail. For now, the udm_xml_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin, data_parser_plugin, and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote XML file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  # 'simple_xml' is configured to be able to use the 'http' fetcher.
  data_parser_plugin: simple_xml
  urls:
    - https://sendeyo.com/up/d/478f835718
  item_selector: /data/udm_people
  fields: ...
  ids: ...

And that is how you can use XML files as the source of your migrations. Many more configurations are possible when you use the simple_xml parser with the http fetcher. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

XMLReader vs SimpleXML in Drupal migrations

As noted in the module’s README file, the xml parser plugin uses the XMLReader interface to incrementally parse XML files. The reader acts as a cursor going forward on the document stream and stopping at each node on the way. This should be used for XML sources which are potentially very large. On the other hand, the simple_xml parser plugin uses the SimpleXML interface to fully parse XML files. This should be used for XML sources where you need to be able to use complex XPath expressions for your item selectors, or have to access elements outside of the current item element via XPath.

What did you learn in today’s blog post? Have you migrated from XML files before? If so, what challenges have you found? Did you know that you can read local and remote files? Did you know that the data_fetcher_plugin configuration is ignored when using the xml data parser? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 20 2019
Aug 20

Welcome to Mediacurrent’s Open Waters, a podcast about open source solutions. In this episode, we catch up with Cristina Chumillas. Cristina comes from the design world and is passionate about front-end development. She works at Lullabot (though when we recorded this, she worked at Ymbra) and has been involved in the Drupal community for years, contributing with code, design, and organizing events. Her contributions to Drupal Core are mainly focused on front-end, design and UX. Nowadays, she's a co-organizer of the Drupal Admin UI & JS Modernization Initiative and a Drupal core UX maintainer.

Audio Download Link

Project Pick

 Claro

Interview with Cristina Chumillas

  1. Tell us about yourself: What is your role, who do you work for, and where are you from?
  2. You are a busy woman, what events have you recently attended and/or are scheduled to attend in the near future?
  3. Which Drupal core initiatives are you currently contributing to?
  4. How does a better admin theme UI help site owners?  
  5. What are the main goals?
  6. Is this initiative sponsored by anyone? 
  7. Who is the target for the initiative? 
  8. How is the initiative organized? 
  9. What improvements will it bring in a short/mid/long term?
  10. How can people get involved in helping with these initiatives?

Quick-takes

  • Has been contributing to the Out of the Box initiative for a while, together with podcast co-host Mario
  • 3 reasons why Drupal needs a better admin theme UI: content productivity, savings, and less frustration
  • Main goals: We have 2 separate paths: the super-fancy JS app that will land at an undefined point in the future, and Claro as the new realistic & releasable short-term work that will introduce improvements on each release.
  • Why focus on admin UI? We’re focusing on the content author's experience because that’s one of the main pain points mentioned in an early survey we did last year.
  • How is the initiative organized? JS, UX&User studies, New design system (UI), Claro (new theme)
  • What improvements will it bring in a short/mid/long term? Short: New theme/UI, Mid: editor role with specific features, autosave, Long: JS app. 

That’s it for today’s show, thanks for joining us!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast for more episodes and to subscribe to our newsletter.

Aug 20 2019
Aug 20

Firstly - what does End-of-Life mean?

When your version of Drupal reaches end of life, it doesn’t mean that your website will stop working immediately. It does, however, mean that you will be running a version that is no longer supported. Practically speaking, it means that there will cease to be any security updates provided for Drupal 7 core by the Drupal security team. Without security updates your website and hosting infrastructure could be vulnerable to new exploits/hackers. 

What are the options for your Drupal 7 website?

By far the best option is to upgrade your Drupal 7 site to the most current version of Drupal prior to the end-of-life date. At the time of writing, we are building all our new projects on Drupal 8 and that’s really the only option at this point. Fortunately we’ve gone through the upgrade process with a number of clients already and have some tools at our disposal to make the process as streamlined as possible. 

Now you may be asking: Shouldn’t I just wait until Drupal 9 is released? I don’t want to have to upgrade again so soon after upgrading to 8. The short answer is… no.

Due to the modern architecture employed by Drupal 8, the upgrade path to Drupal 9 (and beyond) is vastly improved and will be a much smoother process. In the words of Dries Buytaert (Drupal's founder), “Drupal 9 will be released in 2020, and it will be an easy upgrade.”

If the cost of an upgrade from Drupal 7 to Drupal 8 or 9 is prohibitive for your organization, another option would be to leave your site on Drupal 7 and secure a commercial Long-Term Support agreement (while saving your shekels for an upgrade). There will be approved third-party commercial vendors offering LTS security support for Drupal 7 core (and the major contributed modules), just as there are now for Drupal 6, but there are no guarantees for how long Drupal 7 will be supported by LTS vendors. By going this route you would also be missing out on some of the great benefits of Drupal 8, which include:
 

  • Modern architecture that will allow for much easier future upgrades
  • Enhanced authoring experience with inline editing & layout builder
  • Core support for responsive images
  • Core support for multilingual websites
     

What are the options for your Drupal 8 website?

If you’ve already invested in an upgrade or a fresh build on Drupal 8, then you’re in a great position! Because Drupal 9 is being built on the same architecture as Drupal 8, the upgrade path is straightforward. The most frequently used contributed modules will be usable in both Drupal 8 and 9, which is not how it worked with major version releases in the past, where we were waiting months or sometimes years for modules to be ported over to the latest version of Drupal.

You will not be looking at a full rebuild of your site or waiting for contributed modules to be updated for compatibility with Drupal 9. Drupal 9 is really just Drupal 8 with updates for third-party dependencies and with deprecated code removed.

So if your site is already on Drupal 8, the best thing you can do to make your site easy to upgrade is to keep your Drupal 8 site up-to-date. That means staying on top of the minor version releases and security updates. That way when the time comes, an upgrade to Drupal 9 will be akin to a Drupal 8 minor version upgrade. This is great news! 
 

How can Fuse help?

We’d love to hear from you in any of the following scenarios:

  • You have questions about your Drupal site that this post hasn’t answered for you 
  • You have a Drupal 7 site that needs to be upgraded and want to discuss the options
  • You have a Drupal site that needs ongoing security monitoring and updates
  • You are starting on a new Drupal project and require migration of content from another Drupal site or CMS
Aug 20 2019
Aug 20

It is no mystery why Drupal has been the chosen one for over a million diverse organizations all across the globe. Unsurprisingly, the reason behind the success of this open-source software is the devoted Drupal community. A diverse group of individuals who relentlessly work towards making Drupal stronger and more powerful every single day! To them, Drupal isn’t just a web CMS platform - Drupal is a Religion. A religion that unites everyone who believes that giving back is the only way to move forward. Where contributing to the Drupal project gives them meaning and purpose.

Recently, I had the privilege of interacting with a few of the most decorated and remarkable members of the Drupal community - who also happen to be Drupal’s top contributors. I asked them about the reason(s) behind their contributing to Drupal and what they do to make a difference. Their responses were incredible, honest and unfeigned.

Adrian Cid Almaguer

Senior Drupal Developer. Acquia Certified Grand Master - Drupal 8

I use Drupal every day and my career in recent years has been focused on it, so I want to work with something that I feel comfortable with and that meets my needs. If I find errors or something that can be done in a better way in projects I'm using or in the Drupal core, I open an issue in the project queue and, if I have the knowledge and the time, I create a patch for it. This is a way I can say THANKS to the Drupal community.

The strength of Drupal is the community and the contributed modules you can use to create your project. One person can’t create and maintain all the modules you will need, but if several of us take on the task of doing it, everything becomes easier. And it is not just code: we need documentation, we need examples, translations and many other things in the community. The only way to do this is if each Drupal user gives at least a small contribution to the community. So, when I contribute to Drupal, I’m helping you to have time to contribute to something that I may need in the future.

I maintain many Drupal modules, so basically the main contributions are creating, updating and migrating Drupal modules, but I contribute in other areas too. I contribute by translating Drupal into Spanish and moderating the user translations, I create patches for some projects I do not maintain, sometimes I review patches in the issue queue, I write and update module documentation, I make some contributions creating tests for Drupal modules, I give support to the community in the Slack channels and on the Drupal Stack Exchange site, and I help new contributors learn how to contribute projects to Drupal in the correct way. And as I’m a former teacher, I participate in regional Drupal events promoting how and why it is important to contribute to Drupal projects and how to do it.

I would love to maintain a Drupal core module but I don’t know if I will have the time to do it, so for the moment I will continue migrating to Drupal 8, evolving and keeping up to date the modules I maintain.

Alex Moreno

Technical Architect at Acquia

Contributing to open source is not just a good and healthy habit for the communities. It is also a healthy habit for your own projects and your self-improvement. Contributing validates your knowledge by opening it to everyone else, so you can get feedback that helps you improve, and it also ensures that your project is taking the right direction. For example when patching other contributed modules with fixes or improvements.

I enjoy writing code. My main contributions have been always on that direction. Although more recently I have been also helping on other tasks, like Spanish translations in Drupal 8 Umami.

Baddy Sonja Breidert

Co-Founder of 1xINTERNET

One of the reasons why I contribute to Drupal is to make Drupal more known in my area, get more people involved, attract new users, etc. I do my bit in contributing to the Drupal project by organising events like Drupal Europe and Drupal Camps in Germany and Iceland.

It is extremely gratifying to see new people from all over the world join the Drupal community - be it as developers, designers, volunteers, event organisers, testers or for example writing documentation. There are so many different ways to contribute!

And what happens over and over again is that people originally come for a very specific purpose, say a project they want to launch, and then stay in the community just because it is such a friendly, diverse and welcoming place! My work in the board of the Drupal Association confirms the old slogan over and over again: Come for the code, stay for the community!

Daniel Wehner

Senior Drupal Engineer at Times Higher Education

Unlike many other projects, the Drupal community tries to create a sustainable environment, both from the technical side and, probably more important in the long run, from the community side. Initiatives like Drupal Diversity & Inclusion lay the foundation for a project which won't just go away like many others.

Jacob Rockowitz

Drupal developer. Built and maintains the Webform module for Drupal 8

Contributing to open source software provides me with an endless collaborative challenge. My professional livelihood is tied to the success of Drupal which inspires me to give something back to the Drupal community. Contributing to Drupal also provides me with an intellectual and social hobby where I get to interact with new people every day.

Everyone has a personal groove/style for building software. After 20 years of writing software, I have come to accept that I like working towards a single goal/project, which is the Webform module for Drupal 8. At the same time, I have also learned that building open source software is more than just contributing code; it is about supporting and creating a community around the code. Supporting the Drupal community has also led me to write documentation, blog about Drupal, Webform, and sustainability, present at conferences, and address the bigger picture around building and maintaining software.

Joel Pittet

Web Coder. Drupal 8 Theme System Co-maintainer

I feel that I should give back to ensure the tools I use keep working. Monetarily or with my time. And with Drupal it’s a bit of both:

I started submitting patches for the Twig initiative for Drupal core, then mentoring and talks at DrupalCons and camps, followed by some contrib patches, then offered to co-maintain some commerce modules, which snowballed into more and more contrib module co-maintaining, mostly for ones I use at work.

I pay the Drupal Association individual membership to help the teams for all the Drupal.org work and event work they do.

Joachim Noreiko

Freelance Drupal developer. Built and Maintains Drupal Code Builder

I guess, I like fixing stuff, I like to code a bit in my spare time, I like to contribute to Drupal, and as a freelancer, it’s good to be visible in the community.

Lately I’ve actually been feeling a bit demotivated. I’ve been contributing to core a bit, but it’s always an uphill struggle getting beyond an initial patch. I maintain a few contrib modules, and my Drupal Code Builder tool as well.

Joris Vercammen (borisson)

Drupal developer, Search API + Facets

Being able to pull so many awesome modules for free really makes the work we all do in building good solutions for our customers a lot easier. This system doesn’t work without some of us putting things (code/time/blogposts/…) back into it. The Drupal community has given me a lot of things unrelated to just the software as well (really awesome friends, a better job, the ability to travel all over Europe, etc.). To enable others that come after me to have a similar experience, I think that it is important to give back, as long as it fits in the schedule.

Most of my contributions are under the form of code. I try to do some mentoring but while that is a lot more effective, it is really hard and I’m not that great at it, yet. I’m mostly interested in the Search API ecosystem because that’s what I got roped in to when I started contributing. A lot of my core contributions are for blockers (of blockers of blockers) for things that we need. I try to focus a little bit on the Facets module, since that is what I’m responsible for, but it’s not always easy or the most fun to do. Especially since I’ve still not built a Drupal 8 site with facets on it.

Malabya

Open-source evangelist. Drupal Practice Head at Specbee

Community. That’s what motivates me to contribute. The feeling I get when someone uses your code or module or theme is great. Which is a good drive to motivate for more contributions. Drupal being open-source software, it is where it is only because of the contributions of thousands of contributors. So, when we use Drupal it is our responsibility to contribute back to the software to make it even better for a wider reach.

Apart from contributing modules, theme & distributions I help in organising local meetups in Bangalore and mentoring new developers to contribute and begin their contribution journey from the root level. This gives me immense pleasure when I can help someone to introduce to the world of Drupal and make them understand about the importance of contributions and community. Going forward, I would definitely strive towards introducing Drupal to students giving them a career choice and bring in more members to the Drupal community.

Nick Wilde

Drupal developer at Taoti Creative

My main motivation has always been improving what I use - my first open-source contribution before my Drupal days was a bug-fix for a then-abandoned project that was impairing my modding of TES-III Morrowind ;). I like the challenges and benefits of working in a community. Code reviews, both those I've done and those done on my code, have been incredibly important to my growth as a developer. I also have used it as a portfolio/career advancement method, although that is only of tertiary importance to me. Seeing a test go green or getting confirmation that a bug is fixed is incredibly satisfying to me personally. Also, I believe if you use an open source project, especially professionally, contributing back is the right thing.

My level of contributions vary a fair bit depending on my personal and professional level of busy, but mostly through contrib module maintenance/patch submissions. Also in the last year or so, I've been getting into a lot more mentorship roles - both in my new company and within the broader community. Restarted my local Drupal meetup and am doing presentations there regularly.

Rachel Norfolk

Community Liaison at Drupal Association

Contribution for me is, at least partly, a selfish act. I have learned so much from some of the best people in the industry, simply by following along and helping where I can. I have also built up an amazing network of people who, because they know I help others, are more prepared to help me when I need it. Both code and other ways of contributing. I’m occasionally in the Drupal core issue queues, I help mentor others and I get involved in community issues.

Renato Goncalves

Software Engineer at CI&T's Drupal Competence Office

My first motivation to contribute to the Drupal community is helping others that have the same requirement as mine. To be honest, I get very happy when someone uses my community code in their projects. I'm glad to know that I'm helping people. When I'm developing a new feature I check if my solution can be useful to other projects and that way I create my code in a generic way. Usually, I'm the first to reuse the code several times. I think this is important to make Drupal a powerful and collaborative framework. I liked my first experience using the framework because for each requirement of my project, Drupal has a solution. I think contributing to the community is important for that. More and more new people are going to use the framework, and consequently new contributors, and in that way, it becomes increasingly powerful and efficient. An example of this is the Drupal Security Team, where they work hard to ensure that Drupal is a secure framework. I'm making contributions at the same time I deliver projects. Today I write my code in a generic way, that is, the code can be reused at other times. A good example of this model is the Janrain Connect project. This project is official in the community (a contrib project) and my team and I worked hard using 100% generic code, so we can reuse this code in other cases.

When we need to make some improvement in the code, the first point is checking a way to make this improvement using a generic solution. Using this approach we can help our project and help the community. In this way, we are contributing to making an organized and agile framework. The goal is that other people don't need to re-write code. It is a way of transforming the framework into a collaborative model.

Thomas Seidl

Drupal developer, “The Search API Guy”

My motivation comes from several sources: First off, I just like programming, and while fixing bugs, writing tests or giving support isn’t always fun, a lot of the time working on my modules is. It’s just one of my hobbies in that regard. Then, with my modules running on more than 100,000 sites (based on the report), there’s both a sense of accomplishment and responsibility – I feel proud in providing functionality for so many sites, and while, as a volunteer, I don’t feel directly responsible for them, I still want to help improve them where I can, take away pain points and ensure they keep running. And lastly, having a popular, well-maintained module is also the base of my business as a freelancer: it not only provides marketing for my abilities, but also the very market of users who want customizations. So, maintaining and improving my modules is also, indirectly, important for my income, even though the vast majority of my contributed work is unpaid.

Apart from participating in coding standards discussions, I almost exclusively contribute by maintaining my modules (and, increasingly rarely, adding new ones) – fixing bugs, adding features, answering support requests, etc. I sometimes also provide patches for other modules, but generally only when I’m paid to do so. (“My modules” being Search API and its add-on modules Database Search, Autocomplete, Saved Searches and, for D7 only, Solr, Pages, Location and Multi-Index Searches.)

And Lastly....

It’s not just brands that have adopted Drupal as their CMS – they are the cream of brands. From NASA to the Emmy Awards. From Harvard University to eBay. From Twitter to the New York State. These brands have various reasons to choose Drupal as their Content Management System. Drupal’s adaptability to any business process, advanced UX and UI capabilities for an interactive and personalized experience, load-time optimization functionalities, easy content authoring and management, high-security standards, the API-first architecture and so much more!

The major reason why Drupal is being accepted and endorsed by more than a million websites today is because Drupal is always ahead of the curve. Especially since Drupal adopted a continuous innovation model wherein updated versions are released every 6-months with seamless upgrade paths. All of this is possible because of the proactive and ever-evolving Drupal community. The goals for their contributions may vary - from optimizing projects for personal/professional success to creating an impact on others or simply to gain more experience. Either way, they are making a difference and taking Drupal to the next level every time they contribute. Thanks to all the contributors who are making Drupal a better place.

I’d like to end with an excerpt from Dries - “It’s really the Drupal community and not so much the software that makes the Drupal project what it is. So fostering the Drupal community is actually more important than just managing the code base.”

Warmly thanking all the mentioned contributors for helping me put this article together.

drupal community infographic
Aug 19 2019
Aug 19

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

Low code no code

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year-long rise, starting with the first web content management systems in the early 1990s. Since then, no-code and low-code solutions have had an increasing impact on the web.

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the “Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019” report, “In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year.”

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Aug 19 2019
Aug 19

Experience

My experience with healthcare, Drupal, and webforms

For the past 20 years, I have worked in healthcare helping Memorial Sloan Kettering Cancer Center (MSKCC) evolve their digital platform and patient experience. About ten years ago, I persuaded MSKCC to switch to Drupal 6, which was followed by a migration to Drupal 8. More recently, I have become the maintainer of the Webform module for Drupal 8. Now, I want to leverage my experience and expertise in healthcare, webforms, and Drupal, to start exploring how we can improve patient and caregiver’s digital experience related to online appointment requests.

It’s important that we understand the problem/challenge of requesting an appointment online, examine how hospitals are currently solving this problem, and then offer some recommendations and ways to improve existing approaches. Instead of writing one very long blog post, I’m going to break up this discussion into a series of three blog posts. This initial post is going to address the patient journey and experience around an appointment request form.

These blog posts are not Drupal-specific, but my goal is to create and share an exemplary "Request an appointment" form template for the Webform module for Drupal 8.

Improving patient and caregiver’s digital experience

Improving the patient and caregiver digital experience is a very broad, massive, and challenging topic. Personally, my goal when working with doctors, researchers, and caregivers is…

Make it easier for patients to find the care they need as well as make it easier for caregivers to give the care that patients need.

Making things "easy" for patients and caregivers in healthcare is easier said than done. Healthcare systems are complex machines with multiple players dealing with different siloed technologies. Complexity in healthcare leads to a lot of frustration, which has become expected and accepted when entering into a doctor's office or hospital, or going online to file an insurance claim. The best way to make something easier is to simplify it.

Patients and caregivers want a simpler digital experience

In technology, the biggest disruptors of different industries share a commonality of making things simpler. Ride-sharing services, like Uber and Lyft, have disrupted the taxi and limousine industry by making it simple and easy to book-a-ride using a phone. In healthcare, services like ZocDoc, empower patients to book doctor appointments on a computer or phone. Healthcare systems and hospitals need to recognize that a simpler digital and in-person experience is what consumers want. Simpler digital experiences need to be thought about and provided by all aspects of a healthcare organization. Stepping back, we need to remember that digital is one part of a patient’s journey through a healthcare system.

Waiting room

Where does a patient’s journey begin?

There are dozens of doors that a patient may enter to get care; it could be a referral from a friend, an ambulance pulling into an ER, a banner advertisement, or a comprehensive search on the internet. Online searches inevitably lead to a hospital's website. A hospital's website is where many patients' digital journeys begin.

A hospital's website is its digital entrance to the hospital

The notion that a hospital should envision their website as another entrance to the building gives people a tangible comparison for a patient's digital experience. The hospital should view the digital and the physical experiences as equally important. A hospital's lobby and website both need to be clean, organized, and easy to navigate. When a patient walks into a hospital or clicks on a link, we want them to know where they need to go as well as how to get the care they need. Both the website and the lobby need to have clear, straightforward instructions lest they find they have a frustrated and annoyed patient in their midst.

To get care, patients need to provide information about themselves: Who are they? What is wrong? Do they have insurance? These questions are answered using a paper or digital form, which begins a patient's journey.

Forms are important to healthcare

Healthcare is driven by information. This information needs to be collected and stored. The more information that can be collected about a patient, the easier it is to provide them with the care they need. At the same time, it is equally important to collect the "right" information about a patient at the "appropriate time." Obtaining the "right" information requires asking the "right" questions and collecting the most complete and accurate answers. Knowing when is the "appropriate time" to ask a question is even more challenging. For example, collecting a patient's entire medical history is not required to book a patient's first appointment. Many patients' digital experiences begin with an appointment form.

An appointment form begins or ends a patient's journey

Many years ago, while discussing the usability of some minor aspect of a request an appointment form, I noted that:

If someone wants to request an appointment, they are going to fill out this form no matter what.

There is some truth to this statement because patients have lots of patience (pun intended) when dealing with healthcare. Would an unusable request an appointment form ultimately end a patient's journey? Probably not.

The hospital that provides the cleanest, simplest, and most accessible digital experience will distinguish themselves.

Appointment forms set the tone for a patient's experience with a hospital.

Types of appointment form

Booking an appointment online is not an easy task. Most hospital websites offer a "request an appointment" form that collects a potential patient's contact information, which leads to someone calling the patient back to "schedule an appointment." Services like ZocDoc allow patients to "schedule an appointment" without having to interact with a physical person. Even though we are primarily talking about a patient's digital journey, we can't forget that people still pick up the phone and call to schedule an appointment. We should never underestimate the power of a phone call. For example, parents of pediatric patients need the caring voice of a live person telling them everything is going to be okay.

We are going to focus on a hospital's request an appointment form, which is used to collect the information needed for a patient to schedule an appointment.

Recommendations

Recommendations for all forms

Before diving into healthcare-specific forms, it helps to step back and understand what makes a form easy to understand and use. Drupal and the Webform module follow industry-standard best practices around ensuring that forms are accessible to people with disabilities, available to mobile and desktop users, and provide a clean and predictable layout with standard error validation. For example, in the Webform module for Drupal, forms are laid out vertically with top-aligned labels, which have been shown to make it easier for users to complete forms.

My top three general recommendations for building webforms are:

Keep it simple. This will reduce frustration and ensure that users will complete the webform.

Group and organize labels and inputs. The visual and information layout of a form matters. Grouping related inputs with proper labeling help users to understand what information is needed to complete a form.

Communicate what information is expected, required, and optional. Users need visual indicators that show how to complete a form and with clear error messages when data is missing.

Using the Webform module, or any other enterprise form builder, can significantly help when building accessible and usable forms. The challenge for building a request an appointment form is asking the right questions using the best approach.

The best approach for a request an appointment form

Figuring out who is filling out the form should be the first question on any request appointment form. Knowing this single piece of information makes it possible to present the user with a form that asks the right questions. Once we know who someone is, a form can be customized, or we can direct the user to the appropriate form. For example, international appointment requests tend to require additional contact information, including callback times, which can be better addressed in a dedicated form.

Request appointment forms ask questions that lead to conversations.

Request an appointment forms are conversations

Appointment request forms result in a call back from a person, which results in conversation. Forms are essentially a digital conversation with an end user. Appointment request forms should start with general questions and lead to more specific information. The three questions that need to be answered are…

Not surprisingly, these questions are identical to what a nurse or doctor asks when in-taking a new patient. These questions lead to the collection of different types of information which can be organized into groupings. For example, the answer to "Who are you?" is mainly contact information and the answer to "What brought you here today?" is healthcare information. Appointment request forms should guide patients through the discussion about their specific health-related issue.

In-person, when a nurse or doctor asks a question, based on the patient's answer, they will choose which question should or should not be asked next. Online forms can provide a similar user experience by using conditional logic to hide and show questions based on previous answers. Conditional logic can also be used to determine which questions should be required vs optional.
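As a minimal sketch of what this could look like in the Webform module's YAML element definitions, the following assumes a made-up "Who are you?" question that conditionally reveals and requires a caregiver-only field; the element names and options are illustrative, not taken from a real hospital form:

# Hypothetical Webform elements (Webform YAML source).
who_are_you:
  '#type': radios
  '#title': 'Who are you?'
  '#required': true
  '#options':
    patient: 'Patient'
    caregiver: 'Caregiver or family member'
caregiver_name:
  '#type': textfield
  '#title': 'Your name (caregiver)'
  # Only visible and required when "Caregiver" is selected above.
  '#states':
    visible:
      ':input[name="who_are_you"]':
        value: caregiver
    required:
      ':input[name="who_are_you"]':
        value: caregiver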

If a request an appointment form is a digital conversation, nothing about this experience should be frustrating. We want to ensure that a patient can book an appointment. With this goal in mind, every appointment form should include a phone number that a patient or caregiver can call to schedule an appointment.

Once the conversation has ended via the patient submitting the form, make sure to confirm their appointment request. Then, tell the patient what to expect next.

How do top US hospitals approach their online appointment request forms?

My next blog post is going to examine how top US hospitals (ranked by US News) are handling online appointment requests. Before reading my evaluations and opinions, you might want to visit the Mayo Clinic, Cleveland Clinic, Johns Hopkins Hospital, and Massachusetts General's appointment request forms and ask yourself, if I were a new patient…

  • How would I feel when filling out these request an appointment forms?

  • Are these appointment request forms providing a good digital experience?

  • What type of patient journey would one expect at this institution?


Aug 19 2019
Aug 19

In the previous two blog posts, we learned to migrate data from JSON and XML files. We also presented how to configure the migrations to fetch remote files. In today's blog post, we will learn how to add HTTP request headers and authentication to the request. For HTTP authentication, you need to choose among three options: Basic, Digest, and OAuth2. To provide this functionality, the Migrate API leverages the Guzzle HTTP Client library. Usage requirements and limitations will be presented. Let's begin.

Example config to add HTTP headers and authentication

Migrate Plus architecture for remote data fetching

The Migrate Plus module provides an extensible architecture for importing remote files. It makes use of different plugin types to fetch the file, add HTTP authentication to the request, and parse the response. The following is an overview of the different plugins and how they work together to allow code and configuration reuse.

Source plugin

The url source plugin is at the core of the implementation. Its purpose is to retrieve data from a list of URLs. Ingrained in the system is the goal to separate the file fetching from the file parsing. The url plugin will delegate both tasks to other plugin types provided by Migrate Plus.

Data fetcher plugins

For file fetching, you have two options. The first is file, a general-purpose fetcher for getting files from the local file system or via stream wrappers. This plugin has been explained in detail in the posts about JSON and XML migrations. Because it supports stream wrappers, this plugin is very useful for fetching files from different locations and over different protocols. But it has two major downsides. First, it does not allow setting custom HTTP headers or authentication parameters. Second, this fetcher is completely ignored if used with the xml or soap data parser (see below).

The second fetcher plugin is http. Under the hood, it uses the Guzzle HTTP Client library. This plugin allows you to define a headers configuration. You can set it to a list of HTTP headers to send along with the request. It also allows you to use authentication plugins (see below). The downside is that you cannot use stream wrappers. Only protocols supported by curl can be used: http, https, ftp, ftps, sftp, etc.

Data parser plugins

Data parsers are responsible for processing the files according to their type: JSON, XML, or SOAP. These plugins let you select a subtree within the file hierarchy that contains the elements to be imported. Each record might contain more data than what you need for the migration. So, you make a second selection to manually indicate which elements will be made available to the migration. Migrate Plus provides four data parsers, but only two of them use the data fetcher plugins. Here is a summary:

  • json can use any of the data fetchers. It offers an extra configuration option called include_raw_data. When set to true, in addition to all the fields manually defined, a new one is attached to the source with the name raw. This contains a copy of the full object currently being processed.
  • simple_xml can use any data fetcher. It uses the SimpleXML class.
  • xml does not use any of the data fetchers. It uses the XMLReader class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
  • soap does not use any data fetcher. It uses the SoapClient class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.

The difference between xml and simple_xml was presented in the previous article.

Authentication plugins

These plugins add authentication headers to the request. If the credentials are correct, you can fetch data from protected resources. They work exclusively with the http data fetcher. Therefore, you can use them only with the json and simple_xml data parsers. To do that, you set an authentication configuration whose value can be one of the following:

  • basic for HTTP Basic authentication.
  • digest for HTTP Digest authentication.
  • oauth2 for OAuth2 authentication over HTTP.

Below are examples for JSON and XML imports with HTTP headers and authentication configured. The code snippets do not contain real migrations. You can also find them in the ud_migrations_http_headers_authentication directory of the demo repository https://github.com/dinarcon/ud_migrations.

Important: The examples are shown for reference only. Do not store any sensitive data in plain text or commit it to the repository.

JSON and XML Drupal migrations with HTTP request headers and Basic authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the basic authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: basic
    username: totally
    password: insecure

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and Digest authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the digest authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: digest
    username: totally
    password: insecure

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and OAuth2 authentication.

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml

  urls:
    - https://understanddrupal.com/files/data.json

  item_selector: /data/udm_root

  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'

  # This configuration is provided by the oauth2 authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: oauth2
    grant_type: client_credentials
    base_uri: https://understanddrupal.com
    token_url: /oauth2/token
    client_id: some_client_id
    client_secret: totally_insecure_secret

  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

What did you learn in today’s blog post? Did you know the configuration names for adding HTTP request headers and authentication to your JSON and XML requests? Did you know that this was limited to the parsers that make use of the http fetcher? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 19 2019
Aug 19

Making sure your website is accessible is becoming a necessity - and with all the right reasons. The web is for everyone and, as such, everyone should be able to use it effectively, no matter their physical ability. Sites that are inaccessible automatically prevent a large number of people from using them.

We’ve already written a series of blog posts on Drupal and accessibility - you can check them out here: part 1 & part 2. As you can probably glean from these two posts, Drupal offers a lot of accessibility features out-of-the-box, e.g. the requirement of alt text for images in Drupal 8 (another strong case, by the way, for migrating to Drupal 8 ASAP). 

The second part of the series also takes a look at a few contributed modules with which you can further improve the accessibility of a Drupal website. During the time since the blog post’s publication, however, there have been many more accessibility-focused modules contributed to the Drupal project - and these are what we’ll take a closer look at in this post. 

Accessibility toolkit (& Accessibility)

While only available for Drupal 7, the Accessibility toolkit (the a11y module) is an invaluable resource for Drupal developers that are tasked with building user-friendly and accessible sites. It allows for: dyslexic font support, high contrast mode, inverted colors mode and text scaling. 

On top of that, it also provides support for simulating specific disabilities. Since it’s quite difficult for an able-bodied person to put themselves in the shoes of a disabled person, these simulations greatly help developers to feel empathy by reproducing the symptoms of certain disabilities such as dyslexia or colorblindness. 

If you’re looking for a module with similar capabilities that can also be used in Drupal 8, the Accessibility module is the one closest to the a11y module - it’s geared more towards content editors and site maintainers, though. It provides a set of accessibility tests that check the content published by your editors and other users for accessibility errors, such as a missing alt text (granted, with Drupal 8 alt text is already required by default).

So, for a Drupal 7 site, these two modules can be employed in tandem: one is used for ensuring accessibility in development, while the other is used in the live environment to make sure that the content and design meet accessibility standards. Just a disclaimer, though: the Accessibility module is not covered by Drupal’s security advisory policy, since it uses the QUAIL jQuery plugin which is no longer supported.

Accessibility Scanner

Accessibility Scanner is a relatively new module; the first development version was released in March, while the latest alpha version was released just about two months ago (June 20). With this module, you can use Drupal together with achecker to perform web accessibility scans directly in the Drupal admin interface. 

Style Switcher

The Style Switcher module provides incredibly useful functionality for visitors that suffer from color blindness. It allows themers to create themes with alternate stylesheets, and site builders to add other alternate stylesheets right in the admin section. 

A site visitor is then presented with all those styles as links in a block, and they can choose the one that they prefer, e.g. one with the optimal contrast for their specific type of color blindness.

The module is available for both Drupal 7 and 8, but the Drupal 8 version is still only in alpha.

Block ARIA Landmark Roles

This module was already mentioned in part 2 of our series on web accessibility in Drupal; it’s available for Drupal 7 and 8. It allows you to assign ARIA landmark roles and/or ARIA labels to a block, which makes it easier for screen readers and other assistive technologies to identify the type and purpose of a certain piece of content. This greatly simplifies site navigation for visitors using such technologies. 

Text Resize

While it’s quite easy to resize the text of a page using the keyboard (‘ctrl’ and either ‘+’ or ‘-’), not everyone browsing the web is aware of that. The Text Resize module, available for both Drupal 7 and Drupal 8, allows visitors to change the font size of a text through a special block. It also comes with a ‘reset’ option which has to be enabled from the admin page.

Automatic Alternative Text

With this Drupal 8 module, you can automatically generate an alt text for an image for which the user hasn’t provided any. This is done using the Microsoft Azure Cognitive Services API.

It provides one or more descriptions of an image which are ordered according to their confidence. The default descriptions are in English, but it is also possible to translate them into other languages. 

Providing an alternative text is crucial for blind or visually impaired visitors using screen readers, as it is pretty much the only means for them to take in the full content of a page. On top of that, images with the provided alt text are more SEO-friendly and thus help with your site's search engine ranking.

Even though Drupal 8 demands alt text by default for content creators, content submitted by users should also include it, and this module enables just that.

Fluidproject UI Options

The UI Options module by Fluid enables users to modify a page’s font size, line height, font style, contrast and link style according to their preferences. All changes made are retained thanks to cookies. 

The module does have some limitations, however. Bootstrap themes, for example, need some additional CSS for font-sizing and line heights to work as they should, and elements that use CSS gradients can’t have their contrast settings changed. 

htmLawed

This is a very useful module not just in the context of accessibility, but also security. It restricts and purifies HTML code so that it complies with the site administrator's policy, web standards, and security best practices. 

Using this module, you’re able to autocorrect and beautify HTML markup as well as restrict HTML elements, attributes and URL protocols in the input. Moreover, it also balances tags and ensures that HTML elements are properly nested, transforms deprecated tags and attributes, etc. 

HTML Purifier

A very similar module to the just mentioned htmLawed, the HTML Purifier filter library is again perfect for meeting both security and accessibility requirements. It removes malicious code from your website while also ensuring W3C standards compliance. 

HTML Purifier is a great fit for Drupal as it works really well with WYSIWYG editors. With it, you get a lot of options, such as custom fonts, tables, inline styling, and many more. It’s available both for Drupal 7 and 8.

Conclusion

This was our list of modules for Drupal 7 and 8 that take care of different aspects of web accessibility. Depending on what security measures you’ve already implemented and what your team’s best practices are, you likely won’t need to employ every single module on this list.

Still, we wanted to give an overview of different options so that you can pick and choose the one that best fits your needs. These modules provide accessibility resources for both developers and content editors, as well as visitors using the site, so you’re sure to find a combination that works for you.

If you're still experiencing accessibility issues or are in need of a complete accessibility overhaul, give us a shout out and let our experienced and proven developers help you make your site accessible to everyone.
 

Aug 19 2019
Aug 19

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

Low code no code

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019 report, "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

August 19, 2019

2 min read time

Aug 19 2019
Aug 19

This is a short story on an interesting problem we were having with the Feeds module and Feeds directory fetcher module in Drupal 7.

Background on the use of Feeds

Feeds for this site is being used to ingest XML from a third party source (Reuters). The feed perhaps ingests a couple of hundred articles per day. There can be updates to the existing imported articles as well, but typically they are only updated the day the article is ingested.

Feeds had been working well for a few years, and then, all of a sudden, the ingests started to fail. The failure occurred only on production, whilst on the other (lower) environments the ingestion worked as expected.

The bizarre error

On production we were experiencing the error during import:

PDOStatement::execute(): MySQL server has gone away database.inc:2227 [warning] 
PDOStatement::execute(): Error reading result set's header [warning] 
database.inc:2227PDOException: SQLSTATE[HY000]: General error: 2006 MySQL server has [error]

The error is not so much that the database server is not alive, but rather that PHP's connection to the database has been severed due to exceeding MySQL's wait_timeout value.

The reason this would occur only on production is that, on Acquia, this typically happens when you need to read from and write to the shared filesystem a lot. On lower environments the filesystem is local disk (as those environments are not clustered), so access is a lot faster. On production, the public filesystem is a network file share (which is slower).

Going down the rabbit hole

Working out why Feeds wanted to read and/or write so many files on the filesystem was the next question, and one thing immediately stood out: the sheer size of the config column in the feeds_source table:

mysql> SELECT id,SUM(char_length(config))/1048576 AS size FROM feeds_source GROUP BY id;
+-------------------------------------+---------+
| id                                  | size    |
+-------------------------------------+---------+
| apworldcup_article                  |  0.0001 |
| blogs_photo_import                  |  0.0003 |
| csv_infographics                    |  0.0002 |
| photo_feed                          |  0.0002 |
| po_feeds_prestige_article           |  1.5412 |
| po_feeds_prestige_gallery           |  1.5410 |
| po_feeds_prestige_photo             |  0.2279 |
| po_feeds_reuters_article            | 21.5086 |
| po_feeds_reuters_composite          | 41.9530 |
| po_feeds_reuters_photo              | 52.6076 |
| example_line_feed_article           |  0.0002 |
| example_line_feed_associate_article |  0.0001 |
| example_line_feed_blogs             |  0.0003 |
| example_line_feed_gallery           |  0.0002 |
| example_line_feed_photo             |  0.0001 |
| example_line_feed_video             |  0.0002 |
| example_line_youtube_feed           |  0.0003 |
+-------------------------------------+---------+
What 52 MB of ASCII looks like in a single cell.

Having to deserialize 52 MB of ASCII in PHP is bad enough.

The next step was dumping the value of the config column for a single row:

drush --uri=www.example.com sqlq 'SELECT config FROM feeds_source WHERE id = "po_feeds_reuters_photo"' > /tmp/po_feeds_reuters_photo.txt
Get the 55 MB of ASCII in a file for analysis

Then open the resulting file in vim:

"/tmp/po_feeds_reuters_photo.txt" 1L, 55163105C
Vim struggles to open any file that has 55 million characters on a single line

And sure enough, inside this config column was a reference to every single XML file ever imported, a cool ~450,000 files.

a:2:{s:31:"feeds_fetcher_directory_fetcher";a:3:{s:6:"source";s:23:"private://reuters/pass1";s:5:"reset";i:0;
s:18:"feed_files_fetched";a:457065:{
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:94:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_KBN1JU0WR_RTROPTT_0_US-CHINA-AUTOS-GM.XML";i:1530693632;
s:96:"private://reuters/pass1/topnews/2018-07-04T083557Z_1_LYNXMPEE630KJ_RTROPTP_0_USA-TRADE-CHINA.XML";i:1530693632;
s:97:"private://reuters/pass1/topnews/2018-07-04T083617Z_147681_KBE99T04E_RTROPTT-LNK_0_OUSBSM-LINK.XML";i:1530693632;
s:102:"private://reuters/pass1/topnews/2018-07-04T083658Z_1_KBN1JU0X2_RTROPTT_0_JAPAN-RETAIL-ZOZOTOWN-INT.XML";i:1530693632
457,065 is the array size in feed_files_fetched

So this is the root cause of the problem: Drupal is attempting to stat() ~450,000 files that do not exist, and these files are on a network file share. This process took longer than MySQL's wait_timeout, so MySQL closed the connection. When Drupal finally wanted to talk to the database, the connection was gone.

Interestingly enough, the problem of the config column running out of space came up in 2012, and "the solution" was just to change the type of the column. Now you can store 4 GB of content in this one column. In hindsight, perhaps this was not the smartest solution.

Also in 2012, you see the comment from @valderama:

However, as feed_files_fetched saves all items which were already imported, it grows endless if you have a periodic import.

Great to see we are not the only people having this pain.

The solution

The simple solution to limp by is to increase the wait_timeout value of your database connection. This gives Drupal more time to scan for the previously imported files prior to importing the new ones.

$databases['default']['default']['init_commands'] = [
  'wait_timeout' => "SET SESSION wait_timeout=1500",
];
Increasing MySQL's wait_timeout in Drupal's settings.php.

As you might guess, this is not a good long term solution for sites with a lot of imported content, or content that is continually being imported.

Instead we opted to do a fairly quick update hook that would loop through all of the items in the feed_files_fetched key, and unset the older items.

<?php

/**
 * @file
 * Install file.
 */

/**
 * Function to iterate through multiple strings.
 *
 * @see https://www.sitepoint.com/community/t/strpos-with-multiple-characters/2004/2
 * @param $haystack
 * @param $needles
 * @param int $offset
 * @return bool|int
 */
function multi_strpos($haystack, $needles, $offset = 0) {
  foreach ($needles as $n) {
    if (strpos($haystack, $n, $offset) !== FALSE) {
      return strpos($haystack, $n, $offset);
    }
  }
  return FALSE;
}

/**
 * Implements hook_update_N().
 */
function example_reuters_update_7001() {
  $feedsSource = db_select("feeds_source", "fs")
    ->fields('fs', ['config'])
    ->condition('fs.id', 'po_feeds_reuters_photo')
    ->execute()
    ->fetchObject();

  $config = unserialize($feedsSource->config);

  // We only want to keep the last week's worth of imported articles in the
  // database for content updates.
  $cutoff_date = [];
  for ($i = 0; $i < 7; $i++) {
    $cutoff_date[] = date('Y-m-d', strtotime("-$i days"));
  }

  watchdog('FeedSource records - Before trimmed at ' . time(), count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched']));

  // We attempt to match based on the filename of the imported file. This works
  // as the files have a date in their filename.
  // e.g. '2018-07-04T083557Z_1_KBN1JU0WQ_RTROPTC_0_US-CHINA-AUTOS-GM.XML'
  foreach ($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'] as $key => $source) {
    if (multi_strpos($key, $cutoff_date) === FALSE) {
      unset($config['feeds_fetcher_directory_fetcher']['feed_files_fetched'][$key]);
    }
  }

  watchdog('FeedSource records - After trimmed at ' . time(), count($config['feeds_fetcher_directory_fetcher']['feed_files_fetched']));

  // Save back to the database.
  db_update('feeds_source')
    ->fields([
      'config' => serialize($config),
    ])
    ->condition('id', 'po_feeds_reuters_photo', '=')
    ->execute();
}

Before the code ran, there were > 450,000 items in the array, and afterwards we were below 100. That is a massive decrease in database size.

More importantly, the importer now runs a lot quicker (as it is not scanning the shared filesystem for non-existent files).

Aug 18 2019
Aug 18

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Example configuration of JSON source migration

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD JSON source migration whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ],
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ],
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, what fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{
  "data": {
    "udm_people": [
      {
        "unique_id": 1,
        "name": "Michele Metts",
        "photo_file": "P01",
        "book_ref": "B10"
      },
      {...},
      {...}
    ]
  }
}

The array of records containing node data lies two levels deep in the hierarchy. Starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin  is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.
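
For illustration purposes only, the urls configuration for reading two files at once might look like the following sketch. The file paths are hypothetical; both files would need to share the same data/udm_people structure so that the item_selector and fields settings apply to each of them.

urls:
  - modules/custom/my_module/sources/people_part_1.json
  - modules/custom/my_module/sources/people_part_2.json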

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contained spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that would uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label or the selector. The configuration of the migration lookup plugin and dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{
  "data": {
    "udm_book_paragraph": [
      {
        "book_id": "B10",
        "book_details": {
          "title": "The definite guide to Drupal 7",
          "author": "Benjamin Melançon et al."
        }
      },
      {...},
      {...}
    ]
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy as needed to find the value that should be assigned to the field. Each level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to configure the JSON file is to have two properties. paragraph_id would contain the unique identifier for the record. paragraph_data would be an object with a property to set the paragraph type. This would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields.
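
A minimal sketch of such a setup is shown below. All machine names, file paths, and property names here are hypothetical and only meant to illustrate the idea; adapt them to your own content model.

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/my_module/sources/example_paragraphs.json
  item_selector: data/example_paragraphs
  fields:
    - name: src_paragraph_id
      label: 'Paragraph ID'
      selector: paragraph_id
    - name: src_paragraph_type
      label: 'Paragraph type'
      selector: paragraph_data/type
  ids:
    src_paragraph_id:
      type: string
process:
  # Map the paragraph bundle from the source data instead of relying on default_bundle.
  type: src_paragraph_type
destination:
  plugin: 'entity_reference_revisions:paragraph'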

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{
  "data": {
    "udm_photos": [
      {
        "photo_id": "P01",
        "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg",
        "photo_dimensions": [240, 351]
      },
      {...},
      {...}
    ]
  }
}
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.
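
If you did want to import the dimensions, the extra entries in the fields configuration could look like the following sketch. The field names are hypothetical.

fields:
  - name: src_photo_width
    label: 'Photo width'
    selector: photo_dimensions/0
  - name: src_photo_height
    label: 'Photo height'
    selector: photo_dimensions/1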

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location to the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON file location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.

Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 17 2019
Aug 17

Today we will learn how to migrate content from a Comma-Separated Values (CSV) file into Drupal. We are going to use the latest version of the Migrate Source CSV module which depends on the third-party library league/csv. We will show how to configure the source plugin to read files with or without a header row. We will also talk about a new feature that allows you to use stream wrappers to set the file location. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD CSV source migration whose machine name is ud_migrations_csv_source. It comes with three migrations: udm_csv_source_paragraph, udm_csv_source_image, and udm_csv_source_node.

You can get the Migrate Source CSV module using composer: composer require drupal/migrate_source_csv. This will also download its dependency: the league/csv library. The example assumes you are using the 8.x-3.x branch of the module, which requires composer to be installed. If your Drupal site is not composer-based, you can use the 8.x-2.x branch. Continue reading to learn the difference between the two branches.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain CSV migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from a CSV file.

Note that you can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating CSV files with a header row

In any migration project, understanding the source is very important. For CSV migrations, the primary thing to consider is whether or not the file contains a row of headers. Other things to consider are what characters to use as delimiter, enclosure, and escape character. For now, let’s consider the following CSV file whose first row serves as column headers:

unique_id,name,photo_file,book_ref
1,Michele Metts,P01,B10
2,Benjamin Melançon,P02,B20
3,Stefan Freudenberg,P03,B30

This file will be used in the node migration. The four columns are used as follows:

  • unique_id is the unique identifier for each record in this CSV file.
  • name is the name of a person. This will be used as the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration of the CSV source plugin for the node migration:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]

The name of the plugin is csv. Then you define the path pointing to the file itself. In this case, the path is relative to the Drupal root. Finally, you specify an ids array of column names that would uniquely identify each record. As already stated, the unique_id column serves that purpose. Note that there is no need to specify all the column names from the CSV file. The plugin will automatically make them available. That is the simplest configuration of the CSV source plugin.

The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_csv_source_image
    source: photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_csv_source_image
    - udm_csv_source_paragraph
  optional: []

Note that the source for setting the image reference is photo_file. In the process pipeline you can directly use any column name that exists in the CSV file. The configuration of the migration lookup plugin and dependencies point to two CSV migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating CSV files without a header row

Now let’s consider two examples of CSV files that do not have a header row. The following snippets show the example CSV file and source plugin configuration for the paragraph migration:

B10,The definite guide to Drupal 7,Benjamin Melançon et al.
B20,Understanding Drupal Views,Carlos Dinarte
B30,Understanding Drupal Migrations,Mauricio Dinarte

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'

When you do not have a header row, you need to specify two more configuration options. header_offset has to be set to null. fields has to be set to an array where each element represents a column in the CSV file. You include a name for each column following the order in which they appear in the file. The name itself can be arbitrary. If it contained spaces, you need to put quotes (') around it. After that, you set the ids configuration to one or more columns using the names you defined.

In the process section you refer to source columns as usual. You write their names, adding quotes if they contain spaces. The following snippet shows how the process section is configured for the paragraph migration:

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'

The final example will show a slight variation of the previous configuration. The following two snippets show the example CSV file and source plugin configuration for the image migration:

P01,https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02,https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03,https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'

For each column defined in the fields configuration, you can optionally set a label. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to source columns. You keep using the column name. You can see this in the value of the ids configuration.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: photo_url

CSV file location

When setting the path configuration you have three options to indicate the CSV file location:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/csv_files/example.csv.
  • Use an absolute path pointing to the CSV location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/csv_files/example.csv.
  • Use a stream wrapper. This feature was introduced in the 8.x-3.x branch of the module. Previous versions cannot make use of them.

Being able to use stream wrappers gives you many options for setting the location to the CSV file. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://csv_files/example.csv.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/csv_files/example.csv.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/csv-files/example.csv.

CSV source plugin configuration

The configuration options for the CSV source plugin are very well documented in the source code. They are included here for quick reference:

  • path is required. It contains the path to the CSV file. Starting with the 8.x-3.x branch, stream wrappers are supported.
  • ids is required. It contains an array of column names that uniquely identify each record.
  • header_offset is optional. It is the index of the record to be used as the CSV header row, and thereby the source of each record's field names. It defaults to zero (0) because the index is zero-based. For CSV files with no header row, the value should be set to null.
  • fields is optional. It contains a nested array of names and labels to use instead of a header row. If set, it will overwrite the column names obtained from header_offset.
  • delimiter is optional. It contains the one-character column delimiter. It defaults to a comma (,). For example, if your file uses tabs as delimiter, you set this configuration to \t (see the sketch after this list).
  • enclosure is optional. It contains the one character used to enclose the column values. It defaults to double quotation marks (").
  • escape is optional. It contains the one character used for character escaping in the column values. It defaults to a backslash (\).
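
As an illustration only, the following sketch combines some of these options for a hypothetical tab-delimited file without a header row that wraps values in single quotes. The path and column names are made up for the example.

source:
  plugin: csv
  path: modules/custom/my_module/csv_files/example.tsv
  ids: [unique_id]
  header_offset: null
  fields:
    - name: unique_id
    - name: name
  delimiter: "\t"
  enclosure: "'"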

Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference of how to configure the plugin for the 8.x-2.x.

And that is how you can use CSV files as the source of your migrations. Because this is such a common need, there was an effort to move the CSV source plugin into Drupal core. The effort is currently on hold and it is unclear if it will materialize during Drupal 8’s lifecycle. The maintainers of the Migrate API are focusing their efforts on other priorities at the moment. You can read this issue to learn about the motivation and context for offering this functionality in Drupal core.

Note: The Migrate Spreadsheet module can also be used to migrate data from CSV files. It also supports Microsoft Office Excel and LibreOffice Calc (OpenDocument) files. The module leverages the PhpOffice/PhpSpreadsheet library.

What did you learn in today’s blog post? Have you migrated from CSV files before? Did you know that it is now possible to read files using stream wrappers? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 16 2019
Aug 16

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules

  1. It has to actually work for D8 (so modern PHP version, working database, etc),
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).

Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Aug 15 2019
Aug 15

Today we will present an introduction to paragraphs migrations in Drupal. The example consists of migrating paragraphs of one type, then connecting the migrated paragraphs to nodes. A separate image migration is included to demonstrate how they are different. At the end, we will talk about behavior that deletes paragraphs when the host entity is deleted. Let’s get started.

Example mapping for paragraph reference field.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD paragraphs migration introduction whose machine name is ud_migrations_paragraph_intro. It comes with three migrations: ud_migrations_paragraph_intro_paragraph, ud_migrations_paragraph_intro_image, and ud_migrations_paragraph_intro_node. One content type, one paragraph type, and four fields will be created when the module is installed.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type, the paragraph type, and the fields are automatically created and deleted.

You can get the Paragraphs module using composer: composer require drupal/paragraphs. This will also download its dependency: the Entity Reference Revisions module. If your Drupal site is not composer-based, you can get the code for both modules manually.

Understanding the example set up

The example code creates one paragraph type named UD book paragraph (ud_book_paragraph). It has two “Text (plain)” fields: Title (field_ud_book_paragraph_title) and Author (field_ud_book_paragraph_author). A new UD Paragraphs (ud_paragraphs) content type is also created. This has two fields: Image (field_ud_image) and Favorite book (field_ud_favorite_book) containing references to images and book paragraphs imported in separate migrations. The words in parenthesis represent the machine names of the different elements.

The paragraph migration

Migrating into a paragraph type is very similar to migrating into a content type. You specify the source, process the fields making any required transformation, and set the destination entity and bundle. The following code snippet shows the source, process, and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - book_id: 'B10'
      book_title: 'The definite guide to Drupal 7'
      book_author: 'Benjamin Melançon et al.'
    - book_id: 'B20'
      book_title: 'Understanding Drupal Views'
      book_author: 'Carlos Dinarte'
    - book_id: 'B30'
      book_title: 'Understanding Drupal Migrations'
      book_author: 'Mauricio Dinarte'
  ids:
    book_id:
      type: string
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

The most important part of a paragraph migration is setting the destination plugin to entity_reference_revisions:paragraph. This plugin is actually provided by the Entity Reference Revisions module. It is very important to note that paragraph entities are revisioned. This means that when you want to create a reference to them, you need to provide two IDs: target_id and target_revision_id. Regular entity reference fields like files, images, and taxonomy terms only require the target_id. This will be further explained with the node migration.

The other configuration that you can optionally set in the destination section is default_bundle. The value will be the machine name of the paragraph type you are migrating into. You can do this when all the paragraphs for a particular migration definition file will be of the same type. If that is not the case, you can leave out the default_bundle configuration and add a mapping for the type entity property in the process section.
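
As a rough sketch, assuming a hypothetical src_paragraph_type source field that holds the paragraph type machine name, the process and destination sections could look like this when default_bundle is left out:

process:
  type: src_paragraph_type
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'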

You can execute the paragraph migration with this command: drush migrate:import ud_migrations_paragraph_intro_paragraph. After running the migration, there is not much you can do to verify that it worked. Unlike other entities, there is no user interface available out of the box that lists all paragraphs in the system. One way to verify if the migration worked is to manually create a View that shows paragraphs. Another way is to query the database directly. You can inspect the tables that store the paragraph fields’ data. In this example, the tables would be:

  • paragraph__field_ud_book_paragraph_author for the current author.
  • paragraph__field_ud_book_paragraph_title for the current title.
  • paragraph_r__8c3a9563ac for all the author revisions.
  • paragraph_r__3fa7e9863a for all the title revisions.

Each of those tables contains information about the bundle (paragraph type), the entity id, the revision id, and the migrated field value. Table names are derived from the machine names of the fields. If they are too long, the field name will be hashed to produce a shorter table name. Having to query the database is not ideal. Unfortunately, the options available to check if a paragraph migration worked are limited at the moment.

The node migration

The node migration will serve as the host for both referenced entities: images and paragraphs. The image migration is very similar to the one explained in a previous article. This time, the focus will be the paragraph migration. Both of them are set as dependencies of the node migration, so they need to be executed in advance. The following snippet shows how the source, destinations, and dependencies are set:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
      book_ref: 'B10'
    - unique_id: 2
      name: 'Benjamin Melançon'
      photo_file: 'P02'
      book_ref: 'B20'
    - unique_id: 3
      name: 'Stefan Freudenberg'
      photo_file: 'P03'
      book_ref: 'B30'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - ud_migrations_paragraph_intro_image
    - ud_migrations_paragraph_intro_paragraph
  optional: []

Note that photo_file and book_ref both contain the unique identifier of records in the image and paragraph migrations, respectively. These can be used with the migration_lookup plugin to map the reference fields in the nodes to be migrated. ud_paragraphs is the machine name of the target content type.

The mapping of the image reference field follows the same pattern as the one explained in the article on migration dependencies. Using the migration_lookup plugin, you indicate which migration should be searched for the images. You also specify which source column contains the unique identifiers that match those in the image migration. This operation will return a single value: the file ID (fid) of the image. This value can be assigned to the target_id subfield of field_ud_image to establish the relationship. The following code snippet shows how to do it:

field_ud_image/target_id:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_image
  source: photo_file

Paragraph field mappings

Before diving into the paragraph field mapping, let’s think about what needs to be done. Paragraphs are revisioned entities. To make a reference to them, you need two IDs: their entity id and their entity revision id. These two values need to be assigned to two subfields of the paragraph reference field: target_id and target_revision_id respectively. You have to come up with a process pipeline that complies with this requirement. There are many ways to do it, and the specifics will depend on your field configuration. In this example, the paragraph reference field allows an unlimited number of paragraphs to be associated, but only of one type: ud_book_paragraph. Another thing to note is that even though the field allows you to add as many paragraphs as you want, the example migrates exactly one paragraph.

With those considerations in mind, the mapping of the paragraph field will be a two step process. First, use the migration_lookup plugin to get a reference to the paragraph. Second, use the fetched values to set the paragraph reference subfields. The following code snippet shows how to do it:

pseudo_mbe_book_paragraph:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_paragraph
  source: book_ref
field_ud_favorite_book:
  plugin: sub_process
  source:
    - '@pseudo_mbe_book_paragraph'
  process:
    target_id: '0'
    target_revision_id: '1'

The first step is a normal migration_lookup procedure. The important difference is that instead of getting a single value, like with images, the paragraph lookup operation will return an array of two values. The format is like [3, 7] where the 3 represents the entity id and the 7 represents the entity revision id of the paragraph. Note that the array keys are not named. To access those values, you would use the index of the elements starting with zero (0). This will be important later. The returned array is stored in the pseudo_mbe_book_paragraph pseudofield.

The second step is to set the target_id and target_revision_id subfields. In this example, field_ud_favorite_book is the machine name of the paragraph reference field. Remember that it is configured to accept an arbitrary number of paragraphs, and each will require passing an array of two elements. This means you need to process an array of arrays. To do that, you use the sub_process plugin to iterate over an array of paragraph references. In this example, the structure to iterate over would be like this:

[
  [3, 7]
]

Let’s dissect how to do the mapping of the paragraph reference field. The source configuration of the sub_process plugin contains an array of paragraph references. In the example, that array has a single element: the '@pseudo_mbe_book_paragraph' pseudofield. The quotes (') and at sign (@) are required to reuse an element that appears before in the process pipeline. Then, in the process configuration, you set the subfields for the paragraph reference field. It is worth noting that at this point you are iterating over a list of paragraph references, even if that list contains only one element. If you had more than one paragraph to migrate, whatever you defined in process will apply to all of them.

The process configuration is an array of subfield mappings. The left side of the assignment is the name of the subfield you want to set. The right side of the assignment is an array index of the paragraph reference being processed. Remember that this array does not have named keys, so you use their numerical index to refer to them. The example sets the target_id subfield to the element in the 0 index and the target_revision_id subfield to the element in the 1 index. Using the example data, this would be target_id: 3 and target_revision_id: 7. The quotes around the numerical indexes are important. If not used, the migration will not find the indexes and the paragraphs will not be associated. The end result of this operation will be something like this:

'field_ud_favorite_book' => array (1) [
  array (2) [
    'target_id' => string (1) "3"
    'target_revision_id' => string (1) "7"
  ]
]

There are three ways to run the migrations: manually, executing dependencies, and using tags. The following code snippet shows the three options:

# 1) Manually.
$ drush migrate:import ud_migrations_paragraph_intro_image
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

# 2) Executing dependencies.
$ drush migrate:import ud_migrations_paragraph_intro_node --execute-dependencies

# 3) Using tags.
$ drush migrate:import --tag='UD Paragraphs Intro'

And that is one way to map paragraph reference fields. In the end, all you have to do is set the target_id and target_revision_id subfields. The process pipeline that gets you to that point can vary depending on how your paragraphs are configured; a sketch of one such variation follows the list. The following is a non-exhaustive list of things to consider when migrating paragraphs:

  • How many paragraph types can be referenced?
  • How many paragraph instances are being migrated? Is this a multivalue field?
  • Do paragraphs have translations?
  • Do paragraphs have revisions?
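
For instance, if the reference field accepted only a single paragraph, you would not need sub_process at all. The following is a minimal sketch of that variation. It reuses the migration and source column from this example, relies on the extract process plugin that ships with Drupal core's Migrate API, and is illustrative rather than a drop-in replacement:

# Hypothetical single-value variation: look up the paragraph twice and pick
# the entity id (index 0) or the revision id (index 1) from the returned pair.
field_ud_favorite_book/target_id:
  - plugin: migration_lookup
    migration: ud_migrations_paragraph_intro_paragraph
    source: book_ref
  - plugin: extract
    index:
      - 0
field_ud_favorite_book/target_revision_id:
  - plugin: migration_lookup
    migration: ud_migrations_paragraph_intro_paragraph
    source: book_ref
  - plugin: extract
    index:
      - 1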

Do migrated paragraphs disappear upon node rollback?

Paragraphs migrations are affected by a particular behavior of revisioned entities. If the host entity is deleted, and the paragraphs do not have translations, the whole paragraph gets deleted. That means that deleting a node will cause the referenced paragraphs’ data to be removed. How does this affect your migration workflow? If the migration of the host entity is rolled back, the paragraphs will be removed, but the Migrate API will not know about it. In this example, if you run a migrate status command after rolling back the node migration, you will see that the paragraph migration indicates that there are no pending elements to process. The file migration for the images will report the same, but in that case, the images will remain on the system.

In any migration project, it is common to perform rollback operations to test new field mappings or fix errors. Thus, chances are very high that you will stumble upon this behavior. Thanks to Damien McKenna for helping me understand this behavior and tracking it down to the rollback() method of the EntityReferenceRevisions destination plugin. So, what do you do to recover the deleted paragraphs? You have to roll back both migrations: node and paragraph. And then, you have to import both again. The following snippet shows how to do it:

# 1) Rollback both migrations.
$ drush migrate:rollback ud_migrations_paragraph_intro_node
$ drush migrate:rollback ud_migrations_paragraph_intro_paragraph

# 2) Import both migrations again.

$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

What did you learn in today’s blog post? Have you migrated paragraphs before? If so, what challenges have you found? Did you know paragraph reference fields require two subfields to be set? Did you know that deleting the host entity also deletes referenced paragraphs? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 15 2019
Aug 15

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop is the same; community members only have to attend one monthly workshop of their choice to complete the training.  We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. From the official description of the workshop, it will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those that need assistance. The standard cost of the workshop is $350, but Otter Tech has worked with us to allow us to provide it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 14 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those that successfully complete the training will be (at their discretion) listed on Drupal.org (in the Drupal Community Workgroup section) as a means to prove that they have completed the training. We feel that moving forward, the Drupal community now has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again, leading the way in the open source community.

We are fully aware that presenting the workshops in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.
 

Aug 15 2019
Aug 15

Drupal is moving into the future and adopting more and more innovative trends. No wonder high-tech engineering leaders trust Drupal and build their sites with it.

Drupal in high-tech: innovative companies + innovative CMS

They have found each other! Thinking about Drupal’s innovative spirit, we want to mention plenty of its capabilities, so here are at least a few:

Great examples of high tech company websites built on Drupal

So let’s learn more about Drupal for high tech company websites by looking at the following examples.

Tesla

Electric cars, solar panels, and renewable energy are the three pillars of expertise of the incredible innovator — Tesla. The high tech giant has also chosen the right CMS — Tesla’s site is built with Drupal.

Tesla website built with Drupal

Amazing web design with background videos and zooming effects allows customers to see the products almost like in real life and get inspired. Users can select among 30+ regions to see the website version in their native language.

Tesla website built with Drupal

While choosing the products, you can specify all parameters and see the changed picture without a page reload. There is also an online payment feature.

Tesla website built with Drupal

General Electric

Discussing Drupal for high tech industry leaders, we are glad to mention General Electric Company (GE). Its innovation builds, powers, moves, and cures the world. And Drupal powers their website where we can learn this and more about their activity.

The site’s users can select among such GE businesses as aviation, power, renewable energy, healthcare, lighting, and so on. The large and handy search bar on the main page also quickly takes them to where they wish. The stylish design of many sections features background videos.

GE website built with Drupal

As 130 countries are currently home to GE operations, users can select and visit their specific site version.

GE website built with Drupal

Iteris

Iteris Inc. produces innovative sensors and other solutions that predict the state of traffic, weather, soil, etc., to boost the agriculture and transportation industries. While Iteris products win high tech innovation awards, Drupal has won their trust as a CMS.

Iteris website built with Drupal

They chose Drupal 8 to revamp their website as part of the rebranding campaign. The site is great from frontend to backend — from the stylish design with the main page’s slider to complex permissions for particular user types.

Iteris website built with Drupal

Pfizer

The Pfizer multinational pharmaceutical corporation uses technology and innovative science for advanced patient care.

Pfizer website built with Drupal

Their website has 50+ country/region and language options. The content is presented in five key categories: “Your Health,” “Our Science,” “Our People,” “Our Purpose”, and “Our products.”

Pfizer website built with Drupal

There is a strong search feature for Pfizer’s products, the option to find the clinical trial results, and much more.

What about your future website?

Hopefully, these examples of high tech company websites built on Drupal have inspired you. They are just a few from a million+ Drupal sites worldwide in various industries — e-commerce, education, business & finance, and so on. In addition to being innovative and powerful, the CMS is very versatile and flexible.

So contact our development team to discuss how Drupal can be helpful in your case or with your website idea!

Aug 15 2019
Aug 15

Once, the Drupal community had Mollom, and everything was good. It was a web service that would let you use an API to scan comments and other user-submitted content and it would let your site know whether it thought it was spam, or not, so it could safely publish the content. Or not. It was created by our very own Dries Buytaert and obviously had a Drupal module. It was the service of choice for Drupal sites struggling with comment spam. Unfortunately, Mollom no longer exists. But there is an alternative, from the WordPress world: Akismet.

Akismet is very similar to Mollom. It too is a web service that lets you use an API to judge if some submitted content is spam. The name is derived from the Turkish word kismet, which means fate or destiny. The A simply stands for automatic (or Automattic, the company behind both WordPress and Akismet). It was created for WordPress, and like Mollom was once for Drupal, it is the service of choice for WordPress sites. However, nothing is keeping any other software from making use of it, so when you download the Akismet module, you can use it with your Drupal site as well. Incidentally, the module is actually based on the code of the Mollom module.

There is no stable release of the module, currently. In fact, there is no release at all, not even an alpha release. Except for development releases, that is. This means that for now it might not be an option for you to deploy the module. Hopefully, this changes soon, although the last commit is over a year ago at the time of writing. A mitigating circumstance, though, is that Drupal.org itself seems to be using this module as well, albeit in the Drupal 7 version (this article will be discussing the D8 version).

Adding Akismet to your Drupal site

How to add the module will depend on your workflow. Either download a development release, or - when you use a composer-based workflow - add the module like so:

$ composer require drupal/akismet:1.x-dev

Then, enable the module through the Extend admin screen (/admin/modules).

Basic configuration

In order to configure the module, you will first need an Akismet API key. To get this, register at https://akismet.com. If you have a wordpress.com account (which you might have from either wordpress.com itself, or e.g. because you also have Gravatar) you can sign in with it.

Once you've obtained an API key, you can go to /admin/config/content/akismet/settings to configure Akismet.

The choice of what to do when Akismet is not available probably depends on how busy your site is. If it is very busy, and you do not get tons of spam, you probably want to accept all submissions. If you get a lot of spam and not very many actual contributions, you might want to block everything. If you both have a high traffic site and get a lot of spam, good luck. Of course, you can always look at a second line of defense, like the Honeypot module.

The second two settings - Show a link to the privacy policy and Enable testing mode - seem to be left-overs from the Mollom module, because neither of them seem to do anything. I created issues for both the privacy policy and the testing mode.

While Mollom had a requirement to either show a link to its terms of use on protected forms, or have your own terms of use that made it clear you make use of the Mollom service, Akismet doesn't seem to have such a requirement. (Of course, it is a good idea to add something about your use of Akismet in your terms of use or your privacy policy).

Testing with Akismet is possible by either passing "viagra-test-123" as the body of the message, or "akismet-guaranteed-spam@example.com" as the email address; these will always result in a spam classification. This seems to trigger the "unsure" scenario, even though that doesn't actually fully work, currently (see further). Forcing a ham (the opposite of spam - I didn't make this up) response is a bit trickier, because it would involve setting some parameters to the web service you do not have control over from the outside. Especially the testing mode might be a nice feature request for the Akismet module. Ideally, the module would work similarly to the Mollom module, where you could simply send in a comment with "ham", "unsure" and "spam" to test. As said, I created an issue to flesh out this functionality.
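
If you want to verify a classification outside of Drupal, you can call the Akismet comment-check endpoint directly. The following is a hedged sketch using the documented test values; the API key, blog URL, and IP address are placeholders:

# Hypothetical request; replace YOUR_API_KEY and the blog URL with your own.
$ curl -s "https://YOUR_API_KEY.rest.akismet.com/1.1/comment-check" \
  --data-urlencode "blog=https://example.com" \
  --data-urlencode "user_ip=127.0.0.1" \
  --data-urlencode "comment_author_email=akismet-guaranteed-spam@example.com" \
  --data-urlencode "comment_content=viagra-test-123" \
  --data-urlencode "is_test=1"

# The response body is simply "true" (spam) or "false" (ham).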

The advanced configuration hides settings to control whether to only log errors and warnings, or all Akismet messages, and a timeout for contacting the Akismet server. Especially in the beginning you might want to log all messages to monitor whether things are working as they should. 

Configuring which forms to protect

Once you have finished the basic configuration of the module, it is time to configure the forms you want to protect. This happens on the path /admin/config/content/akismet. Here, click the "Add form" button to start configuring a form.

When clicking the button, the module will ask you which form you wish to configure. Out of the box, the module will offer to protect node, comment, user and contact forms. A hook is offered to add additional forms, although either a module will need to implement the hook itself, or it will have to be done for it. Here, I'm choosing to just protect the comment form, as I am suffering from quite a lot of comment spam. Once you've chosen a form, it will show the form fields you might want to pass to the Akismet web service for analysis.

You'll basically want to select anything that is directly controlled by the user. The obvious candidate is the body, but also the subject, user name, email address, website and hostname will contain clues whether something is spam or not.

Next, you get to select what happens when Akismet decides content is or might be spam. Akismet may report back that it is sure something is spam. If it says something is spam, but does not pass back this certainty flag, the Drupal module says Akismet is "unsure", which is actually a term that can be traced back to the Mollom roots of this module. You may tell the module it should then retain the submission for manual moderation, although this doesn't seem to work correctly, at the moment. I created an issue in the issue queue for that. What I'm seeing happening is that the post is discarded, just like when Akismet is sure.

Click Create Protected Akismet Form to save the protection configuration. You're now ready to catch some spam. You can look at the watchdog log (/admin/reports/dblog) to see the module reporting on what it is doing. Akismet itself also has a nice dashboard with some graphs showing you how much ham and spam it detected on your site.

Reporting spam to Akismet

Sometimes, Akismet might wrongly accept some piece of content that is actually spam. Or, when the moderation queue mechanism actually works properly, you probably want to let Akismet know that yes, something is in fact spam (you might also want to let Akismet know it didn't correctly identify spam, i.e. report false positives. This is a feature of the web service, but is currently not in the module; another feature request, it seems. I've submitted one in the issue queue).

The module comes with action plugins for comments and nodes that let you unpublish the content and report it as spam to Akismet at the same time. You can add it to your comment views by changing their configuration at /admin/structure/views/view/comment (you will need the Views UI module enabled to be able to configure views). Unfortunately, it seems there is an issue with this functionality as well: the action doesn't actually unpublish. A patch is available in the linked issue and of course the workaround is to first use the Akismet action, and then use the standard unpublish action.

Find the configure link for the Comment operations form. Click the link and, in the modal that opens, find the list of checkboxes for the available actions. Enable Report to Akismet and unpublish and save the configuration. Repeat for the Unapproved comments display. This will mean you will now have the action available in the actions dropdown on the comments overviews at /admin/content/comment.

Adding this to a node view will be similar, although chances are that when you have end users submitting nodes, you likely also have some dedicated views for your specific use case, such as Forum posts.

Issues created as a result of this blog post

Please note that I did not intend to "set anyone to work" with creating these. I simply wanted to record some findings and ideas from writing up this blog post.

Aug 15 2019
Aug 15

Extensive nodes (or other types of entities) with many text fields, such as biographies, often remain unread because of the huge (and discouraging) amount of text.

The Drupal 8 "Field Group" module allows you to group fields and to present them in containers like vertical or horizontal tabs, accordions or just plain wrappers. It lets you group fields in the frontend of your site, and in the backend as well.

Keep reading to learn how to use this module!

Step #1. Install the Required Module

According to the project page, the 8.x-3.x branch of the module is the one to use with Drupal 8.3 and higher. You have to tell Composer to download this specific version.

  • Open the Terminal application of your PC
  • Go to the root of your Drupal installation (the composer.json file is located inside this directory)
  • Type the following command:

composer require "drupal/field_group:^3.0"

Type the following command

  • Click Extend
  • Scroll down until you find the Field Group module and check it
  • Click Install

Click Install

Step #2. Create the Content Type

For the purpose of this tutorial, you will create a content type with this structure.

  • Content type title: Vertebrate
    • Field Image
    • Text field (Introduction)
    • Field Image
    • Text field (First Part)
    • Field Image
    • Text field (Second Part)
    • Field Image
    • Text field (Conclusion)
  • Click Structure > Content types > Add Content type
  • Give the Content type a proper name, click Save and manage fields

Give the Content type a proper name and click Save

  • Click Add field
  • From the dropdown select Image and write a proper label
  • Click Save and Continue

Click Save and Continue

  • Leave the defaults and click Save field settings
  • Click Save settings
  • Click Add field
  • Select Text (formatted long) and give it a proper label
  • Click Save and continue

Click Save and continue

  • Click Save field settings
  • Click Save settings
  • Repeat the process 3 more times for the image and text fields
  • Delete the Body field

The "Manage fields" screen on your computer should look more or less like this:

The Manage fields screen on your computer should look more or less like this

Step #3. Group the fields

The "Field Group" module works on the display of the node and in the form display as well.

  • Click Manage form display > Add group

Click Manage form display and then click Add group

  • Select Fieldset
  • Click Save and continue

Click Save and continue

  • Click Create group
  • Repeat the process and create 3 more Fieldset groups

Repeat the process and create 3 more Fieldset groups

  • Rearrange the items according to the image using the cross handles
  • Click Save

Click Save

  • Click Manage display > Add group

Click Manage display and then click Add group

  • This time, select Tabs
  • Click Save and continue

Click Save and continue

  • Change the direction to Horizontal
  • Click Create Group

You can add an ID and CSS classes to the container to ease the styling process.

You can add an ID and CSS classes to the container to ease the styling process

  • Click Add group
  • Select Tab and give it a proper label
  • Click Save and continue

Click Save and continue

  • Select Default state OPEN (for the first tab, the other tabs will have Default state CLOSE)
  • Click Create group
  • Repeat the process with the other three tabs

Repeat the process with the other three tabs

  • Hide the image labels
  • Rearrange the items, take care of the indentation
  • Click Save

Click Save

Step #4. Create Content

  • Click Content > Add Content > Vertebrate

The form fields are now grouped in four fieldsets. This is very practical for editors.

  • Fill out the form, upload images and click Save

Fill out the form, upload images and click Save

Take a look at the node. All content is grouped in horizontal tabs. Users will definitely have a better reading experience.

Take a look at the node, all content is grouped in horizontal tabs

Users will definitely have a better reading experience

Please, leave us your comments below. Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Aug 14 2019
Aug 14

Today we will learn how to migrate addresses into Drupal. We are going to use the field provided by the Address module which depends on the third-party library commerceguys/addressing. When migrating addresses you need to be careful with the data that Drupal expects. The address components can change per country. The way to store those components also varies per country. These and other important considerations will be explained. Let’s get started.

Example address field process mapping.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD address, whose machine name is ud_migrations_address. The migration to execute is udm_address. Notice that this migration writes to a content type called UD Address and one field: field_ud_address. This content type and field will be created when the module is installed. They will also be removed when the module is uninstalled. The demo module itself depends on the following modules: address and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created and deleted.

The recommended way to install the Address module is using composer: composer require drupal/address. This will grab the Drupal module and the commerceguys/addressing library that it depends on. If your Drupal site is not composer-based, an alternative is to use the Ludwig module. Read this article if you want to learn more about this option. In the example, it is assumed that the module and its dependency were obtained via composer. Also, keep an eye on the Composer Support in Core Initiative as they make progress.

Source and destination sections

The example will migrate three addresses from the following countries: Nicaragua, Germany, and the United States of America (USA). This makes it possible to show how different countries expect different address data. As usual, for any migration you need to understand the source. The following code snippet shows how the source and destination sections are configured:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      first_name: 'Michele'
      last_name: 'Metts'
      company: 'Agaric LLC'
      city: 'Boston'
      state: 'MA'
      zip: '02111'
      country: 'US'
    - unique_id: 2
      first_name: 'Stefan'
      last_name: 'Freudenberg'
      company: 'Agaric GmbH'
      city: 'Hamburg'
      state: ''
      zip: '21073'
      country: 'DE'
    - unique_id: 3
      first_name: 'Benjamin'
      last_name: 'Melançon'
      company: 'Agaric SA'
      city: 'Managua'
      state: 'Managua'
      zip: ''
      country: 'NI'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_address

Note that not every address component is set for all addresses. For example, the Nicaraguan address does not contain a ZIP code. And the German address does not contain a state. Also, the Nicaraguan state is fully spelled out: Managua. On the contrary, the USA state is a two-letter abbreviation: MA for Massachusetts. One more thing that might not be apparent is that the USA ZIP code belongs to the state of Massachusetts. All of this is important because the module does validation of addresses. The destination is the custom ud_address content type created by the module.

Available subfields

The Address field has 13 subfields available. They can be found in the schema() method of the AddressItem class. Fields are not required to have a one-to-one mapping between their schema and the form widgets used for entering content. This is particularly true for addresses because input elements, labels, and validations change dynamically based on the selected country. The following is a reference list of all subfields for addresses:

  1. langcode for language code.
  2. country_code for country.
  3. administrative_area for administrative area (e.g., state or province).
  4. locality for locality (e.g. city).
  5. dependent_locality for dependent locality (e.g. neighbourhood).
  6. postal_code for postal or ZIP code.
  7. sorting_code for sorting code.
  8. address_line1 for address line 1.
  9. address_line2 for address line 2.
  10. organization for company.
  11. given_name for first name.
  12. additional_name for middle name.
  13. family_name for last name:

Properly describing an address is not trivial. For example, there are discussions to add a third address line component. Check this issue if you need this functionality or would like to participate in the discussion.

Address subfield mappings

In the example, only 9 out of the 13 subfields will be mapped. The following code snippet shows how to do the processing of the address field:

field_ud_address/given_name: first_name
field_ud_address/family_name: last_name
field_ud_address/organization: company
field_ud_address/address_line1:
  plugin: default_value
  default_value: 'It is a secret ;)'
field_ud_address/address_line2:
  plugin: default_value
  default_value: 'Do not tell anyone :)'
field_ud_address/locality: city
field_ud_address/administrative_area: state
field_ud_address/postal_code: zip
field_ud_address/country_code: country

The mapping is relatively simple. You specify a value for each subfield. The tricky part is to know the name of the subfield and the value to store in it. The format for an address component can change among countries. The easiest way to see what components are expected for each country is to create a node for a content type that has an address field. With this example, you can go to /node/add/ud_address and try it yourself. For simplicity’s sake, let’s consider only 3 countries:

  • For USA, city, state, and ZIP code are all required. And for state, you have a specific list from which you need to select.
  • For Germany, the company is moved above first and last name. The ZIP code label changes to Postal code and it is required. The city is also required. It is not possible to set a state.
  • For Nicaragua, the Postal code is optional. The State label changes to Department. It is required and offers a predefined list to choose from. The city is also required.

Pay very close attention. The available subfields will depend on the country. Also, the form labels change per country or language settings. They do not necessarily match the subfield names. Moreover, the values that you see on the screen might not match what is stored in the database. For example, a Nicaraguan address will store the full department name like Managua. On the other hand, a USA address will only store a two-letter code for the state like MA for Massachusetts.

Something else that is not apparent even from the user interface is data validation. For example, let’s consider that you have a USA address and select Massachusetts as the state. Entering the ZIP code 55111 will produce the following error: Zip code field is not in the right format. At first glance, the format is correct, a five-digit code. The real problem is that the Address module is validating if that ZIP code is valid for the selected state. It is not valid for Massachusetts. 55111 is a ZIP code for the state of Minnesota which makes the validation fail. Unfortunately, the error message does not indicate that. Nine-digit ZIP codes are accepted as long as they belong to the state that is selected.

Finding expected values

Values for the same subfield can vary per country. How can you find out which value to use? There are a few ways, but they all require varying levels of technical knowledge or access to resources:

  • You can inspect the source code of the address field widget. When the country and state components are rendered as select input fields (dropdowns), you can have a look at the value attribute for the option that you want to select. This will contain the two-letter code for countries, the two-letter abbreviations for USA states, and the fully spelled string for Nicaraguan departments.
  • You can use the Devel module. Create a node containing an address. Then use the devel tab of the node to inspect how the values are stored. It is not recommended to have the devel module in a production site. In fact, do not deploy the code even if the module is not enabled. This approach should only be used in a local development environment. Make sure no module or configuration is committed to the repo nor deployed.
  • You can inspect the database. Look for the records in a table named node__field_[field_machine_name], if migrating nodes. First create some example nodes via the user interface and then query the table. You will see how Drupal stores the values in the database.
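
For the database route, a quick query against the field table is usually enough. The snippet below is a hedged example (Drush 9+ syntax); the table and column names assume the field machine name field_ud_address used in this article and follow Drupal’s field storage naming convention:

# Inspect how the address subfields are stored for existing nodes.
$ drush sql:query "SELECT entity_id, field_ud_address_country_code, field_ud_address_administrative_area, field_ud_address_locality, field_ud_address_postal_code FROM node__field_ud_address LIMIT 10"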

If you know a better way, please share it in the comments.

The commerceguys addressing library

With version 8 came many changes in the way Drupal is developed. Now there is an intentional effort to integrate with the greater PHP ecosystem. This involves using already existing libraries and frameworks, like Symfony. But also, making code written for Drupal available as external libraries that could be used by other projects. commerceguys/addressing is one example of a library that was made available as an external library. That being said, the Address module also makes use of it.

Explaining how the library works or where it fetches its database is beyond the scope of this article. Refer to the library documentation for more details on the topic. We are only going to point out some things that are relevant for the migration. For example, the ZIP code validation happens at the validatePostalCode() method of the AddressFormatConstraintValidator class. There is no need to know this for a migration project. But the key thing to remember is that the migration can be affected by third-party libraries outside of Drupal core or contributed modules. Another example is the value for the state subfield. The Address module expects a subdivision as listed in one of the files in the resources/subdivision directory.

Does the validation really affect the migration? We have already mentioned that the Migrate API bypasses Form API validations. And that is true for address fields as well. You can migrate a USA address with state Florida and ZIP code 55111. Both are invalid because you need to use the two-letter state code FL and use a valid ZIP code within the state. Notwithstanding, the migration will not fail in this case. In fact, if you visit the migrated node you will see that Drupal happily shows the address with the data that you entered. The problem arrives when you need to use the address. If you try to edit the node you will see that the state will not be preselected. And if you try to save the node after selecting Florida you will get the validation error for the ZIP code.

These validation issues might be hard to track because no error will be thrown by the migration. The recommendation is to migrate a sample combination of countries and address components. Then, manually check if editing a node shows the migrated data for all the subfields. Also check that the address passes Form API validations upon saving. This manual testing can save you a lot of time and money down the road. After all, if you have an ecommerce site, you do not want to be shipping your products to wrong or invalid addresses. ;-)

Technical note: The commerceguys/addressing library actually follows ISO standards. Particularly, ISO 3166 for country and state codes. It also uses CLDR and Google's address data. The dataset is stored as part of the library’s code in JSON format.

Migrating countries and zone fields

The Address module offers two more field types: Country and Zone. Both have only one subfield, value, which is selected by default. For country, you store the two-letter country code. For zone, you store a serialized version of a Zone object.
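
For the Country field type, the mapping is a one-liner. The following is a hypothetical sketch; the field name field_ud_country does not exist in the example module and is only used for illustration:

# Hypothetical country field mapping: only the two-letter country code is stored.
field_ud_country/value: country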

What did you learn in today’s blog post? Have you migrated address before? Did you know the full list of subcomponents available? Did you know that data expectations change per country? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 14 2019
Aug 14

It‘s easy and enjoyable to create marketing campaigns, drive leads, and tell your brand’s story to the world if your website is on the right CMS. Drupal 8’s benefits will definitely impress any marketer. So let’s take a closer look at the greatness of Drupal 8 for marketers, see what makes it so valuable, and name a few useful modules.

The benefits of Drupal 8 for marketers

Drupal 8 has cutting-edge marketing features built into the core and a myriad of contributed modules helpful in every aspect of your successful marketing. 

Easy integration with marketing automation tools 

Marketers love the various marketing automation and CRM tools. They effectively streamline their workflows, as well as give them valuable analytics. 

Thanks to its built-in support for RESTful web services, Drupal 8 integrates with marketing tools, or any others, in a snap.

Drupal 8 modules to integrate marketing tools

and many more.

Multilingual Drupal 8 campaigns

Marketers can hold more powerful and convincing campaigns in their users’ native languages. Without a doubt, Drupal 8 is a great choice for multilingual websites.

A hundred languages are supported out of the box, including those with the RTL text direction. The four built-in multilingual modules make it easy to add languages to your site and translate everything — interface, configuration, and content. Most interface translations are already prepared by the community.

Admin interfaces to add translations are very handy and editor-friendly. As D8’s third-party integration capabilities are among its key benefits, it’s also easy to integrate any translation software.

Drupal 8 modules for translation tool integration

and more.

There also are contributed Drupal 8 modules for multilingual features for every aspect of a multilingual website.

Some Drupal 8 multilingual modules

and many more.

Multilingual Drupal 8 website example

 

Multi-channel marketing in Drupal 8

Marketers can engage customers with their Drupal 8 campaigns on their preferred devices. D8 websites are ready to share their data to any web or mobile applications. 

This is thanks to the API-first architecture — one of D8’s key benefits. The core now has 5 robust modules that effectively expose your Drupal entities as a RESTful web API. In the Drupal 8.7 release, the JSON:API module joined this “team” to make multi-channel experiences even more ambitious.

Marketers will appreciate the “create once, publish everywhere” philosophy adopted by the API-first Drupal 8. It significantly increases their reach with minimum publishing expenses. 

Quick and handy content creation with advanced features

Content is the heart and soul of marketing campaigns. A new level of its creation is among the greatest benefits prepared by Drupal 8 for marketers. Editorial workflows are both very user-friendly and advanced in their capabilities. 

At least a few of Drupal 8 content creation benefits

  • You can make edits on the fly with the inline editing feature.
  • The handy drag-and-drop CKEditor is D8’s default WYSIWYG editor. 
  • Drupal 8 has a Media Library that lets you save and reuse videos, audios, images, and other media.
  • You can add media directly into articles or news, including remote videos from YouTube or Vimeo via the oEmbed feature in Drupal 8.
  • Configurable editorial workflows are available to your marketing team with the new Content Moderation and Workflows modules.
  • The Views in D8 core lets you add highly configurable content collections to your pages. 
  • It’s easy to create beautiful slideshows and carousels with the help of contributed tools such as Views Slideshow, Juicebox, jCarousel, Owl Carousel, and so on.

Media Library in Drupal 8

Smart content personalization

Individualized, or targeted content delivery helps marketers reach their audience. You just give the right offer at the right moment to the right person. Drupal 8 offers awesome opportunities to marketers in this field. 

Some great Drupal 8 personalization modules

  • Smart Content allows you to offer a different display based on browser conditions.
  • Smart Content Segments helps you manage groups of these conditions in a handy UI.
  • Acquia Lift Connector unites your content and customer in a drag-and-drop interface so marketers can effectively manage your campaigns in real-time using behavioral factors.
  • Cloudwords for Multilingual Drupal is your assistant in multilingual campaigns with the features of workflow automation and project management.

Acquia Lift Connector Drupal 8 personalization module

Social media campaigns

Marketers know for sure that social networks are amazing campaign boosters. Your SM page and your website can be an invincible marketing tandem. With D8, their integration is a breeze. 

Marketers will be amazed by the myriad of contributed Drupal 8 modules that help them to:

  • auto-publish content to social networks
  • invite users to join your social media pages
  • add their icons to the website
  • add sharing buttons
  • analyze the statistics 
  • embed feeds 

These Drupal 8 social integration modules include

and many more. 

Social integration modules in Drupal 8

Let your marketing boost with Drupal 8!

These have been just a few glances at the greatness of Drupal 8 for marketers. Contact our Drupal team to discuss in what other ways D8 can be helpful to your marketing campaigns and your business. 


Our Drupal developers are ready to:

  • build you a Drupal 8 website from scratch
  • migrate your Drupal 7 website to Drupal 8
  • update your minor Drupal 8 version (since some features mentioned above only start from Drupal 8.7)
  • install the needed modules and configure them properly based on your needs
  • customize the existing solutions to add the desired marketing features to your site

Enjoy the benefits of Drupal 8 for marketers!

Aug 14 2019
Aug 14


Segfaults in Drupal are not a common occurrence - but when they do pop up, they can pose some tricky challenges... Our DevOps Engineer Mattias Michaux sheds some light on how to debug segmentation faults.
 

As a Drupal web developer, it can be frightening to encounter a so-called segmentation fault or segfault. This is a type of failure raised by hardware with memory protection, notifying you that the software has attempted to access a restricted area of memory. This kind of fault is common in languages with low-level memory management, such as C. Coding in a scripting language like PHP usually implies that you will be spared from segfaults, but on rare occasions these kinds of errors can still pop up. And if they do, they tend to leave no stack trace or meaningful error clue in the PHP log. This is because segfault errors originate in the layers located below the PHP engine. Let me talk you through a recent consulting project that involved a curious segmentation fault.
 

Segfault case study

Not so long ago, one of our clients started experiencing seemingly random ‘503 service unavailable’ errors, at random times on random pages. That’s plenty of randomness, without much of a clue to start from.
 

The segfaults had started occurring after an update from PHP 5.6 to PHP 7.2 and the switch from mod_php (PHP executed as a module within the Apache process) to PHP-FPM (PHP running as a standalone service that Apache connects to).
 

We tried to reproduce the errors on a copy of the site and infrastructure of the project. The segfault was only reproducible when we ran a scraping tool on the site. There was no apparent connection between the pages it occurred on, and the error came up with less than 1% of the total requests. Looking at the logs, there seemed to be no direct cause for this error.

[proxy_fcgi:error] [pid 10272] AH01067: Failed to read FastCGI header
[proxy_fcgi:error] [pid 10272] (104)Connection reset by peer: AH01075: Error dispatching request to ****

Example of the unhelpful Apache error message

The message above only tells us that something went wrong, but it provides no indication as to what the cause might be. Next, the PHP-FPM log indicated a segmentation fault:

WARNING: [pool web] child 3824 exited on signal 11 (SIGSEGV) after 3.353763 seconds from start

The syslog entry didn’t turn out to be very helpful either:

kernel: [4734894.041892] traps: php-fpm7.2[3760] general protection ip:555ce4cb6342 sp:7ffccab418c8 error:0
kernel: [4734894.041897]  in php-fpm7.2[555ce4a51000+411000]

To find out what the root cause might be, we needed to recompile PHP with debugging enabled. This allowed us to produce core dumps, which contain a full backtrace. A backtrace is a summary of how the program got to a particular point. It displays one line per frame for many frames, starting with the frame that is currently being executed. I suggest finding a guide like this one on how to compile your php with debugging enabled.
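
Besides the debug build, the process also needs permission to write core dumps. The following is a rough sketch of the typical knobs involved; the values, paths, and service name are illustrative and will differ per distribution and setup:

# 1) In the PHP-FPM pool configuration, allow core files of unlimited size:
#        rlimit_core = unlimited
#
# 2) On the host, choose a writable location and file name pattern for core dumps:
$ echo '/tmp/coredump-%e.%p' | sudo tee /proc/sys/kernel/core_pattern
#
# 3) Restart PHP-FPM and reproduce the crash; core files should then appear in /tmp:
$ sudo service php7.2-fpm restart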
 

After compiling PHP, we copied the php.ini and pool config from the original PHP version to the freshly compiled one. Next, we altered the config, so now the PHP-FPM pool used a separate name and socket path, and we changed that socket in the vhost. After starting the new PHP-FPM instance and running the crawler to reproduce the issue, we quickly saw the same 503 errors showing up, but this time with a core dump. We started comparing the dumps and tried to find a pattern. Fortunately, most cases pointed to the same thing: the unserialize() function of large objects. This made us look for specific PHP bugs that involved unserializing or memory allocation. We found an interesting one - in fact, someone had encountered almost the same issue before.
 

We decided to try the same approach by disabling garbage collection in the settings file:

ini_set('zend.enable_gc', 0);

We reran the scraper for multiple hours and no issues occurred. There are other documented cases where PHP’s garbage collection interferes with Drupal’s processing of huge amounts of data and objects.  Because disabling the garbage collection globally could have negative effects on the resource consumption of the live site, we also looked for a way to only disable garbage collection partially. This is indeed possible by patching  includes/cache.inc so _cache_get_object doesn’t run gc while fetching data.
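
The actual fix was a small patch to includes/cache.inc, but the idea can be sketched as follows. This is illustrative code rather than the patch itself: the wrapper function name is made up, while gc_enabled(), gc_disable() and gc_enable() are standard PHP functions and cache_get() is the Drupal 7 cache API.

<?php

/**
 * Sketch only: fetch a cache item with PHP's circular-reference collector
 * switched off, so garbage collection cannot run while large cached objects
 * are unserialized, then restore the previous setting.
 */
function example_cache_get_without_gc($cid, $bin = 'cache') {
  $gc_was_enabled = gc_enabled();
  if ($gc_was_enabled) {
    gc_disable();
  }
  $item = cache_get($cid, $bin);
  if ($gc_was_enabled) {
    gc_enable();
  }
  return $item;
}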
 

This puzzling problem took several hours and two people to solve, but in the end we managed to diagnose it correctly and implement a solid solution. An interesting case that led us to explore unusual parts of Drupal - and possibly something to bear in mind next time your Drupal environment produces a similar type of error.
 

gdb /opt/php/7.2/sbin/php-fpm /tmp/coredump-php-fpm.28651 -ex "source /opt/php-src/php7.2-build/php-7.2.17/.gdbinit" -ex "zbacktrace" --batch | grep "\[0x"
[0x7f2c2b4220e0] unserialize("O:4:"view":49:{s:8:"db_table";s:10:"views_view";s:10:"base_table";s:18:"commerce_line_item";s:10:"base_field";s:3:"nid";s:4:"name";s:25:"commerce_reports_products";s:3:"vid";s:0:"";s:11:"description";s:0:"";s:3:"tag";s:16:"commerce_reports";s:10:"human_nam...") [internal function]
[0x7f2c2b421f40] DrupalDatabaseCache->prepareItem(object[0x7f2c2b421f90]) /home/project/www/includes/cache.inc:520
[0x7f2c2b421d80] DrupalDatabaseCache->getMultiple(reference) /home/project/www/includes/cache.inc:433
[0x7f2c2b421cf0] cache_get_multiple(reference, "cache_views") /home/project/www/includes/cache.inc:113
[0x7f2c2b421a70] _ctools_export_get_defaults_from_cache("views_view", array(24)[0x7f2c2b421ad0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:746
[0x7f2c2b421390] _ctools_export_get_defaults("views_view", array(24)[0x7f2c2b4213f0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:649
[0x7f2c2b4211a0] _ctools_export_get_some_defaults("views_view", array(24)[0x7f2c2b421200], array(1)[0x7f2c2b421210]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:783
[0x7f2c2b420210] ctools_export_load_object("views_view", "names", array(1)[0x7f2c2b420280]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:493
[0x7f2c2b420080] ctools_export_crud_load("views_view", "search_results") /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:81
[0x7f2c2b41ff50] views_get_view("search_results") /home/project/www/sites/all/modules/contrib/views/views.module:1683
[0x7f2c2b41fb50] views_block_view("-exp-search_results-page") /home/project/www/sites/all/modules/contrib/views/views.module:772
[0x7f2c2b41fa60] module_invoke("views", "block_view", array(1)[0x7f2c2b41fad0]) /home/project/www/includes/module.inc:934
[0x7f2c2b41f410] _block_render_blocks(array(0)[0x7f2c2b41f460]) /home/project/www/modules/block/block.module:911
[0x7f2c2b41f2d0] block_list("branding") /home/project/www/modules/block/block.module:690
[0x7f2c2b41f200] block_get_blocks_by_region("branding") /home/project/www/modules/block/block.module:319
[0x7f2c2b41eee0] block_page_build(reference) /home/project/www/modules/block/block.module:270
[0x7f2c2b41ecf0] drupal_render_page(reference) /home/project/www/includes/common.inc:5914
[0x7f2c2b41e570] drupal_deliver_html_page(array(1)[0x7f2c2b41e5c0]) /home/project/www/includes/common.inc:2761
[0x7f2c2b41e3d0] drupal_deliver_page(array(1)[0x7f2c2b41e420], "") /home/project/www/includes/common.inc:2634
[0x7f2c2b41e100] menu_execute_active_handler() /home/project/www/includes/menu.inc:542
[0x7f2c2b41e030] (main) /home/project/www/index.php:21
Aug 13 2019
Aug 13

A great place to start a conversation about decoupled Drupal is evaluating Contenta CMS. While Contenta CMS is not necessarily something you must have to build a decoupled Drupal 8 site, it is designed to do it. You can relatively easily create a vanilla Drupal site that supports a decoupled application, but Contenta has already solved many of the related problems.

So what is Contenta CMS exactly? 

Contenta CMS is an API-first Drupal 8 distribution intended to be used to build fully decoupled Drupal sites. It was built by the developers of many of the modules you may end up using as you build your decoupled site, using their defined best practices to address typical needs and challenges. It is meant to be 100% decoupled, and only uses Drupal 8 as an administrative backend with no user-facing HTML content. This is something you could change if you desired to do so.

At first it seemed like a magical, mysterious system. However, Contenta uses core and well-known contributed modules with only a single custom module that provides a few customizations. With only a few small tweaks, you could conceivably use Contenta CMS as the base system for your decoupled project, just as you would any Drupal 8 project.

Features

The main difference between Contenta CMS and a site you would conceivably build yourself lies in the only custom module the distribution comes with, Contenta Enhancements. This module primarily does a few things:

  • creates a new, streamlined administration toolbar for a decoupled site,
  • provides some additional local tasks menu items giving easy access to necessary configuration,
  • alters the node view to display JSON instead of HTML, allowing for easy viewing of node structure, and
  • disables certain routes not needed or necessarily useful for a decoupled site.

Because this is a Drupal module, you’re perfectly fine installing it in your own clean Drupal 8 install to try to replicate Contenta along with the contributed modules used.

Key Modules

The following modules make up the underlying Contenta architecture.

Consumers

Consumers is a utility module that allows other modules to define users of the API and then further allows administrators to manage access control for each consumer. 

Decoupled Router

As the module description states, "Decoupled Router provides an endpoint that will help you resolve path aliases and redirects for entity related routes. This is especially useful for decoupled applications where the editor wants full control over the paths available in the consumer application (for instance a React app)."

To further understand the purpose of Decoupled Router, read the great post (and series) by the module creator.
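
As a quick illustration, the lookup is a single GET request. The endpoint path below is the one commonly documented for the module and the alias is made up; verify both against your installed version:

# Resolve a path alias to its underlying entity route.
$ curl "https://example.com/router/translate-path?path=/about-us&_format=json"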

JSON:API (now part of Drupal 8.7 Core)

JSON:API is a plug-and-play module that fully implements the JSON:API Specification and, without any configuration, exposes your Drupal entities via a REST API.
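
For example, with the module enabled, node content is available at predictable collection URLs. The following is a hedged sketch; the article content type is assumed to exist, and Contenta can change the /jsonapi path prefix through JSON:API Extras:

# List article nodes as a JSON:API document.
$ curl -H "Accept: application/vnd.api+json" "https://example.com/jsonapi/node/article"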

JSON:API Extras

JSON:API Extras extends the JSON:API module by allowing customizations and management of API endpoints. The JSON:API module does not offer any configuration options.

JSON-RPC

While JSON:API exposes Drupal entities via a REST API, JSON-RPC exposes Drupal administrative actions that would otherwise not be easily accessible or handled using the REST API.

OpenAPI

OpenAPI is a utility module that allows your web services to be discoverable using the OpenAPI standard, and thus neatly documented for testing and development, when used together with OpenAPI UI and something like OpenAPI UI Redoc or another similar module.

OpenAPI UI

OpenAPI UI helps provide a user interface within Drupal for displaying your OpenAPI standards-based API explorer for your Drupal site.

OpenAPI UI Redoc

This module provides a plugin that implements Redoc for use with OpenAPI UI to display from within Drupal. This is akin to GraphiQL in the GraphQL standard.

preview of redoc in drupal

OpenAPI UI using Redoc in Contenta

SimpleOauth

SimpleOauth is an implementation of the OAuth 2.0 Authorization Framework RFC for Drupal allowing you to use Oauth 2.0 for user authentication.
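
For instance, a consumer can obtain an access token from the module's token endpoint. The following is a hedged sketch using the password grant; the client and user credentials are placeholders, and the grant types available depend on how you configure the module:

# Request an OAuth 2.0 access token from Simple OAuth.
$ curl -X POST "https://example.com/oauth/token" \
  -d "grant_type=password" \
  -d "client_id=CLIENT_ID" \
  -d "client_secret=CLIENT_SECRET" \
  -d "username=apiuser" \
  -d "password=apipass"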

Subrequests

The subrequests module allows aggregated requests and responses to be sent to Drupal. For example, instead of sending a request for a node and then the node’s associated taxonomy terms, it allows you to send a single request that returns all the data you’d need. The goal being to make decoupled applications more efficient by reducing requests. An excellent article on the motivation for this module can be found on Lullabot’s blog.

Installing Contenta

To get started, head over to the Contenta install page. After installing, run drush runserver 127.0.0.1:8888 to start a local server, then visit 127.0.0.1:8888 to view the site.

preview of contenta home screen

Upon logging in, you will find a familiar interface, with some slightly different links available, with all but the “API” link leading to common Drupal administration pages, and the “Advanced Link” exposing the default Drupal administration toolbar.

preview of contenta admin screen

You should be now ready to begin building your decoupled site. In the following posts in this series, I will explore doing just that.

Conclusion

By adding and configuring the above modules on a new or existing Drupal 8 site (I can’t stress enough that you must install modules using Composer) and configuring CORS (if needed), you should be ready to begin developing your decoupled application using the technology of your choosing. At this point, I wish you happy trails!
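
If your front end lives on a different origin, CORS can be enabled in Drupal core's services.yml. The following is a minimal sketch of the cors.config section found in default.services.yml; the allowed origin is a placeholder and should be tightened for production:

# sites/default/services.yml
parameters:
  cors.config:
    enabled: true
    allowedHeaders: ['*']
    allowedMethods: ['GET', 'POST', 'PATCH', 'DELETE', 'OPTIONS']
    allowedOrigins: ['http://localhost:3000']
    exposedHeaders: false
    maxAge: false
    supportsCredentials: false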

*P.S. The folks that brought you ContentaCMS also created ContentaJS, “a nodejs server that proxies to Contenta CMS and holds custom code”. It’s certainly not necessary to begin your decoupled journey, but it does provide some handy features that I think could be extremely useful.
 

Aug 13 2019
Aug 13

Our Family Wizard Home page

Website Redesign: A Professional Face for a Market Leader

When parents divorce or separate, the child often becomes the unwilling intermediary for communication. Our Family Wizard (OFW) is a mobile application for co-parenting exes that facilitates and tracks communication, helps coordinate child duties and stores important information. The app includes a shared calendar, messaging component (with a “read” stamp), school and medical information and an expense log. 

People are usually referred to the site by judges, attorneys and mediators during separation or divorce proceedings. Often a judge or mediator requires that the parents use the application to document any child-related communication. 

Our Family Wizard Mother and Child

Our Family Wizard is the market leader in their vertical, but they felt their website did not exhibit the gravitas, credibility and professionalism that it should. TEN7 had been working with Avirat (the parent company of OFW) since 2015, supporting the existing OFW site, and they hired us to completely make over their website in both content and form. We went through a complete discovery process to ensure we completely understood the needs and desires of the client before we launched into design and content strategy for the new site.

The site redesign focused on the following goals:

Focus the Site to Drive Signups

The entire purpose of the ourfamilywizard.com website is to get the site visitor to sign up for the service and download the app. The new site has been streamlined to focus on two clearly defined calls to action: learn more and get started/sign up. The new design also makes use of a persistent “sticky bar” on the homepage with these two prompts. 

Redesign the Site to Make it More Modern and Professional 

In describing the desired site look and feel, the client mentioned pharmaceutical sites. Pharma sites tend to have a very clean design, large images and clear messaging. But more than that, pharma sites serve two audiences: professionals (doctors) and users (patients). Our Family Wizard also needs to market to two distinct user groups with distinct needs. 

Whereas the old site had a very “default site” feel, the new site feels more alive. Our designer Eva Lovisa paid attention to details big and small—from adding bigger photos and hero images to creating a brand library of icons. She created a bold color scheme with a lot of blues which served two purposes: blues tend to be calming (for the stressed out parents) and dark navy blues tend to have a regal, legal feel (for the professional visitors). She implemented a varied text style palette, choosing more distinctive fonts and creating things like quote and bullet styles to add more visual interest to what could have become a boring text-heavy site.

We gave a lot of consideration to the home page flow. Whereas the old home page felt like a mishmash of information, the new homepage is composed of clean, modular “stripes” of information. This solution is more than just good-looking; the stripes are created using the Paragraphs module in Drupal. Paragraphs functionality puts power in the hands of page editors to create paragraph types for common scenarios (image to the left of text, pull quotes, slideshows, etc.) that can be easily moved and edited.

“Instead of one giant HTML palette to work from, [with Paragraphs] we can work in pre-made chunks that look good. We have a lot more flexibility. We have a few content writers on staff, and the pages they’ve been able to create—the whole look and feel compared to the way they were doing it before—the difference is night and day.”
—Jai Kissoon, Avirat CEO

The mobile app was being redesigned at the same time as the website, so both would share the same identity. The website was also designed to be responsive, so it looks great on mobile devices.

Consolidate Information and Make It More Searchable

As part of the redesign, we migrated the Drupal 7 site to Drupal 8. A site redesign or migration is a great time to take stock of the organization and usability of your content items. Although this is primarily a signup site, it also houses a library of helpful content and resources for co-parents. With a long-running site like OFW, there can be an overabundance of categories, taxonomies and tags for the hundreds of pieces of content, and this hinders content usefulness and searchability. In addition, content was scattered in multiple areas around the site. 

We thought about which tags and topics we wanted, which URL structures we needed to change, and made decisions about single or multiple tagging. Each blog article had copies in multiple languages, so we had to properly tag and move those over.

We consolidated the content categories down into a manageable set, which we hope will last them for many years. We sorted and grouped related information to make it more easily navigable. For example, content about using the app and site, regional resource directories and other evergreen content was moved to a new Knowledge Center, found only on the website. Additional articles are found in the site blog. 

“Working with TEN7 has been a good fit. Things have been straightforward and easy. Les [Lim, Senior Developer] is a wizard in his own right. It’s his attitude, more than anything—his ability to help us get to resolution we want, which is sometimes in conflict with what’s easiest. We’re very happy.”
—Jai Kissoon, Avirat CEO 

The new site launched in May of 2018.

Aug 13 2019
Aug 13

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

A special bird flying in space has the spotlight while lots of identical birds sit on the ground (lack of diversity)

At Drupalcon Seattle, I spoke about some of the challenges Open Source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need in order to drive meaningful change.

Aug 13 2019
Aug 13

Back in April, BigCommerce, in partnership with Acro Media, announced the release of the BigCommerce for Drupal module. This module effectively bridges the gap between the BigCommerce SaaS ecommerce platform and the Drupal open source content management system. It allows Drupal to be used as the frontend customer experience engine for a headless BigCommerce ecommerce store.

For BigCommerce, this integration provides a new and exciting way to utilize their platform for creating innovative, content-rich commerce experiences that were not possible via BigCommerce alone.

For Drupal, this integration extends the options its users and site-builders have for adding ecommerce functionality into a Drupal site. The flexibility of Drupal combined with the stability and ease-of-use of BigCommerce opens up new possibilities for Drupal that didn’t previously exist.

Since the announcement, BigCommerce and Acro Media have continued to educate and promote this exciting new headless commerce option. A new post on the BigCommerce blog published last week, titled Leverage Headless Commerce To Transform Your User Experience with Drupal Ecommerce (link below), is a recent addition to this information campaign. The BigCommerce teams are experts in what they do, and Acro Media is an expert in open source integrations and Drupal. They asked if we could provide an introduction for their readers to really explain what Drupal is and where it fits into the headless commerce mix. This, of course, was an opportunity not to be missed, and so our teams buckled down together once again to provide readers with the best information possible.

So without further explanation, click the link below to learn how you can leverage headless commerce to transform your user experience with Drupal.

Read the full post on the BigCommerce blog


Aug 13 2019
Aug 13

When the open-source Accelerated Mobile Pages (AMP) project was launched in October 2015, Google AMP was often compared to Facebook's Instant Articles. Both tech giants share a common goal: making web pages load faster. While an AMP page can be reached with a regular web URL, Facebook’s Instant Articles aimed only at easing the pain for users of its app. Teaming up with some powerful launch partners in the publishing and technology sectors, Google AMP aimed to shape the future of content distribution on mobile devices.

Fast forward to today, and Google AMP is one of the hottest things on the internet. With over 25 million website domains having published over 4 billion AMP pages, it did not take long for the project to become a huge success. Comprising two main features, speed and support for monetization of objects, AMP’s implications are far-reaching for enterprise businesses, marketers, ecommerce and organizations big and small. With great features and the fact that it originated as a Google initiative, it is no surprise that AMP pages are featured more prominently in the Google SERP.

What is AMP?

With the rapid surge in mobile users, simply providing a website-like user experience no longer cuts it. Today’s mobile users have shorter attention spans and varied internet speeds. Businesses can address these challenges with a fast-loading, lightweight, app-like website built with Google AMP.

AMP is an open-source framework that simplifies HTML, streamlines CSS rules, restricts the use of JavaScript (you can use AMP’s component library instead) and delivers pages via the Google AMP Cache (a proxy-based content delivery network).

Why AMP?

Impacting the technical architecture of digital assets, Google's open source initiative aims to provide streamlined web pages to mobile browsers and other apps.

It is Fast, like Really Fast

Google AMP pages load about twice as fast as comparable regular mobile pages, and latency can be as low as one-tenth. Intended to provide the fastest experience for mobile users, AMP means customers can access content faster, and they are more likely to stay on the page to make a purchase or enquire about your service, because they know it won't take long.

An Organic Boost

Eligibility for the AMP carousel that sits above the other search results on the Google SERP results in a substantial increase in organic results and traffic, a major boost for an organization's visibility. Though it does not increase page authority or domain authority, Google AMP plays a key role in sending far more traffic your way.

ROI

Because AMP leverages, rather than disrupts, a website's existing infrastructure, the cost of adopting AMP is much lower than that of competing technologies. In return, Google AMP enables a better user experience, which translates to better conversion rates on mobile devices.

Drupal & AMP

With benefits like better user engagement, higher dwell time and easy navigation between content, businesses are bound to drive more traffic with AMP-friendly pages and increase their revenue. The AMP module is especially useful for marketers, as it is a great addition to optimize their Drupal SEO efforts.

AMP produces HTML that makes the web a faster place. Implementing the AMP module in Drupal is really simple. Just download, enable and configure!
Before you begin integrating the AMP module with Drupal, here is what you need.

AMP Module: The AMP module mainly handles the conversion of regular Drupal HTML pages into AMP-compliant pages.

Two more components accompany the AMP module:

AMP Theme: You have probably come across AMP HTML and its standards, the ones responsible for making your content look effective and perform well on mobile. The Drupal AMP theme produces the markup required by these standards for websites looking to perform well in the mobile world. The AMP theme also allows the creation of custom-made AMP pages.

AMP PHP Library: Consisting of the AMP base theme and the ExAMPle sub-theme, the Drupal AMP PHP Library handles the final corrections. Users can also create their own AMP sub-theme from scratch, or modify the default ExAMPle sub-theme for their specific requirements.
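If you go the custom sub-theme route, the sub-theme is declared like any other Drupal 8 theme. A minimal sketch of its .info.yml, assuming amptheme is the machine name of the AMP base theme and using a placeholder theme name, might look like this:

# themes/custom/my_amp_subtheme/my_amp_subtheme.info.yml (illustrative)
name: My AMP Subtheme
type: theme
description: 'Custom sub-theme for AMP pages, based on the AMP base theme.'
core: 8.x
base theme: amptheme

From there you can add your own libraries and templates, keeping in mind that AMP HTML restricts custom JavaScript.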

How to setup AMP with Drupal?

Before you integrate AMP with Drupal, you need to understand that AMP does not replace your entire website. Instead, at its essence, the AMP module provides a view mode for content types, which is displayed when the browser asks for an AMP version.

Download the AMP Module

With your local environment prepped, type the following terminal command:

drush dl amp, amptheme, composer_manager

This command will download the AMP module, the AMP theme and the Composer Manager module (in case you do not have Composer Manager already).

If you have been using Drupal 8, you are probably familiar with Composer and its function as a packaging tool for PHP that installs dependencies for a project. Composer is used to install a PHP library that converts raw HTML into AMP HTML, and it will also help get that library working with Drupal.

However, as the AMP module does not explicitly require Composer Manager as a dependency, alternate workflows can make use of module Composer files without using Composer Manager.

Next, enable the items that are required to get started:

drush en composer_manager, amptheme, ampsubtheme_example

Before enabling the AMP module itself, an AMP sub-theme needs to be enabled. The default configuration for the AMP module sets the AMP Theme to ‘ExAMPle subtheme.’

How to Enable AMP Module?

The AMP module for Drupal can be enabled using Drush. Once the module is enabled, Composer Manager will take care of downloading the other AMP libraries and their dependencies.

drush en amp

Configuration

Once everything is installed and enabled, AMP needs to be configured using the web interface before Drupal AMP pages can be displayed. First up, you need to decide which content types should have an AMP version. You might not need it for all of them. Enable a particular content type by clicking on the “Enable AMP in Custom Display Settings” link. On the next page, open the “Custom Display Settings” fieldset. Check the AMP box, then click Save.


Setting an AMP Theme

Once the AMP module and content types are configured, it is time to select a theme for AMP pages and configure it. The view modes and field formatters of the Drupal AMP module take care of the main content of the page. The Drupal AMP theme, on the other hand, changes the markup outside the main content area of the page.

The Drupal AMP theme also enables you to create custom styles for your AMP pages. On the main AMP config page, make sure that the AMP theme setting is set to the ExAMPle Subtheme or the custom AMP sub-theme that you created.


Aug 13 2019
Aug 13

We need to nudge governments to start funding and fixing accessibility issues in the Open Source projects that are being used to build digital experiences. Most governments are required by law to build accessible websites and applications. Drupal’s commitment to accessibility is why Drupal is used by a lot of governments to create ambitious digital experiences.

Governments have complex budgeting systems and policies, which can make it difficult for them to contribute to Open Source. At the same time, there are many consulting agencies specializing in Drupal for government, and maybe these organizations need to consider fixing accessibility issues on behalf of their clients.

Accessibility is one of the key selling points of Drupal to governments.

If an agency started contributing, funding, and fixing accessibility issues in Drupal core and Open Source, they’d be showing their government clients that they are indeed experts who understand the importance of getting involved in the Open Source community.

So I have started this blog post with a direct ask for governments to pay to fix accessibility issues without a full explanation as to why. It helps to step back and look at the bigger context: “Why should governments fix accessibility issues in Drupal Core?”

Governments are using Drupal

This summer’s DrupalGovCon in Washington, DC was the largest Drupal event on the East Coast of the United States. The conference was completely free to attend with 1500 registrants. There were dozens of sponsors promoting their Drupal expertise and services. My presentation, Webform for Government, included a section about accessibility. There were also four sessions dedicated to accessibility.

Besides presenting at DrupalGovCon, I also volunteered a few hours to help with Webform-related contribution sprints focusing on improving Drupal’s Form API’s accessibility. I felt that having new contributors help fix accessibility issues would be a rewarding first experience into the world of contributing to Drupal core.

Fixing accessibility issues in Drupal

At the Webform contribution sprint, Lera Atwater (leraa), a first-time contributor, started reviewing Issue #2951317: FAPI #radios form elements do not have an accessible required attribute. She reviewed the most recent patch and provided some feedback. I was able to re-roll the patch a few times to address some new remaining issues and add some basic test coverage. The fact that Lera and I focused on one issue helped move it forward. Still, our solution needs to be reviewed by Andrew Macpherson (andrewmacpherson) or Mike Gifford (mgifford), Drupal’s accessibility maintainers and then Alex Bronstein (effulgentsia) or Tim Plunkett (tim.plunkett), Drupal’s Form API maintainers. Getting an accessibility-related patch committed is a lot of communication and work.

This experience made me ask…

How can we streamline the process for getting accessibility patches committed to Drupal core?

Streamlining the process of fixing accessibility issues

The short answer is we can’t streamline the process of documenting, fixing, and reviewing accessibility issues. These steps are required and needed to ensure the quality of code being committed to Drupal core. What we might be able to do is strive for more efficiency in how we manage accessibility-related issues and the steps required to fix them.

While working on this one accessibility issue for free in my spare time, it made me wonder...

What would happen if a paid developer collected and worked on multiple accessibility issues for a few weeks and managed to move these issues forward towards a resolution collectively?

First off, I can tell from experience that most Form API and accessibility-related issues in Drupal core, as well as in other Open Source projects, are very similar. Most accessibility issues have something to do with fixing or adding ARIA (Accessible Rich Internet Applications) attributes or keyboard access. A developer should be able to familiarize themselves with the overarching accessibility requirements and fixes needed to address the multiple accessibility issues in Drupal core.

Second, developers tend to work better on focused tasks. Most developers contribute to Open Source in their spare time completing small tasks. Paying a developer to commit and focus on fixing accessibility issues as part of their job is going to yield better results.

Finally, having multiple tickets queued for core maintainers is a more efficient use of Andrew, Mike, Alex, and Tim’s time. Blocks of similar tickets can pass through the core review process more quickly. Also, everyone shares the reward of saying we fixed these accessibility issues.

Governments should pay to fix accessibility issues

I’d like to nudge people or organizations to get involved. In last month’s Webform 8.x-5.3 release, I settled on the below direct message within the Webform module’s UI.

My conclusion is that we need to directly ask people to get involved, and directly ask organizations to contribute financially (a.k.a. give money). I am admittedly uncomfortable asking people for money because I think to myself, “What if they say no?”

No one should say no to fixing accessibility issues.

The Drupal community cares about accessibility, governments have to care about accessibility, and governments rely on Drupal. Governments should pay to fix accessibility issues.

Talking Drupal and the U.S. Government

Before DrupalGovCon, the Talking Drupal podcast spoke with Abby Bowman about Drupal for Gov. They discussed the U.S. government’s usage and contribution to Drupal and Open Source. From Abby, I learned two interesting things about the U.S. government’s commitment to Open Source.

First, the U.S. government contributes code to Open Source via Code.gov. Second, the U.S. government requires all websites to be accessible, but there is no central department or team ensuring that all government websites are accessible. All the U.S. government’s websites would immediately benefit from having a team dedicated to finding and fixing accessibility issues.

If you want to hear about my experience at DrupalGovCon, check out Talking Drupal #221 - Live from GovCon.

How can government and organizations start paying to fix accessibility issues?

The word “pay” is liberally used throughout this post to emphasize the need for governments and organizations to budget for and commit to spending resources on fixing accessibility issues. Either a government-related project needs to get someone on their team to fix accessibility issues or nudge (a.k.a. pay) a vendor or outside developer to fix accessibility issues.

We have to keep asking and experimenting with how we ask organizations to contribute.

Companies that work for governments should pay to fix accessibility issues

In an ideal world, governments should pay to fix accessibility issues. Realistically, some agencies in government can’t directly contribute to Open Source. As stated earlier, any outside vendor who works for the government can contribute to Open Source. Saying that “We care about accessibility and fix accessibility issues within Drupal” is a great slide to include in a pitch deck for a government project.

Maybe governments can mandate that vendors are contributors to the Open Source projects that are being used by a government project.

What is realistically possible?

Realistically, we need to fix the accessibility issues in Drupal and Open Source projects. Everyone in the world should have equal access to digital information and services. Every government needs to ensure that the software it relies on is accessible.

In Drupal and Open Source, we ask individuals, organizations, and governments to get involved. Nudging governments to fix accessibility issues in Drupal and Open Source is just a very direct ask to fix a very specific problem that affects everyone.

There are two immediate ways for governments to get involved in fixing accessibility issues. Either governments dedicate resources to address the problem or they push their vendors to start addressing accessibility issues. In the Open Source community, we need to further embrace and encourage this type of contribution.

Embracing and encouraging governments and organizations contributing to Open Source

In the Drupal community, we always acknowledge the individuals contributing to Open Source by listing maintainers and contributors on project pages and even in the software’s MAINTAINERS.txt. We do recognize organizations supporting a Drupal project, but maybe we need to do more. In this day and age, we put corporate names on stadiums. Open Source has reached a scale where the organizations and governments that contribute are as important as the individuals. Notably, in the case of enterprise-focused software like Drupal, where organizations are the target consumer, we need to figure out how to get these organizations involved and adequately acknowledged.

Acknowledging that we are already doing a lot

The Drupal community has accomplished something pretty amazing. We have one of the most accessible and secure Open Source Content Manager Systems on the market. We care about accessibility and security and work hard to fix and improve them. As a community, we need to always strive to grow and evolve. Nudging governments to get more involved in fixing accessibility issues will help make our software more accessible to everyone.


Aug 13 2019
Aug 13

Today we will learn how to migrate dates into Drupal. Depending on your field type and configuration, there are various possible combinations. You can store a single date or a date range. You can store only the date component or also include the time. You might have timezones to take into account. Importing the node creation date requires a slightly different configuration. In addition to the examples, a list of things to consider when migrating dates is also presented.

Example syntax for date migrations.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD date whose machine name is ud_migrations_date. The migration to execute is udm_date. Notice that this migration writes to a content type called UD Date and to three fields: field_ud_date, field_ud_date_range, and field_ud_datetime. This content type and fields will be created when the module is installed. They will also be removed when the module is uninstalled. The module itself depends on the following modules provided by Drupal core: datetime, datetime_range, and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created.
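As an illustration, an abbreviated field storage file shipped in config/install could declare that enforced dependency like this (the file and field names mirror the example module, but treat the exact contents as a sketch rather than the full export):

# config/install/field.storage.node.field_ud_date.yml (abbreviated)
langcode: en
status: true
dependencies:
  module:
    - datetime
    - node
  enforced:
    module:
      # Uninstalling ud_migrations_date removes this field storage as well.
      - ud_migrations_date
id: node.field_ud_date
field_name: field_ud_date
entity_type: node
type: datetime
settings:
  datetime_type: date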

PHP date format characters

To migrate dates, you need to be familiar with the format characters of the date PHP function. Basically, you need to find a pattern that matches the date format you need to migrate to and from. For example, January 1, 2019 is described by the F j, Y pattern.

As mentioned in the previous post, you need to pay close attention to how you create the pattern. Upper and lowercase letters represent different things like Y and y for the year with four-digits versus two-digits, respectively. Some date components have subtle variations like d and j for the day with or without leading zeros respectively. Also, take into account white spaces and date component separators. If you need to include a literal letter like T it has to be escaped with \T. If the pattern is wrong, an error will be raised, and the migration will fail.

Date format conversions

For date conversions, you use the format_date plugin. You specify a from_format based on your source and a to_format based on what Drupal expects. In both cases, you will use the PHP date function's format characters to assemble the required patterns. Optionally, you can define the from_timezone and to_timezone configurations if conversions are needed. Just like any other migration, you need to understand your source format. The following code snippet shows the source and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      node_title: 'Date example 1'
      node_creation_date: 'January 1, 2019 19:15:30'
      src_date: '2019/12/1'
      src_date_end: '2019/12/31'
      src_datetime: '2019/12/24 19:15:30'
destination:
  plugin: 'entity:node'
  default_bundle: ud_date

Node creation time migration

The node creation time is migrated using the created entity property. The source column that contains the data is node_creation_date. An example value is January 1, 2019 19:15:30. Drupal expects a UNIX timestamp like 1546370130. The following snippet shows how to do the transformation:

created:
  plugin: format_date
  source: node_creation_date
  from_format: 'F j, Y H:i:s'
  to_format: 'U'
  from_timezone: 'UTC'
  to_timezone: 'UTC'

Following the documentation, F j, Y H:i:s is the from_format and U is the to_format. In the example, it is assumed that the source is provided in UTC. UNIX timestamps are expressed in UTC as well. Therefore, the from_timezone and to_timezone are both set to that value. Even though they are the same, it is important to specify both configuration keys. Otherwise, the from timezone might be picked up from your server’s configuration. Refer to the article on user migrations for more details on how to migrate when UNIX timestamps are expected.

Date only migration

The Date module provided by core offers two storage options. You can store the date only, or you can choose to store the date and time. First, let’s consider a date only field. The source column that contains the data is src_date. An example value is '2019/12/1'. Drupal expects date only fields to store data in Y-m-d format like '2019-12-01'. No timezones are involved in migrating this field. The following snippet shows how to do the transformation.

field_ud_date/value:
  plugin: format_date
  source: src_date
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

Date range migration

The Date Range module provided by Drupal core allows you to have a start and an end date in a single field. The src_date and src_date_end source columns contain the start and end date, respectively. This migration is very similar to date only fields. The difference is that you need to import an extra subfield to store the end date. The following snippet shows how to do the transformation:

field_ud_date_range/value: '@field_ud_date/value'
field_ud_date_range/end_value:
  plugin: format_date
  source: src_date_end
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

The value subfield stores the start date. The source column used in the example is the same one used for the field_ud_date field. Drupal uses the same format internally for date only and date range fields. Considering these two things, it is possible to reuse the field_ud_date mapping to set the start date of the field_ud_date_range field. To do it, you type the name of the previously mapped field in quotes (') and precede it with an at sign (@). Details on this syntax can be found in the blog post about the migrate process pipeline. One important detail is that when field_ud_date was mapped, the value subfield was specified: field_ud_date/value. Because of this, when reusing that mapping, you must also specify the subfield: '@field_ud_date/value'. The end_value subfield stores the end date. The mapping is similar to field_ud_date except that the source column is src_date_end.

Note: The Date Range module does not come enabled by default. To be able to use it in the example, it is set as a dependency of the demo migration module.

Datetime migration

A date and time field stores its value in Y-m-d\TH:i:s format. Note it does not include a timezone. Instead, UTC is assumed by default. In the example, the source column that contains the data is src_datetime. An example value is 2019/12/24 19:15:30. Let’s assume that all dates are provided with a timezone value of America/Managua. The following snippet shows how to do the transformation:

field_ud_datetime/value:
  plugin: format_date
  source: src_datetime
  from_format: 'Y/m/j H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  from_timezone: 'America/Managua'
  to_timezone: 'UTC'

If you need the timezone to be dynamic, things get a bit harder. The from_timezone and to_timezone settings expect a literal value. It is not possible to read a source column to set these configurations. An alternative is that your source column includes timezone information like 2019/12/24 19:15:30 -07:00. In that case, you would need to tweak the from_format to include the timezone component and leave out the from_timezone configuration.
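As a sketch of that alternative, assuming source values like 2019/12/24 19:15:30 -07:00, the mapping could use PHP's P format character to read the offset from the data itself and omit from_timezone:

field_ud_datetime/value:
  plugin: format_date
  source: src_datetime
  # 'P' matches the timezone offset (for example -07:00) in the source value,
  # so from_timezone is left out and the offset embedded in the data is used.
  from_format: 'Y/m/j H:i:s P'
  to_format: 'Y-m-d\TH:i:s'
  to_timezone: 'UTC'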

Things to consider

Date migrations can be tricky because they can be affected by things outside of the Migrate API. Here is a non-exhaustive list of things to consider:

  • For date and time fields, the transformation might be affected by your server’s timezone if you do not manually set the from_timezone configuration.
  • People might see the date and time according to the preferences in their user profile. That is, two users might see a different value for the same migrated field if their preferred timezones are not the same.
  • For date only fields, the user might see a time depending on the format used to display them. A list of available formats can be found at /admin/config/regional/date-time.
  • A field can always be configured to be presented in a specific timezone. This would override the site’s timezone and the user’s preferred timezone.

What did you learn in today’s blog post? Did you know that entity properties and date fields expect different destination formats? Did you know how to do timezone conversions? What challenges have you found when migrating dates and times? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Aug 13 2019
Aug 13

Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to HTML for direct service. As part of a recent competition at SCDUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing, and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker and for setup with Netlify, but they don’t quite line up with each other (I followed the Docker instructions, then the Netlify set, but had to chart my own course to get the site from the first project linked to the repo in the second). Since I’m getting used to using Docksal, when I had to fall back and do a bit of it myself, I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and set up an account and a site from the template. There is a button in those directions to trigger the Netlify setup; I’ve added one here as well (but if this one fails, check to see if they updated theirs):

Deploy to Netlify

Login with Github or similar service, and let it create a repo for your project.

Follow Netlify’s directions for setting up DNS so you can have the domain you want, and HTTPS (through Let’s Encrypt). It took it a couple hours to get that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to GitHub (or whatever service you used) and clone the repository to your local machine. There is a generated README on that project, but the directions aren’t 100% correct if you aren’t cloning onto a machine with a working PHP environment. This is when I switched over to Docksal and ran the following series of commands:

fin init
fin composer install
fin drush tome:install
fin drush uli

Then log into your local site using the domain from docksal and the link from drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment.

fin drush tome:static
git add .
git commit -m "Adding sample content"
git push

…now we wait while Netlify notices and builds the site…

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let’s try to get Umami onto this site. Go back to the terminal and have Tome reset everything (it’ll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile…and wait, because this takes a while…

Now just re-export the content and push it to your repo.

fin drush tome:static
git add .
git commit -m "Converting to Unami"
git push

And wait again, because this also takes a few minutes…

The Umami home page on my subdomain hosted at Netlify.

That really was all that was involved for a simple site. You can see my repository on GitHub if you want to see all of what was generated along the way.

The whole process is pretty straightforward, but there are a few things that it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow. Umami takes several minutes to deploy since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in that template is clear enough to follow if you are familiar with Composer projects:

command = "composer install && ./vendor/bin/drush tome:install -y && ./vendor/bin/drush tome:static -l $DEPLOY_PRIME_URL" 

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify’s branching workflow for managing dev, test, and production, it also protects your work that way.

My directions above load a standard Docksal container (which includes MySQL) because that’s quick and easy, but Tome falls back to using an SQLite database since you can be more confident it is there. Again, this is reliable but slow. If I were going to do this on a more complete project, I’d want a smaller Docksal setup or I’d switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or complex configuration of large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, it works well to keep a totally temporary instance of the site that can be rapidly deployed to a device, have its content updated, be pushed out to production, and then be deleted locally until needed again.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for that, and Tome has a module that claims to connect to that service, but I wasn’t able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

Aug 12 2019
Aug 12

Drupal has pretty good multilingual support out of the box. It's also fairly easy to create new entities and just add translation support through the annotation. These things are well documented elsewhere and a quick search will reveal how to do that. That is not what this post is about. This post is about the UX around selecting which fields are translatable.

On the Content Language page at http://example.com/admin/config/regional/content-language you can select which fields on your nodes, entities and various other translatable elements will be available on non-default language edit pages. The section at the top is the list of types of translatable things. Checking these boxen will reveal the related section. You can then go down to that section and start selecting fields to translate, save the form and they become available. All nice and easy.

I came into the current project late and this is my first exposure to this area of Drupal. We have a few content types and a lot of entities. I was ticking the box for the entity I wanted to add, jumping to the end of the form and saving it. When the form came back though it was not selected. I could not figure out why. It wasn't until a co-worker used the form differently to me that the issue was resolved. Greg ticked the entity, scrolled down the page and found it, ticked some of the checkboxen in the entity itself and then saved the page. The checkbox was still ticked.

The UX on this is pretty good once you know how it works. It could be fixed fairly easily with a system message pointing out that your checkbox was not saved because none of the items it exposed were selected.

I feel a patch coming on…

Aug 12 2019
Aug 12

Like many companies in our technology-enabled, globally connected environment, Promet Source operates with clients and team members all over the world. This reality creates a challenge for communications. The truth is, the more we put into our interactions, the more we get out of them.

I’ve been working with distributed teams for a long time. Even though I got very accustomed to joining video calls, until recently, I had opted to not turn my camera on. I guess it started a while back when I had my first virtual interactions with teammates. Probably due to shyness or my lack of knowledge of virtual communications, I tended to avoid the camera component. That's changed.

Eye-Opening Experience

Lately I’ve started using more face-to-face open communications with clients, collaborators and internally. It brings a higher level of empathy, honesty and receptiveness to the conversations. It’s been like going from a gray-scale image to a sudden, colorful world, and has provided an important step toward building trust and strengthening ties.

A couple weeks ago, I was on a call with a collaborator and a team member with whom I’ve been working for more than two months.

That day, I turned on my camera, and the dynamic quickly changed. We started having a candid conversation between the three of us. 

As we were chatting, my co-worker felt encouraged and also turned on his camera. We were able to see each other's faces for the first time in more than two months of working together. It made a difference.

Next, our collaborator followed suit and turned his camera on. The conversation was instantly raised up to another level. Once one of us opened up, others felt empowered to do the same. It was like a chain reaction or “Domino Effect.”

A New Dimension

We could comment about our surroundings, our clothes, our hair, what was going on in our lives and in our parts of the world!  We were able to get talking and build rapport so much more easily.

The collaboration on the call became alive and we got more out of it than if we had not had the advantage of video.

Looking back on this conversation, it would be easy to say that it was video that made the difference, but that was only one aspect. It was empathy that drove the emotional connection. 

The cameras helped. We were also willing to open ourselves up to a more honest dialog, sharing something personal, becoming available and responsive to each other. 

The Key: Trust

Trust your teammates. Trust your clients. Trust your collaborators. Trust that there is value in what you have to share.

Here’s what I’ve concluded are the keys to successful interactions even when working across multiple time zones.

Lead by Example

Be confident and share honestly. Let other people see you and hear you. Let your emotions shine through your expressions (facial expressions, expressions through the tone of your voice, the words you choose, etc.) 

Open up and people will trust you, and they will be more likely to open up too.  The Domino Effect can be very exciting.

Leverage Human Interaction

Promet Source is a leading practitioner of human-centered design. We know what it means to design for humans and we facilitate human-centered design workshops all over the country to enhance effectiveness and outcomes.

Just as we consistently emphasize that we are designing for humans, we are careful to not lose sight of the fact that we are designing by humans.

Strengthen Teams through Sharing

Too often, the left-brain, technology-driven environment in which we operate ignores the powerful impact of the human element in all of our engagements. Even when separated by borders and time zones, efforts to connect on a personal level pay off in ways that are often unanticipated.

Have you found this to be the case? Share your thoughts in the comment section below on why and how connecting on a human level can drive better outcomes.

Sharing your thoughts and experiences can go a long way toward a greater sense of connection and community in our dispersed, digital world.
 

Aug 12 2019
Aug 12
Image of the Rosetta Stone

In Mastering Drupal 8 Multilingual: Part 1 of 3, we focused on planning for Drupal 8 multilingual and its impact on a project's timeline and budget.

In Part 2 (below), we cover everything you need to know to have a functioning multilingual site with no custom code. Part 3 of the series covers more advanced techniques for site builders and front-end developers.

Aug 12 2019
Aug 12

Sean is a strong believer in the open source community at large and in the idea that working collaboratively is best for creating awesome projects. His community work extends to maintaining and building the BADCamp website, as well as helping to maintain Docksal, a tool used for managing development environments.
