Apr 10 2018

As Drupal module maintainers, we at Nextide need to be constantly updating our modules to add new features or patch issues. Whether your module is available for download or is a custom module for a client site, you can't expect users to uninstall and reinstall it to pick up new features. If your changes touch data or configuration, learning update hooks is mandatory. This post will show how we created a new content entity in a Drupal update hook.

Our Maestro module's first release for Drupal 8 housed the core workflow engine, task console and template builder.  It was a rewrite of the core engine's capabilities and was a major undertaking.  However, as Drupal 8's core matures, and as our deployments of Maestro continue, new features, patches and bug fixes for Maestro are inevitable.  In order to bring those new features to the core product, we had to ensure that anything we added to the module was available to existing users and deployed sites via the Drupal update routine.

One of the first new features of Maestro, not included in the initial release, is the capability to show a linear bird's eye view of the current progression through a process. This feature requires a new content entity. A fresh install of Maestro would install the entity; however, for existing sites, uninstalling and reinstalling Maestro is not an option. A Drupal update hook is required to install the entity. We found that the available documentation describing how to create a new content entity via an update hook was nearly non-existent, and some of the top Google results show outdated and even dangerous methods of solving the problem.


Define Your Content Entity

The first step is to define your entity.  Drupal 8's coding structure requires that you place your content entity in your /src/Entity folder.  There's a good deal of documentation on how to create a content entity for Drupal 8 on module install.  Follow the documentation on creating the entity's required permissions and routes based on your requirements. You can find our Maestro Process Status entity defined in code here:  https://cgit.drupalcode.org/maestro/tree/src/Entity/MaestroProcessStatus.php
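For orientation, here is a heavily trimmed, illustrative sketch of what such a definition looks like; the real entity at the link above declares many more keys, handlers and fields, and the field shown here is made up:

namespace Drupal\maestro\Entity;

use Drupal\Core\Entity\ContentEntityBase;
use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * @ContentEntityType(
 *   id = "maestro_process_status",
 *   label = @Translation("Maestro Process Status"),
 *   base_table = "maestro_process_status",
 *   entity_keys = {
 *     "id" = "id",
 *     "uuid" = "uuid",
 *   },
 * )
 */
class MaestroProcessStatus extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
    // The parent creates the base fields for the entity keys declared above.
    $fields = parent::baseFieldDefinitions($entity_type);
    // Purely illustrative extra field; the real entity defines its own.
    $fields['message'] = BaseFieldDefinition::create('string')
      ->setLabel(t('Status message'));
    return $fields;
  }

}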

The Maestro Process Status entity code, in conjunction with the appropriate permissions, routes and access control handlers, will install the entity on module install. However, if you already have Maestro installed, configured and executing workflows, running update.php will not install this entity! Update hooks to the rescue...

Creating a Drupal 8 Update Hook

There's good documentation available on how to write update hooks, but what's missing is how to inject a new content entity for already installed modules. If you use the Google machine, you'll come across many posts and answers to this question showing the following as the (wrong) solution:

/**
 * This is an example of WHAT NOT TO DO! DON'T DO THIS!
 */
function hook_update_8xxx() {
  // For the love of Drupal, do not do this.
  \Drupal::entityDefinitionUpdateManager()->applyUpdates(); // No, really, don't do this.
  // Are you still trying to do this? Don't.
}

WRONG!!!  DON'T DO THIS!!!

This simple update hook will most certainly pick up and install your new entity. It will also apply any other module's entity updates along with yours! This can cause catastrophic issues when modules that have not yet been updated get their entities altered by other modules. What you need to do is tell Drupal to install the new entity explicitly, with this less-than-obvious piece of code, which I will explain after showing it:

/**
 * Update 8001 - Create maestro_process_status entity.
 */
function maestro_update_8001() {
  // Check if the table exists first. If not, then create the entity.
  if (!\Drupal::database()->schema()->tableExists('maestro_process_status')) {
    \Drupal::entityTypeManager()->clearCachedDefinitions();
    \Drupal::entityDefinitionUpdateManager()
      ->installEntityType(\Drupal::entityTypeManager()->getDefinition('maestro_process_status'));
  }
  else {
    return 'Process Status entity already exists';
  }
}

The update hook is found in the maestro.install file and I've removed some of the extra Maestro-specific code to simply show how to get your content entity recognized and installed. 

  1. We do a simple check to see if the maestro_process_status table exists.  Since content entities store their data in the database, if the table doesn't exist, our content entity is not installed. 
  2. We clear the cached definitions from the entityTypeManager.  This should force Drupal to read in all of the definitions from storage.
  3. Using the entityDefinitionUpdateManager (also used in the "wrong" example), we use the installEntityType method which takes an entity definition as an input.
  4. We pass in the maestro_process_status definition using the getDefinition method of the entityTypeManager object.

At this point, Drupal installs the entity based on the definition I showed above.  Your content entity is installed, including the database table associated with the entity.

Apr 09 2018

A new workflow for distribution configuration management.

Drupal core workflow limitations

It is not a secret that configuration management in Drupal 8 core was made for only one specific use case: moving configuration from one environment to another for the same site. Everything else was left for contrib to find solutions for. Config Installer is necessary to set up a site starting from existing configuration, and Config Split is needed for environment-specific configuration that goes beyond simple configuration overrides, such as development modules enabled only locally. But the whole workflow still assumes that developers - most often on the same team - have control over the environments and deployment of the site.

Distributions are not covered

Distributions are very different. When maintaining a distribution, one doesn't develop a specific site but rather the template to build different sites with. Deploying configuration as we do it for a single site has no meaning because there is no single production site to deploy to. Each site based off a distribution is its own site, with its own site UUID, its own possible customisations, and possibly its own development and production environments. At the same time, as a consumer of a distribution - the owner of a site based off it - one wants to update the distribution's features while keeping those customisations.

Update hooks?

Different distributions handle this in different ways. A first approach is to treat it the same way you would a schema update and change the configuration on a site with an update hook. The problem with this is that update hooks are meant to fix the database so it can run with the version of the code in use on the site: the first thing one needs to do after updating the code base is run the update hooks to realign the database. Configuration management has different needs. Updating configuration in an update hook means one has to be careful to check that the configuration is in the expected state, and the only recourse when it is not is to log the problem and make sure users are informed which manual actions they need to take to update the configuration themselves, since there is no way to involve the user with a choice while update hooks are run.

We propose a new kind of workflow. In essence, it allows the developers of a distribution to say how the distribution's configuration is supposed to be updated, and it allows owners of sites based on the distribution to treat the distribution maintainers as developers on their team. At the same time, this does not interfere with the configuration management workflow for staging and deploying configuration to the production site.

The primary function of Config Distro is to allow updating a distribution's configuration in a new workflow, giving site owners more control over and insight into how the configuration of their site changes. It is a new way to imagine the configuration management system: sites own their configuration, but distributions can still provide updated configuration for existing sites.

A new dimension

In a blog post one year ago I talked about different dimensions of configuration management:

  • vertical: moving configuration between environments but for the same site
  • horizontal: moving configuration between different sites, for example for distribution updates.

Config Distro is a UI and a drush command for that horizontal workflow. It is intended to be used by site builders who base their site on a distribution and want to import configuration updates from the updated distribution. The UI is essentially a parallel to the configuration import screen for the workflow supported by core. But instead of importing the configuration from the files used to move and deploy configuration between environments, you import from a special configuration storage which starts out with the site's active configuration and has the distribution's updates applied to it. This means you do not have problems with mismatched site UUIDs, or with things you added that were not part of the distribution being removed. And updates can apply (or not) to translations even if the distribution did not include languages.

Where do updates come from

If you only install Config Distro, the import screen will forever show that there is nothing to import. This is because the module only provides the framework for this to work, i.e. the UI and the drush command; it is not opinionated about how or what should be updated. All the complexity can be addressed by a distribution maintainer with the help of a ConfigFilter plugin (the same way Config Split and Config Ignore work). One such module is Config Sync. All the complexity of finding out what configuration a module originally shipped with, what it provides now, and whether a user has changed the originally installed configuration is left to Config Sync and its dependencies. A bare-bones sketch of such a plugin follows.
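To give a rough idea of the extension point, here is a minimal, illustrative sketch, assuming the ConfigFilter plugin API provided by the config_filter module; the plugin ID, class and the altered configuration are made up, and the wiring that attaches the plugin to Config Distro's storage is omitted:

namespace Drupal\my_distro\Plugin\ConfigFilter;

use Drupal\config_filter\Plugin\ConfigFilterBase;

/**
 * Presents updated distribution configuration to the distro storage.
 *
 * @ConfigFilter(
 *   id = "my_distro_updates",
 *   label = "My distribution updates",
 * )
 */
class MyDistroUpdates extends ConfigFilterBase {

  /**
   * {@inheritdoc}
   */
  public function filterRead($name, $data) {
    // Overlay an updated value when this configuration is read; a real
    // filter would compute the updates, the way Config Sync does.
    if ($name === 'system.site') {
      $data['slogan'] = 'Updated by the distribution';
    }
    return $data;
  }

}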

Choice

Just like Config Ignore lets you opt out of parts of the configuration management workflow, Config Distro has a Config Distro Ignore module that lets you protect certain configuration from being changed when you hit the "import" button. The "Retain configuration" button is available right next to the "View differences" button.

Config Distro overview

Clicking it leads to a form that lets you choose to retain the configuration permanently or only for this specific update. It also allows you to ignore an update for a specific language.

Config Distro detail

Example

In our team we set up a site based on a distribution. We added our own login module for our single sign-on and added a few webforms. Now there is a new version of the distribution with some new features, and I would like to upgrade the site to use them. I am tasked with updating the site; here is what I do (see the condensed command sketch after the list):

  • I update the code of the distribution by specifying the new version in composer.json and do a composer update to get the updated code*
  • I run the database updates with drush updb to align the database with the code*
  • I go to the distro update screen provided by Config Distro
    • This screen looks very familiar: it looks the same as when I import configuration changes my team mates made, after getting their code with git pull.
    • I see that a new module is going to be installed and a few views and settings from the distribution are updated.
    • Our webforms are not going to be removed and our custom modules are not uninstalled.
  • I click "import all"
  • Now my site has the updated code and configuration, so I export the configuration for deployment: drush cex and commit*
  • My colleagues get the update the same way they get normal changes that happen during development*:
    • git pull to get the code
    • composer install to get the external code (vendor, contrib, etc.)
    • drush updb to align the database with the new code base
    • drush cim to import the sync configuration

* This is the same procedure as for any module update on any site.
Distributions may add an "Auto upgrade" setting and then run the Config Distro import in a post_update_hook, bypassing the manual step otherwise required of site administrators upgrading the distribution.
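For reference, the example above condenses into roughly the following commands; the distribution's package name is hypothetical, and the Config Distro import itself happens in the UI or via the module's drush command:

# Update the code base and the database (same as for any module update).
composer update example/distribution
drush updb
# Import the distribution's configuration updates via the Config Distro
# screen (or its drush command), then export and deploy as usual:
drush cex
git add config composer.json composer.lock && git commit -m "Distribution update"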

Conclusion

Config Distro provides a new workflow for updating distributions with a familiar feel. It recognizes that update hooks are not an adequate solution for updating a site's configuration when the site owns that configuration and site administrators may have changed the initially provided configuration.
It allows developers of distributions to alter a site's configuration through ConfigFilter plugins, and it gives site administrators a choice of what to import.
Config Distro is just the framework for extending core's configuration management to allow configuration changes to get into the site from a third party such as the distribution maintainers. It does not interfere with the traditional workflow, and importing the configuration updates from a distribution should be seen like any other configuration change, such as adding a new view or changing permissions: you go to an admin page and click some things in a form, then you export the configuration and deploy it.

Future - CMI 2.0

While it already works, Config Distro is still in an alpha state. It is part of a larger effort to improve configuration management in Drupal. You can find further information and participate in the discussion on drupal.org.

Apr 09 2018

As we head into DrupalCon week I've got something big to announce. With the blessing of the Aegir core maintainers, I am taking the 4.x branch of Provision I have been working on and I am separating it from the Aegir Project.

Provision 4 will still power Aegir. We are working on a patch to Hostmaster that will allow us to run a command other than Drush, allowing Provision 4 to become the primary back-end to Aegir 4. This means it will also be able to power the current generation of DevShop.

I've created a new GitHub organization at github.com/provision4 and a new twitter account at twitter.com/provisionCLI.  

I am still working on documentation, but will have completed the Getting Started guide by the time I land in Nashville Tuesday morning: provision.gitbook.io/cli/ 

A few months ago I gave a sneak peek at the project at the DrupalNYC meetup. You can read about it and watch the presentation at www.thinkdrop.net/presenting-provision-4x-cli-developer-sneak-peek-drupalnyc

This new adventure is certainly intimidating. I hope to create a welcoming and helpful community for this project, but it takes a lot of work. Thank you for your patience as we work to bring this project to the masses!

I am scheduling a BoF for Provision4, so keep a lookout for time and day once it is secured. Please get in touch if you'd like to talk more!

Apr 09 2018

In February, we welcomed Scott Vinkle as our guest speaker. Scott is an accessibility expert and front-end developer for Shopify who spoke to our meetup group about creating accessible React JS apps. Scott has been active in the accessibility community since 2011. He has worked with many groups including the accessibility project, the a11y tour, CodePen Ottawa, and many, many others.

Takeaways

The React JavaScript library is a great way to create reusable modular components that can be shared among projects. But how do you ensure your React apps are usable by all kinds of people? Scott Vinkle shared his tips on how to build your React apps with accessibility baked-in!

If you are new to the world of JavaScript or Accessibility, or maybe just need a refresher on either subject, please read Scott’s article for more background information: https://svinkle.me/react-a11y

Can React Apps Be Accessible?

Yes! React apps can absolutely be made accessible for people with disabilities. Here are some areas that Scott presented to the group: how to set a page title, how to create a live announcements component, and how to manage keyboard focus between components and when loading new pages.

Scott reviewed React's accessibility linter, which comes with each new app, and spotlighted a fairly new feature: React fragments. Scott also shared some thoughts on writing semantic HTML within React components. He finished up the presentation by going through a small demo app he created to help illustrate each point.

Some Things to Watch for When First Starting to Develop with React

When it comes to writing HTML attributes in React components, they need to be written in camelCase style. This means that when an attribute name is made up of two words, the second word must start with a capital letter. For example, the attribute "tab index" needs to be written with a capital "I" (tabIndex), "content editable" with a capital "E" (contentEditable), and "max length" with a capital "L" (maxLength). It's worth noting that the aria-* and data-* attributes are exempt from this rule.

There are a few reserved words which come into conflict when writing HTML attributes within your React components. For example, the reserved word "for" is used in HTML when pairing a label with an input element, while in JavaScript "for" is used to loop through items using an index until a specified condition evaluates to false. So the for attribute needs to be written as "htmlFor". Note the capital "F".

Another example is the reserved word "class", which is used in HTML to allow CSS and JavaScript to select and access specific elements via the class selector; in JavaScript, it's used to create a class declaration. So, when adding a class to an element, this needs to be written as "className". Also, in HTML5 there are a few elements which no longer require the closing tag, but in React these elements require the self-closing forward slash. The snippet below illustrates these conventions.
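As a quick, illustrative sketch (the component, class name and attribute values are made up):

// camelCase attributes (maxLength, tabIndex), htmlFor instead of for,
// className instead of class, and an explicitly self-closed <input />.
const EmailField = () => (
  <div className="form-item">
    <label htmlFor="email">Email</label>
    <input id="email" type="email" maxLength={255} tabIndex={0} />
  </div>
);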

Example Code: How to Set a Page Title in React

Let's take a look at how to set a page title. First of all, why bother setting the page title content? There are a few reasons why you'd want to do this. For one, it updates the content in the browser tab so sighted users have a better understanding of where they are in the app. Two, it helps to increase search engine optimization. So, when something like Google comes along and indexes your app, it'll have that information. The page title content is often the first bit of information announced by screen readers, so users of assistive technology will have a better understanding of their current place in the app.

So, how do I do this in React? The simplest method that Scott found while doing research was to set the title using the JavaScript "document.title" global property. There is a function called "componentDidMount"; this is a React lifecycle method. The code within this function executes when the component is loaded onto the screen, and the single line within this function uses "document.title" to set a string value of "my page title".
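Here is roughly what that looks like in code; the component name and title string are illustrative:

import React from 'react';

class AboutPage extends React.Component {
  componentDidMount() {
    // Runs once the component has been rendered to the screen.
    document.title = 'My page title';
  }

  render() {
    return <h1>About us</h1>;
  }
}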

When this component loads, the page title will be set to this value. It's also worth noting that this is not specific or unique to React; this method can be used in any JavaScript-based application or website. For quicker work, there are pre-existing components that other developers have made; these components can be included in your project via npm.

For example, there is React "DocumentTitle", which is used by wrapping your content with a DocumentTitle component; you then add a title prop and set its value to the desired page title. The other one is called React "Helmet", and it is used by creating a Helmet component. Within this, you can set anything you'd like that would normally appear in the head section of the page.


For more of Scott's accessible React code examples, please watch the video!

Resources

YouTube Video

A11Y Talks February 2018 - Scott Vinkle - Creating Accessible React Apps

Full-text transcript of the video

Links mentioned in the talk:

Drupal Accessibility Group

Join the Accessibility group on Drupal.org for hints, tips, discussions, and patch proposals to help make Drupal more inclusive.

Apr 09 2018

Here is a brief account of how we applied the most critical Drupal security update of the past couple of years to the web projects we support and monitor.

As you probably know, our company supports and monitors the performance of several dozen Drupal-powered sites. On 2018.03.21 it was announced that on 28.03, around 22:00 +0300, a critical Drupal security update would be released. Of course, it was absolutely necessary to apply it to all sites for which we are responsible, and to do so within the shortest time possible.

As you understand, the web projects we support are not uniform; they are in fact quite different from each other, run different versions of Drupal, and occupy different servers. Many sites endured radical changes in their development teams before we undertook their support and performance monitoring.

We tasked our DevOps engineers with developing a solution that allows:
1) applying the security update to all supported and monitored projects within one (1) hour;
2) updating the Drupal core or applying the patches available;
3) backing up sites before applying the updates.

Within a week we developed and tested the solution. We used Ansible, git, and bash. Also, we integrated the solution with our monitoring system.
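While the real solution was orchestrated with Ansible, the per-site flow was conceptually similar to this bash sketch; the paths, package names and update method shown here are illustrative, not our actual implementation:

#!/bin/bash
set -e
for site in /var/www/*; do
  cd "$site"
  # 1) Back up the site before touching anything.
  drush sql-dump --result-file=auto
  # 2) Update core, or apply the official patch where a full update
  #    is not possible.
  composer update drupal/core --with-dependencies
  # 3) Run database updates and rebuild caches.
  drush updb -y && drush cr
done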

The critical update was released on schedule. Our specialists checked the changes made to the core and greenlighted the automated update solution we had developed. Nevertheless, to avoid any problems with the operation of our clients' websites, we did a test first: we ran the automated update for a small group of sites, which included our own projects and test sites. The test run returned a number of issues that were remedied promptly. After that, we ran the update solution for all the supported web projects.

The results:
1) All sites continued to work as usual; our monitoring tools never reported any problems;
2) The entire update procedure took 1 hour, as we had planned (remedying issues included);
3) We now have an excellent solution that automates the uncomplicated but labor-intensive process of applying security updates.

From now on, this automated Drupal update solution will be used for all projects and servers that we support.

Apr 08 2018

This blog post summarizes the 572 comments spanning 5 years and 2 months to get REST file upload support in #1927648 committed. Many thanks to everyone who contributed!

From February 2013 until the end of March 2017, issue #1927648 mostly … lingered. On April 3 of 2017, damiankloip posted an initial patch for an approach he’d been working on for a while, thanks to Acquia (my employer) sponsoring his time. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just look at that commit message!

Background: API-First Drupal: file uploads!.

  • Little happened between February 2013 (opening of issue) and November 2015 (shipping of Drupal 8).
  • Between February 2013 and April 2014, only half a dozen comments were posted, until moshe weitzman aptly said: “Still a gaping hole in our REST support. Come on Internets …”
  • The first proof-of-concept patch followed in August 2014 by juampynr, but was still very rough. A fair amount of iteration occurred that month between juampynr and Arla. The patch used base64 encoding, which means it needed 33% more bytes on the wire to transfer a file than if the file were transmitted in binary.
  • Then again a period of silence. Remember that this was around the time when we were trying to get Drupal 8 to a shippable state: the #1 priority was to stabilize, fix critical bugs. Not to add missing features, no matter how important. To the best of my knowledge, the funding for those who originally worked on Drupal 8’s REST API had also dried up.
  • In May 2015, another flurry of activity occurred, this time fueled by marthinal. Comment #100 was posted. Note that all patches up until this point had zero validation logic! Which of course was a massive security risk. marthinal was the first to state that this was really necessary, and did a first iteration of it.
  • A few months of silence, and then again progress in September, around DrupalCon Barcelona 2015. dawehner remarked in a review on the lack of tests for the validation logic.
  • In February 2016 I pointed out that I’m missing integration tests that prove the patch actually works. To which Berdir responded that we’d first need to figure out how to deal with File entity type access control!
  • Meanwhile, marthinal worked on the integration test coverage in 2016. And … we reached comment #200.
  • In May 2016, I did a deep review, and found many problems. Quick iterations fixed those problems! But then damiankloip pointed out that despite the issue being about the general File (de)serialization problem, it actually only worked for the HAL normalization. We also ended up realizing that the issue so far was about stand-alone File entity creation, even though those entities cannot be viewed stand-alone nor created stand-alone through the existing Drupal UI: they can only be created to be referenced from file fields. Consequently, we had no access control logic for this yet, nor was it clear how access control should work, nor how validation should work! Berdir explained this well in comment 232. This led us to explore moving parts of https://www.drupal.org/project/file_entity into core (which would be a hard blocker). The issue then went quiet again.
  • In July 2016, garphy pointed out that large file uploads still were not yet supported. Some work around that happened. In September, kylebrowning stressed this again, and provided a more detailed rationale.
  • Then … silence. Until damiankloip posted comment #281 on April 3, 2017. Acquia was sponsoring him to work on this issue. Damian is the maintainer of the serialization.module component and therefore of course wanted to see this issue get fixed. My employer Acquia agreed with my proposal to sponsor Damian to work on REST file upload support, because after 280 comments some fundamental capabilities were still absent: this was such a complex issue, with so many concerns and needs to balance, that it was nigh impossible to finish without dedicated time.
    To get this going, I asked Damian to look at the documentation for a bunch of well-known sites to observe how they handle file uploads. I also asked him to read the entire issue. Combined, this should give him a good mental map of how to approach this.
  • #281 was a PoC patch that only barely worked but did support binary (non-base64) uploads. damiankloip articulated the essential things yet to be figured out: validation and access checking. Berdir chimed in with his perspective on that in #291 … in which he basically outlined what ended up in core! Besides Berdir, dagmar, dawehner, garphy, dabito and ibustos all chimed in and influenced the patch. Berdir, damiankloip and I had a meeting about how to deal with validation; I disagreed with both of them, and turned out to be very wrong! More feedback was provided by the now familiar names, and the intense progress/activity continued for two months, until comment #376!
  • Damian got stuck on test coverage — and since I’d written most of the REST test coverage in the preceding months, it made sense for me to pick up the baton from Damian. So I did that in July 2017, just making trivial changes that were hard to figure out. Damian then continued again, expanding test coverage and finding a core bug in the process! And so comment #400 was reached!
  • At the beginning of August, the patch was looking pretty good, so I did an architectural review. For the first time, we realized that we first needed to fix the normalization of File entities before this could land. And many more edge cases needed to be tested for us to be confident that there were no security vulnerabilities. blainelang did manual testing and posted super helpful feedback based on his experience. Blaine and Damian tag-teamed for a good while, then garphy chimed in again, and we entered September. Then dawehner chimed in once more, followed by tedbow.
  • On September 6 2017, in comment #452 I marked the issue postponed on two other issues, stating that it otherwise looked tantalizingly close to RTBC. aheimlich found a problem nobody else had spotted yet, which Damian fixed.
  • Silence while the other issues get fixed … and December 21 2017 (comment #476), it finally was unblocked! Lots of detailed reviews by tedbow, gabesullice, Berdir and myself followed, as well as rerolls to address them, until I finally RTBC‘d it … in comment #502 on February 1 2018.
  • Due to the pending Drupal 8.5 release, the issue mostly sat waiting in RTBC for about two months … and then got committed on April 3 2018!!!

Damian’s first comment (preceded by many hours of research) was on April 3, 2017. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just look at that commit message!

Apr 08 2018

Drupal 8’s REST API has been maturing steadily since Drupal 8.0.0 was released in November 2015. One of the big missing features has been file upload support. As of April 3 2018, Drupal 8.6 will support it when it ships in September 2018! See the change record for the practical consequences: https://www.drupal.org/node/2941420.

It doesn’t make sense for me to repeat what is already written in that change record: that already has both a tl;dr and a practical example.

What I’m going to do instead, is give you a high-level overview of what it took to get to this point: why it took so long, which considerations went into it, why this particular approach was chosen. You could read the entire issue (#1927648), but … it’s one of the longest issues in Drupal history, at 572 comments. You would probably need at least an entire workday to read it all! It’s also one of the longest commit messages ever, thanks to the many, many people who shaped it over the years:

Issue #1927648 by damiankloip, Wim Leers, marthinal, tedbow, Arla, alexpott, juampynr, garphy, bc, ibustos, eiriksm, larowlan, dawehner, gcardinal, vivekvpandya, kylebrowning, Sam152, neclimdul, pnagornyak, drnikki, gaurav.goyal, queenvictoria, kim.pepper, Berdir, clemens.tolboom, blainelang, moshe weitzman, linclark, webchick, Dave Reid, dabito, skyredwang, klausi, dagmar, gabesullice, pwolanin, amateescu, slashrsm, andypost, catch, aheimlich: Allow creation of file entities from binary data via REST requests

Thanks to all of you in that commit message!

I hope it can serve as a reference not just for people interested in Drupal, but also for people outside the Drupal community: there is no One Best Practice Way to handle file uploads for RESTful APIs. There is a surprising spectrum of approaches. Some avoid the problem space entirely, by only allowing files to be “uploaded” via a publicly accessible URL. Read on if you’re interested. Otherwise, go and give it a try!

Design rationale

General:

  • Request with Content-Type: application/octet-stream aka “raw binary” as its body, because base64-encoded means 33% more bytes, implying both slower uploads and more memory consumption. Uploading videos (often hundreds of megabytes or even gigabytes) is not really feasible with base64 encoding.
  • Request header Content-Disposition: file; filename="cat.jpg" to name the uploaded file. See the Mozilla docs. This also implies you can only upload one file per request. But of course, a client can issue multiple file upload requests in parallel, to achieve concurrent/batch uploading.
  • The two points above mean we reuse as much as possible from existing HTTP infrastructure.
  • Of course it does not make sense to have a Content-Type: application/octet-stream as the response. Usually, the response is of the same MIME type as the request. File uploads are the sensible exception.
  • This is meant for the raw file upload only; any metadata (for example: source or licensing) cannot be associated in this request: all you can provide is the name and the data for the file. To associate metadata, a second request to “upgrade” the raw file into something richer would be necessary. The performance benefit mentioned above more than makes up for the RTT of a second request in almost all cases.

PHP-specific:

  • php://input, because reading the whole request body into memory would be limited by the PHP memory limit; reading it as a stream is not (a minimal sketch follows).
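To illustrate the bullet above, here is a minimal, illustrative PHP sketch of streaming the request body to a file without ever holding the whole upload in memory (the destination URI is hypothetical):

// Copy the raw request body to a file chunk by chunk; memory usage
// stays constant no matter how large the upload is.
$in = fopen('php://input', 'rb');
$out = fopen('temporary://upload.tmp', 'wb');
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);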

Drupal-specific:

  • In the case of Drupal, we know that it always represents files as File entities. They don’t contain metadata (fields), at least not with just Drupal core; it’s the file fields (@FieldType=file or @FieldType=image) that contain the metadata (because the same image may need different captions depending on its use, for example).
  • When a file is uploaded for a field on a bundle on an entity type, a File entity is created with status=false. The response contains the serialized File entity.
  • You then need a second request to make the referencing entity “use” the File entity, which will cause the File entity to get status=true.
  • Validation: Drupal core only has the infrastructure in place to use files in the context of an entity type/bundle’s file field (or derivatives thereof, such as image fields). This is why files can only be uploaded by specifying an entity type ID, bundle ID and field name: that’s the level where we have settings and validation logic in place. While not ideal, it’s pragmatic: first allowing generic file uploads would be a big undertaking and somewhat of a security nightmare.
  • Access control is similar: you need create access for the referencing entity type and field edit access for the file field.

Result

If we combine all these choices, then we end up with a new file_upload @RestResource plugin, which enables clients to upload a file:

  1. by POSTing the file’s contents
  2. to the path /file/upload/{entity_type_id}/{bundle}/{field_name}, which means that we’re uploading a file to be used by the file field of the specified entity type+bundle, and the settings/constraints of that field will be respected.
  3. … don’t forget to include a ?_format URL query argument; this determines what format the response will be in
  4. sending the file data as an application/octet-stream binary data stream, that is, with a Content-Type: application/octet-stream request header. (This allows uploads of an arbitrary size, including uploads larger than the PHP memory limit.)
  5. and finally, naming the file using the Content-Disposition: file; filename="filename.jpg" header
  6. the five preceding steps result in a successfully uploaded file with status=false — all that remains is to perform a second request to actually start using the file in the referencing entity!
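Putting those steps together, an upload request might look like the following curl sketch; the entity type, bundle, field name and host are illustrative, and authentication, which depends on the site’s configuration, is omitted:

curl \
  -X POST \
  -H 'Content-Type: application/octet-stream' \
  -H 'Content-Disposition: file; filename="cat.jpg"' \
  --data-binary @cat.jpg \
  'https://example.com/file/upload/node/article/field_image?_format=json'

The response body is the serialized File entity, with status=false until a second request references the new file from the field.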

Four years in the making — summarizing 572 comments

From February 2013 until the end of March 2017, issue #1927648 mostly … lingered. On April 3 of 2017, damiankloip posted an initial patch for an approach he’d been working on for a while, thanks to Acquia (my employer) sponsoring his time. Exactly one year later his work is committed to Drupal core. Shaped by the input of dozens of people! Just look at that commit message!

Want to actually read a summary of those 572 comments? I got you covered!

Apr 08 2018

Well, I've finally done it! I migrated this blog from Drupal 6 to Drupal 8. I did a test run yesterday with a personal blog of mine and found the process was relatively easy. Both sites are relatively simple.

There are other blog posts about the process as well as documentation on Drupal.org so I won't repeat lots of details.

I'm running Drupal 8.5.1 as of this blog post. I chose to use all of the various migration modules that come with Drupal 8 core, including the two marked as experimental. Once I had them enabled I clicked the link to get to the upgrade form in the UI. One of the sites did have file uploads and all of them were pulled in seamlessly. I had created a backup of the sites/[site-name] directory and placed it in my new Drupal 8 sites directory.

Here are issues I encountered:

  • Administration Menu is (apparently) not properly ported to Drupal 8 and it blew things up on the site yesterday. I had to manually clean things up in the database so that module was not enabled in my Drupal 8 site and clear cache.
  • The taxonomy term reference to the Tags vocabulary needed the field setting updated so that it selected the Tags vocabulary.
  • When I click the user in the Toolbar it does not switch the tray so that it shows View profile, Edit profile and Log out. (That's still an issue as I write this. I haven't investigated it enough to figure out what's going wrong, nor have I filed an issue.)
  • The feed that taxonomy provides for terms has changed slightly. I filed an issue to get Planet Drupal to use the new feed.
  • No views were imported.
  • The pathauto patterns weren't imported.
  • Disqus doesn't handle the migration.

Other than those things, I can't say I ran into much of anything else. And, aside from the site blowing up from Administration Menu, there wasn't much that presented a real challenge.

If you are reading this post on Planet Drupal, then you know I'm back up and running!

I'll still need to theme this site again. And I'll need to replace some functionality that was previously provided by Advanced Blog. I haven't yet installed and tested out the Drupal 8 port of Tagadelic.

One more note: I decided to just delete the comments that I had for the Drupal 6 version of this site since I don't want to use the Comment module, preferring to use Disqus instead.

Apr 07 2018

DrupalCon Nashville includes a full track of core conversations where you can learn about current topics in Drupal core development, and a week of sprints where you can participate in shaping Drupal's future.

In addition to the core conversations, we have a few meetings on specific topics for future core development. These meetings will be very focused, so contact the listed organizer for each if you are interested in participating. There are also birds-of-a-feather (BoF) sessions, which are open to all attendees and require no advance notice.

Also be sure to watch Dries' keynote for ideas about Drupal's future! Check out the extended Dries Q&A session on Thursday as well to get even more questions answered.

Time, topic, and organizer:

  • Monday, 9 April, 10:00 - Configuration validation to support REST and JS (Wim Leers)
  • Tuesday, 10 April, 10:45 - Improving Drupal's evaluator experience (BoF) (tedbow)
  • Tuesday, 10 April, 15:45 - Layout Initiative meeting (BoF) (tim.plunkett)
  • Wednesday, 11 April, 10:45 - Official local development environment (BoF) (tedbow)
  • Wednesday, 11 April, 14:15 - Media roadmap meeting (phenaproxima)
  • Friday, 13 April, 09:00 - Release cycle changes discussion (core committers only) (Gábor Hojtsy)
  • Friday, 13 April, 11:00 - Automated security updates (hestenet)

Apr 07 2018

Commerce Saloon sponsor badge

Commerce Guys is joining forces with some of our Technology Partners and several contributing agencies to promote Drupal Commerce at DrupalCon Nashville from April 10-12, 2018.

We are colocating our booths to create the Commerce Saloon, your one stop shop to learn all things Drupal Commerce. Our booths will feature jam band instruments, multiple demos (including a new store theme), exclusive swag, and case studies to help you learn how teams are succeeding with Drupal Commerce.

Come try Drupal Commerce 2.x

DrupalCon Nashville is the perfect time to learn what's new by joining our week long sprint at the "Power Up" tables by the Commerce Saloon. We'll be training new contributors and working on the project together using sprint kits powered by DRUD's ddev local development environment.

We prepared the following sessions to help you learn more about Drupal Commerce and its ecosystem:

  • Contributing to Drupal Commerce (for beginners)
    Tuesday, April 10th, 12:00 PM | Commerce Saloon: "Power Up" Table | By: Matt Glaman
  • Drupal Commerce 2.x Update and Roadmap Planning (add it to your conference schedule)
    Tuesday, April 10th, 3:45 PM | Room: 203A | By: Ryan Szrama / Bojan Zivanovic
  • Marketing and Selling the Drupal Commerce Ecosystem (as seen at DrupalCon Vienna)
    Wednesday, April 11th, 10:45 AM | Commerce Saloon: "Power Up" Table | By: Ryan Szrama
  • Decoupled Drupal Commerce / REST APIs (for developers)
    Wednesday, April 11th, 3:45 PM | Commerce Saloon: "Power Up" Table | By: Matt Glaman
  • Subscriptions and Recurring Billing in Commerce 2.x
    Thursday, April 12th, 10:45 AM | Commerce Saloon: "Power Up" Table | By: Bojan Zivanovic

Hear from every Commerce Saloon sponsor

There's a lot to be said about how Drupal Commerce is making merchant and agency teams more productive, and you don't just have to take our word for it. Each Commerce Saloon sponsor has something unique to teach you about succeeding in eCommerce, and we encourage you to seek them and their sessions out:

  • Acro Media (Booth 803) - Test drive Commerce POS at their booth and hear its business case from Becky and Josh! You can also purchase (for free) a limited edition Drupal Commerce t-shirt through Acro Media's demo site.
  • Authorize.Net (Booth 911) - Authorize.Net offers several payment tools that let merchants get paid securely online. We've joined forces to demo Accept.js, their new drop-in solution for PCI compliant payment.
  • Bluespark (Booth 908) - Bluespark contributed significantly to Commerce 2.x development via their Sport Obermeyer project (check out their awesome case study) and have long promoted Drupal Commerce as a hotel booking solution.
  • Commerce Guys (Booth 809) - Stop by for a demo of Belgrade, our new default store theme for Commerce 2.x, or for a demo of Lean Commerce Reports, our first SaaS product, which offers a plug-n-play sales dashboard for Drupal Commerce.
  • Drupal Commerce Technology Partners (Booth 811) - This booth features representatives and demos from Avalara and Lockr. Talk to them about tax automation and eCommerce security, respectively.
  • MailChimp (Booth 813) - MailChimp has revitalized their approach to eCommerce email marketing and has a full integration available for Drupal in the MailChimp eCommerce module. Stop by to learn more!
  • Zivtech (Booth 909) - Zivtech has a long history of implementing eCommerce in Drupal, including joining the Drupal Commerce project in late 2009. Talk to them about using Drupal Commerce as a front-end for third party applications.

Finally, be sure to catch Promet Source's showcase session on helping The Corning Museum of Glass migrate from Commerce 1.x to Commerce 2.x, and Rick Manelius's session on the dos and don'ts of Drupal Commerce project estimation.

Schedule Time to Meet

If you're heading to DrupalCon, we'd love to chat about Drupal Commerce with you. Use our meeting request form to get on our calendar to discuss a particular project or need, or subscribe to our newsletter to be kept in the loop more generally.

Apr 07 2018


The documented process for setting up a local environment and running tests locally is, in my opinion, so complex that it can be a barrier even to determined developers.

For those wishing to locally test and develop core patches, I think it is possible to automate the process down to a few steps and a few minutes; here is an example with a core issue, #2273889 Don’t use one language’s plural index formula with another language’s string in the case of untranslated strings using format_plural(), which, at the time of this writing, results in the number 0 being displayed as 1 in certain cases.

Is it possible to start useful local development on this within 10 minutes on a computer with nothing installed except Docker? Let’s try…

Step 1: install Docker

Install and launch Docker. Everything we need, Apache web server, MySql server, Drush, Drupal, will reside on Docker containers, so we won’t need to install anything locally except Docker.

Step 2: launch a dev environment

I have created a project hosted on GitHub which will help you set up everything you need in Docker containers, without local dependencies other than Docker or any manual steps. Set it up by running:

git clone https://github.com/dcycle/drupal8_core_dev_helper.git && \
  cd drupal8_core_dev_helper && \
  ./scripts/deploy.sh

This will create everything you need: a webserver container and a database container, with your Drupal core code placed in ./drupal8_core_dev_helper/drupal; near the end of the output of ./scripts/deploy.sh, you will see a login link to your development environment. Confirm you can access that local development environment at an address like http://0.0.0.0:SOME-PORT. (The port is random.)

The first time you run this, it will have to download Docker images with Drupal, MySQL, and install everything you need for local development. Future runs will be a lot faster.

See the project’s README for more details.

In your dev environment, you can confirm that the problem exists (provided the issue has not yet been fixed) by following the instructions in the “To reproduce this problem:” section of the issue description on your local development environment.

Any calls to drush can be run on the Docker container like so:

docker-compose exec drupal /bin/bash -c 'drush ...'

For example:

docker-compose exec drupal /bin/bash -c 'drush en locale language -y'

If you want to run drush directly, you can connect to your container like so:

docker-compose exec drupal /bin/bash

This will result in the following prompt on the container:

root@<container-id>:/var/www/html#

Now you can run drush commands directly on the container:

drush eval "print_r(\Drupal::translation()->formatPlural(0, '1 whatever', '@count whatevers', array(), array('langcode' => 'fr')) . PHP_EOL);"

Because the drupal8_core_dev_helper project also pre-installs devel on your environment, you can also confirm the problem exists by visiting /devel/php and executing:

dpm((string) (\Drupal::translation()->formatPlural(0, '1 whatever', '@count whatevers', array(), array('langcode' => 'fr'))));

Whether you do this by Drush or /devel/php, the result should be the same if the issue has not been resolved: 1 whatever instead of 0 whatevers.

Step 3: get a local version of the patch and apply it

In this example, we’ll look at the patch in comment #32 of our formatPlural issue, referenced above. If the issue has been resolved since this blog post has been written, follow along with another patch.

cd drupal8_core_dev_helper
curl https://www.drupal.org/files/issues/2018-04-07/2273889-31-core-8.5.x-plural-index-no-test.patch -O
cd ./drupal && patch -p1 < ../2273889-31-core-8.5.x-plural-index-no-test.patch

You have now patched your local version of Drupal. You can try the “0 whatevers” test again and the bug should be fixed.

Running tests

Now the real fun begins… and the “fast-track” ends.

For any patch to be considered for inclusion in Drupal core, it will need to (a) not break existing tests; and (b) provide a test which, without the patch, confirms that the problem exists.

Let’s head back to comment #32 of issue #2273889 and see if our patch is breaking anything. Clicking on “PHP 7 & MySQL 5.5 23,209 pass, 17 fail” will bring us to the test results page, which at first glance seems indecipherable. You’ll notice that our seemingly simple change to the PluralTranslatableMarkup.php file is causing a number of tests to fail: HelpEmptyPageTest, EntityTypeTest…

Let’s start by finding the test which is most likely to be directly related to our change by searching the test results page for the string “PluralTranslatableMarkupTest” (this is the name of the class we changed, with the word Test appended), which shows that it is failing:

Testing Drupal\Tests\Core\StringTranslation\PluralTranslatableMarkupTest
.E

We need to figure out where that file resides, by typing:

cd /path/to/drupal8_core_dev_helper/drupal/core
find . -name 'PluralTranslatableMarkupTest.php'

This tells us it is at ./tests/Drupal/Tests/Core/StringTranslation/PluralTranslatableMarkupTest.php.

Because we have a predictable Docker container, we can relatively easily run this test locally:

cd /path/to/drupal8_core_dev_helper
docker-compose exec drupal /bin/bash -c 'cd core && \
  ../vendor/bin/phpunit \
  ./tests/Drupal/Tests/Core/StringTranslation/PluralTranslatableMarkupTest.php'

You should now see the test results for only PluralTranslatableMarkupTest:

PHPUnit 6.5.7 by Sebastian Bergmann and contributors.

Testing Drupal\Tests\Core\StringTranslation\PluralTranslatableMarkupTest
.E                                                                  2 / 2 (100%)

Time: 16.48 seconds, Memory: 6.00MB

There was 1 error:

1) Drupal\Tests\Core\StringTranslation\PluralTranslatableMarkupTest::testPluralTranslatableMarkupSerialization with data set #1 (2, 'plural 2')
Error: Call to undefined method Mock_TranslationInterface_4be32af3::getStringTranslation()

/var/www/html/core/lib/Drupal/Core/StringTranslation/PluralTranslatableMarkup.php:150
/var/www/html/core/lib/Drupal/Core/StringTranslation/PluralTranslatableMarkup.php:121
/var/www/html/core/tests/Drupal/Tests/Core/StringTranslation/PluralTranslatableMarkupTest.php:31

ERRORS!
Tests: 2, Assertions: 1, Errors: 1.

How to fix this, indeed whether this will be fixed, is a whole nother story, a story fraught with dependency injection, mock objects, method stubs… More an adventure, really, than a story. An adventure which deserves to be told, just not right now.


Apr 06 2018

Sarah Maple is the Lead Web Designer at National Nurses United (NNU) and has been at the organization for four years. Her training bridges art and web technology in order to better serve clients with both their technological needs as well as their artistic ones. She provides support with Drupal logistics and project management for the NNU website, which is a full website conversion to Drupal 8.

How would you describe your technical capabilities?

My technical prowess is limited, but I'm probably better than most people at operating computers. I dabble with Python and crunch some statistics in R when I have some free time on the weekends. I can fix the odd microwave or toaster too. I am not a developer by any stretch of the imagination though; I mostly tinker. Sometimes I make something quasi-brilliant, and then it breaks, but at least it was brilliant for a moment. Now that I've taken on the mantle of the web lead within the organization, I find myself doing more development, project management, and content administration these days.

What was your previous website like?

Our previous site used ExpressionEngine. The site was quite limited in terms of functionality, and its performance began to suffer over the years. The ExpressionEngine install was managed by the host, as part of a package they offered at the beginning of our relationship. Eventually we outgrew the capabilities of the web host after several years, and the site was no longer receiving updates to its various plugins and modules. As the site continued to age, the host mentioned that it was going to be put out to pasture and the CMS would no longer receive security updates either. That signaled to us that it was time to transition to a new platform.

Was the Drupal ship the first one that happened to sail by, or did you look into other options?

I had worked with Drupal before when I worked in marketing, so I had some experience there, albeit older experience. This was around the time of Drupal 4 and 5. While researching the latest version of Drupal, I thought it had maintained itself quite well, and even worked hard to improve its appeal to the WordPress crowd in terms of ease of install and administration. Drupal certainly stepped up and was working to cater to the needs of its users. It looked like a solid and well-maintained product. WordPress was another option for us, but we didn't know if it would be able to support the feature sets that we had in mind as well as handle all our content needs. I admit I am less familiar with WordPress, but back on Drupal 4 and 5, I remember feeling like the sky was the limit for Drupal. It could be used for a lot of different things and, even though you yourself may not be able to do something, there was help out there and people you could bring on board to help make your dreams come true.

Ultimately, I was not convinced WordPress could do that for us, and I couldn't find much in the way of knowing how I could optimize WordPress for our needs. We have a lot of content and a lot of files to manage, so I wanted to be sure that our CMS could handle all of that, and still have the flexibility to “tinker under the hood,” like doing deep dives into the templates, and not have the whole thing break down.

What features of Drupal were you looking forward to using the most?

I was particularly keen on using Views again. We are a text-heavy organization. We like to do a lot of writing, and we tend to segment our content into very discrete buckets, like by campaign or by topic. We thought we could certainly leverage the power of Views to dynamically push around more of our content and reformat it in ways that would be more meaningful and useful to our user base. Similarly, we wanted to extend and reuse various elements, like with Blocks. We wanted to be able to create a call-to-action, and then push it out to multiple pages at once, helping to create a sense of unity across this rather wide-ranging site.

With the previous site, each page had its own custom markup and we couldn't customize many of the page elements. We were looking forward to leveraging reusable elements to create a cohesive network of nurse resources and campaigns under the umbrella of the NNU site.

Besides WordPress and Drupal, did you look into any other content management systems?

We did take a peek at a few others, just to see what their capabilities were. I reviewed some forums to see what people were doing with Joomla! We also looked at the newest version of ExpressionEngine to see where it was at, but we were not suitably impressed, and it wasn't open source. Drupal and WordPress were the only two under serious consideration. We wanted to utilize something that was enterprise-level, and something we could grow into. We wanted something extensible and open source. There are a lot of content management systems out there, but not many with both a large user base and a large community to help answer your questions, like with Drupal.

What are your personal thoughts or philosophies on open source?

I believe in the “free as in speech, not as in beer” attitude toward software. Whatever gets created should be shared back. I balk at organizations and groups that fork open source projects, then fail to share their developments with the rest of the community. I also believe open source projects need resources, and I endeavor to support them in whatever way they need assistance, whether it be monetarily, or supporting those that continue to lead and build the project up.

Practically speaking, if it's free and open source, you can find people who work on it and develop it. Well-managed open source projects tend to attract a dedicated and talented community of developers and supporters, and I can support the project by hiring from that community. Part of Drupal's appeal is that, while it can be kind of a daunting CMS to learn, it's pretty easy to know if someone knows how to use it -- if they’re from the Drupal community. And I feel it’s a lot easier to identify trusted advisors and developers in Drupal than maybe those who work in WordPress.

Day to day, what is your role in working with your Drupal site?

Now that all our content has been migrated into a brand new CMS, it’s time to revisit many of the major sections of our site with an eye for redesign, remaking pages to match the upgraded styles. We’re also streamlining content to make use of Views, Blocks, Paragraphs, and other features of Drupal that will have the site running more efficiently.

What has it been like for the team to acclimate into using the new website?

There have been some growing pains with just wrapping our heads around the vocabulary. We’re still getting used to the Drupal nomenclature. We’ve also been using this as an opportunity to revise and update our workflow. Our old site was very outdated. It didn’t have version control or a secure workflow because it didn't require it and, in many cases, it wasn’t capable of it. We’re catching up though, and that process is a little painful, but very educational. We’ve been trying to follow best practices for development and security, so learning Git, using version control, securing PHP files are all internal processes that we’re working towards. Sometimes, it’s hard to distinguish between whether the difficulty is from the Drupal learning curve or from just catching up with technology in general, but acclimating to the new site has forced some very positive changes for us.

What have been some of the other positive points of using Drupal 8?

Drupal 8 has been very fast, out of the box. It’s faster than the Drupal I remember. The backend authoring features for making content seem more efficient than previous Drupal iterations. Reconfiguring and moving content around is an easier process too. We’ve been using Paragraphs on our site, and it has been a boon for us. The flexibility in being able to add accordions, tab systems, columns, and whatever else might be needed - in a modular framework - has allowed us to create a lot of new layouts and change up how we display our content within those layouts. No one is having to hand code any custom layouts inside the content itself anymore. It's definitely added to our design toolbox, and changed how we serve content while helping us stick within the bounds of our style guide.

What are some of the things Drupal 8 could maybe improve?

The biggest pain points have been making updates to core and contributed modules. The Composer workflow isn't always clear for a novice to understand. In Drupal 7, it was fairly straightforward, but Composer in Drupal 8 can be tricky. Making a security update should be one of the easier things to do, but it’s not always as accessible and doable as it was in Drupal 7. File management could also be a little easier. Managing files that are orphaned or outdated is not always intuitive.

For people new to Drupal 8 or Drupal entirely, what kind of advice would you have for them?

To those interested in Drupal: It's important to learn as much as you can. Vet the developers you work with, the people you hire, and the advice that you receive. You need to be capable of assessing the quality of what’s being provided. Drupal can do everything, but that means there are no less than 12 different ways to do the thing that you want, and if you're not careful in how you ask, it can be like dealing with a genie; you'll get exactly what you asked for, but with some caveats. It is not an install-and-forget-it kind of system. It's like a puppy; you have to train it if you want it to behave well and fetch the newspaper. I would encourage people going down this path to keep learning so you can do as much of the work yourself. Where you can’t do the work, you’ll at least have the general concepts, language, and knowledge of your system to help others help you.

Basically, stick with it. It's a powerful content management system that will not let you down if you maintain it. Drupal has the capacity to grow with you and your organization’s needs. Look at it as an investment, and invest in yourself for the long haul.

Thanks, Sarah, for your insight!

For those interested in learning more about NNU's migration from ExpressionEngine to Drupal 8, check out the Hook 42 NNU Drupal 8 case study on Drupal.org.

Apr 06 2018
Apr 06

NOTE: While I work for a company that is closely related to Drupal, the thoughts expressed here DO NOT, in any way, represent my employer.

 

When the creator of the Zen theme for the Drupal CMS chose a logo for it, they could never have imagined that this decision would cause such large confusion and a probable escalation of tension between two countries years down the line.

 

What happened:

On April 7 2018 (today), Multiple Indian Government websites, built and maintained by National Informatics Center, went down or were partially unavailable. Some of them showed a maintenance page.

They include:

* https://mod.gov.in/ (Ministry of Defence)

* Multiple others - Law, Home and Labour Ministry websites

News coverage:

* Youtube : TimesNow

* Times Now

* Hindustan Times

* NDTV

* Times of India


What does the Indian Government say?

* "National cybersecurity chief Gulshan Rai said the 10 websites hosted by the National Informatics Centre (NIC) went down after a hardware failure."

* “There is no hacking or coordinated cyber attack on website of central ministries. There was a hardware failure in the storage network system at the NIC which resulted in a number of government websites being serviced by that system going down. We are working to replace the hardware and these websites will be up soon,” said Rai.

What caused it?

* Limited information is available in the public domain, so it is hard to be certain. As of now, though, there is no indication that any site was compromised.

* While the sites that went down were Drupal sites, NIC builds most of its sites on Drupal, which explains the pattern.

* The sites were just showing a maintenance page. Nothing suggested they were compromised. A maintenance page is shown on various occasions; in this case, the MySQL servers being down, whether due to a hardware failure as the Govt claims, due to large traffic, or due to an orchestrated DDoS attack, could be the reason.

* None of the above instances (including a DDoS attack) would suggest any data being compromised.

 

The Chinese connection:

* Almost every Indian media agency attributed this to hacking by "Chinese Hackers".

* The maintenance pages of some of these sites showed the Drupal Zen theme's logo, which has a Chinese-looking (or Japanese?) character in it.

* In the context of strained relationships between China and India, all news agencies interpreted this Drupal maintenance page with a Japanese logo as "defacement by Chinese hackers"


Bad PR for Drupal:

While there is no reason to suspect Drupal was at fault, Drupal’s logo was splashed all over TV and news sites today by misinformed Indian news agencies claiming a hack by Chinese hackers.

Apr 06 2018
Apr 06
We’ve covered this in previous blog posts, but I think it’s time we came back to this and gave the contenders another look. (It's only been three years since we last covered this, so everyone has probably been waiting with bated breath for this one.) Internet culture loves to pit things against each other to see which reigns supreme, so let’s do that for these two juggernaut content management systems.

Wordpress? More like Worldpress

It is no exaggeration to say that a lot of the internet (about 28% at the time of this writing) is made up of Wordpress sites. With that sort of share, it is no surprise that most everyone has heard of this blogging-tool-turned-web-platform. Among sites built with a CMS, there is no contest as far as usage goes. Somewhere around half of all sites built with a CMS use Wordpress. You’ll find it as a suggestion on most shared hosting platforms and there are tutorials across the internet to help someone get started using it. This thing is everywhere.

Wordpress is currently on version 4.9.x and has the great reputation of making sure that most of its users are able to upgrade automatically without much threat of backwards compatibility issues. This is great from a stability standpoint. When you create a website, you probably don’t want to worry that the next update to the platform will force you to rebuild it more often than you are ready for. That’s not to say it has a perfect track record when it comes to security. Not every site has a situation that allows for the automatic updates, and even then there are thousands of plugins available that could have security holes.


Wordpress is everywhere and it has been for a few years now. It is not suitable for every web need though and that comes through the most when you need something that can’t be done by installing a few plugins and throwing on a premium theme purchased from somewhere. There are many places you can get a custom Wordpress site built, but the CMS itself isn’t well suited for sites with a lot of editors, permissions, and features that large enterprise sites might run into.

In becoming an easily accessible platform, Wordpress has precluded itself from being able to handle the scale that comes with more complex sites. Without diving into code, you can’t define a new role or give users a different set of limited permissions outside of what is already defined in the system. The same goes for the types of content you can create and the fields you will have available. Plugins can extend some of this, but in my experience their reliability track record isn’t the greatest.

Drupal is for big projects

When it comes to market share, Drupal is a sliver of the pie compared to the internet at large. What it lacks in sheer numbers it makes up for in the number of large and significant sites that use it. Some of these sites include government sites, entertainment sites, and university sites. Drupal has a reputation for being complex and heavy to run. While that is true, it isn’t necessarily a bad thing all of the time.

Drupal 8 is a bit of a rebirth for the platform. Promises of better forward compatibility with future versions mean that it will be easier to stay up to date than ever before. This was a pain point with previous versions of the platform, and the community has made it a point to improve. Drupal 8 makes life a bit easier with many more features ready to go when you install the site. You can craft a pretty good, simple site with a vanilla installation of this version.

The real magic comes out when a skilled team of developers get their hands on Drupal. This platform has always been made by developers for developers and it shows through in 8. The new object-oriented approach to the code makes it simpler for those who aren’t as familiar with Drupal to get in there and make some changes. It is extended easily with the large number of modules that add specific features to a site. These modules are put through pretty rigorous review before they are deemed stable and it makes for more secure sites overall.


Drupal’s other big draw is the workflow experience for editors and site builders. With all of the different ways you can set up a Drupal site, it is possible to have a moderated workflow between editors and whoever has the final say on published content. New editing tools include a better WYSIWYG, responsive images, and dynamic data views.

Drupal 8 is easier than it has ever been, but that doesn’t say a whole lot when you think about where it came from. What separates it from the rest of the crowd is its ability to scale to whatever size is asked of it, but that can still only happen in trained hands, and that is probably why it hasn’t taken over the market share just yet. Not every small project needs a whole development team to get it done. If you only have a few pages with some text, Drupal is going to be too much for the task. (Though it will work just fine; it’s just overkill.)

The winner is the web

There is a place on the internet for both of these platforms, and while it may seem like this is a cop-out answer to come to after approaching this topic again, there is more to it. Wordpress has established itself as a useful tool for what it does best. It allows users to create a website, and a decent one at that. It has replaced the old platforms of yore that helped build the early internet, but it isn’t the platform of choice for the largest sites that get the most traffic. The quantity of sites using Wordpress does not mean these sites get the most traffic individually. Drupal is built for scale and is ready to handle high traffic. Is it the obvious choice for every website? No, but should an enterprise-size project be shoved into a platform meant to handle every other website, or should it have its needs met specifically by something meant for that task?

MIKE OUT

Apr 06 2018
Apr 06

This is the second rendition of this topic within the Drupal community; the first time I shared my experiences and journey in this context was at Drupal Camp Sofia, in Bulgaria, in 2015. In many respects this is quite a full circle for me. I have fond memories of attending a Drupal meet-up in Utrecht (a long way from home in Kent!) in 2012, receiving a very warm welcome by the local community at OneShoe’s very eclectic offices and meeting the personality that is Michel.

Fast forward six odd years and I am stoked to be going back to Utrecht to share with a community that has been a source of inspiration for what we do at Peace Through Prosperity! I hope our work at Peace Through Prosperity serves as a source of inspiration for my fellow Drupal community members and friends.

The session at DrupalJam 2018 has been recorded and I shall add a link to the video as and when it is up. It also happens to be my eldest daughter Alvera’s second Drupal community event and first DrupalJam!! Well done! #SuperProudDad And last but not least, thank you to the DrupalJam team, to the attendees, and to my Acquia colleagues Nicky Rutten and Maartje Sampers for their time.

…………

Lastly, if you got value from what I have shared, please consider giving back by contributing to @BringPTP; you can follow, broadcast or donate.

Peace Through Prosperity (PTP) works to improve the environment for peacebuilding by nurturing prosperity in conflict affected communities. We work to alleviate poverty and secure livelihoods through empowering micro-entrepreneurs with knowledge, skills and increasing their access to income and opportunities. We support small businesses, owned/managed by vulnerable and marginalised individuals/groups in society.

Apr 06 2018
Apr 06

In my humble opinion, as a Drupal developer, contributing back to the Drupal Community is something we should love to do.

Have you ever considered a Drupal with no Views module?

Or thought about a world where there is no Drupal at all? Just think of how much extra time you would be spending writing and fixing code for each individual project. Or how much more difficult it would be for a developer (or site builder) to finish a job on time!

Luckily for us, these days we have solved the issue of time-consuming development: the answer is open source, the answer is Drupal. Thanks to collaborative contributions, Drupal is a quality, world-leading resource. I feel excited by the opportunity to get involved and contribute back to Drupal open-source projects, don’t you? The quantity of your contribution doesn’t matter; even your digital experience or expertise isn’t important. Big or small, all that matters is whether you are able to give something back or not.

Once willing to contribute, we all face the same question: how can I start my Drupal contribution?

The simple answer: check for the next Drupal sprint happening near you, add it to your calendar and get to the sprint! Once there, you can find mentors and, most importantly, ask questions! Some people might say: 'I am not writing code any more' or 'I am not a developer'. Yet they also ask:

But I am using Drupal, so is there a way I can contribute?

Well, there is plenty of room for you to get involved. Here are just some of the ways I am aware of:

  • Register on Drupal.org as a user
  • Confirm as a user on Drupal.org
  • Tell someone about Drupal - spread the word!
  • Join the Drupal Association
  • Attend a Drupal Association meeting
  • Improve documentation - even if that’s just correcting a spelling mistake
  • Marketing - write blog posts, articles, organise events
  • Write Case Studies - explain what Drupal can achieve
  • Follow and share Drupal's social media
  • Mentoring 
  • List someone as a mentor on your Drupal.org profile
  • Speak at Drupal events
  • Test module patches (bug fixes) and quality assurance
  • Report an issue
  • Report spam users on Drupal.org
  • Take and share Drupal-related photographs
  • Organise Drupal events, like Meetups, Sprints and Camps
  • Sponsor a venue for Drupal events
  • Host the reception desk at Drupal events
  • Help out in the session rooms
  • Fund or Sponsor Drupal events

Again, it’s not a matter of how we contribute to Drupal; what’s important is to ask yourself: 'Am I giving back to Drupal?' Over the past fifteen years, Drupal has celebrated 8 major releases, and the latest version is incomparable to the first. All of this is made possible by our many contributions. So whatever your contribution may be, it’s very important to Drupal.

Title image by Cafuego on Flickr

Apr 06 2018
Apr 06

With popularity comes trouble... in this case meaning: security vulnerabilities and risky over-exposure to cyber threats. And this can only mean that securing your website, running on the currently third most popular CMS in the world, calls for a set of Drupal security best practices for you to adopt.

And to stick to!

There's no other way around it: a set of strategically chosen security measures, backed by a prevention-focused mindset, pave the shortest path to top security.   

Stay assured: I've selected not just THE most effective best practices for you to consider adopting, but the easiest to implement ones, as well.

Quick note: before I go knee deep into this Drupal security checklist, I feel like highlighting that:
 

  • Drupal still has a low vulnerability percentage rate compared to its market share
  • the largest share of Drupal's vulnerabilities (46%) is generated by cross-site scripting (XSS)
     

And now, here are the tips, techniques and resources for you to tap into and harden your Drupal site's security shield with.
 

1. The Proper Configuration Is Required to Secure Your Drupal Database 

Consider enforcing some security measures at your Drupal database level, as well.

It won't take you more than a few minutes and the security dangers that you'll be safeguarding it from are massive.

Here are some basic, yet effective measures you could implement:
 

  • go for a different table prefix; this will make it trickier for an intruder to track your tables down, thus preventing possible SQL injection attacks
  • change your database's name to a less obvious, harder-to-guess one
     

Note: for changing your table prefix you can either navigate to phpMyAdmin, if you already have your Drupal site installed, or do it right on the setup screen (if it's just now that you're installing your website).
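For illustration, here's roughly what a custom table prefix looks like in settings.php; the connection values below are placeholders, not recommendations:

$databases['default']['default'] = [
  'database' => 'example_db',      // Placeholder database name.
  'username' => 'example_user',    // Placeholder credentials.
  'password' => 'example_password',
  'host'     => 'localhost',
  'driver'   => 'mysql',
  // A non-obvious prefix makes your table names harder to guess.
  'prefix'   => 'zq1_',
];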
 

2. Always Run The Latest Version of Drupal on Your Website

This is the least you could do, and neglecting your updating routine can have a significant negative impact on your Drupal site.

Do keep in mind that:
 

  1. it's older versions of Drupal that hackers usually target (since they're more vulnerable)
  2. the regularly released updates are precisely those bug fixes and new security hardening features that are crucial for patching your site's vulnerabilities.
     

Why should you leave it recklessly exposed? Running on an outdated Drupal version, packed with untrusted Drupal modules and themes?

Especially since keeping it up to date means nothing more than integrating 2 basic Drupal security best practices into your site securing “routine”:
 

  1. always download your themes and modules from the Drupal repository (or well-known companies)
  2. regularly check if there are any new updates for you to install: “Reports” → “Available Updates” → “Check manually”
     

 

3. Make a Habit of Backing Up Your Website

And here's another one of those underrated and too often neglected Drupal security best practices!

Why wait for a ransomware attack to realize its true importance... “the hard way”?

Instead, make a habit of regularly backing up your website since, as already mentioned:

There's no such thing as perfection when it comes to securing a Drupal site; there's only a hierarchy of different “security levels” that you can activate on your site.

And backing up your site, constantly, sure stands for one of the most effective measures you could apply for hardening your Drupal website.

Now, here's how you do it:
 

  1. make use of Pantheon's “one-click backup” functionality
  2. test your updates locally using MAMP or XAMPP or another “kindred” software
  3. harness the Backup and Migrate module's power, currently available only for Drupal 7
  4. export your MySQL database and back up your files “the old way”... manually
     

There, now you can stay assured that, if/when trouble strikes, you always have your backup(s) to retrieve your data from and get back “on your feet” in no time!
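If you go for the manual route from the list above, a minimal sketch could look like this (the database name, credentials and paths are placeholders for your own values):

mysqldump -u example_user -p example_db > backup-$(date +%F).sql
tar -czf files-backup-$(date +%F).tar.gz sites/default/files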
 

4. Block Those Bots That You're Unwillingly Sharing Your Bandwidth With

No need to get all “altruistic” when it comes to your bandwidth!

And to share it with all kinds of scrapers, bad bots, and crawlers.

Instead, consider blocking their access to your bandwidth right from your server.

Here's how:

Add the following code to your .htaccess file and block multiple user-agent files at once:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^.*(agent1|Wget|Catall Spider).*$ [NC]
RewriteRule .* - [F,L]

Or use the BrowserMatchNoCase directive as follows:

BrowserMatchNoCase "agent1" bots
BrowserMatchNoCase "Wget" bots
BrowserMatchNoCase "Catall Spider" bots

Order Allow,Deny
Allow from ALL
Deny from env=bots

Alternatively, if you use KeyCDN, its bot-blocking feature can prevent those malicious bots from stealing your bandwidth!



5. Use Strong Passwords Only: One of the Easiest to Implement Drupal Security Best Practices

More often than not “easy” doesn't mean “less efficient”. 

And in this particular case here, simply opting for a strong username (smarter than the standard “admin”) and password can make the difference between a vulnerable and a hard-to-hack Drupal site.

For this, just:

Manually change your credentials right from your admin dashboard: “People” → “Edit” → “Username”, while relying on a strong password-generating program (KeePassX or KeePass).
 

6. Use an SSL Certificate: Secure All Sensitive Data and Login Credentials

Would you knowingly risk your users' sensitive data? Their card information, let's say, if it's an e-commerce Drupal site that you own?

And how about your login credentials?

For this is what you'd be doing if — though you do recognize the importance of using an SSL certificate —  you'd still put this measure at the back of your list of Drupal security best practices.

In other words, by running your site on HTTPS (preferably on HTTP/2, considering all the performance benefits that it comes packaged with) you'll be:
 

  • encrypting all sensitive data that's being passed on, back and forth, between the server and the client
  • encrypting login credentials, instead of just letting them get sent, in plain text, over the internet.
     
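Assuming your certificate is already installed and you're on Apache, one common way to force HTTPS is a rewrite in the same .htaccess file used above for bot blocking; a sketch, to be adjusted to your own server setup:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]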

7. Use Drupal Security Modules to Harden Your Site's Shield

For they sure make your most reliable allies when it comes to tracking down loopholes in your site's code or preventing brute-force attacks.

From:
 

  • scanning for vulnerabilities
  • monitoring DNS changes
  • blocking malicious networks
  • identifying the files where changes have been applied
     

… and so on, these Drupal modules will be “in charge” of every single aspect of your site's security strategy.

And supercharging your site with some of the most powerful Drupal security modules is, again, the easiest, yet most effective measure you could possibly enforce.

Now speaking of these powerful modules, here's a short selection of the “must-have” ones:
 

  • Password Policy: enables you to enforce certain rules when it comes to setting up new passwords (you even get to define the frequency of password changes)
  • Coder: runs in-depth checks, setting your code against Drupal's best practices and coding standards
  • Automated Logout: as an admin, you get to define the time limit for a user's session; he/she will get automatically logged out when time expires
  • SpamSpan Filter: enables you to obfuscate email addresses, thus preventing spambots from “stealing” them
  • Login Security: deny access by IP address and limit the number of login attempts
  • Content Access: grant permission to certain content types by user roles and authors
  • Hacked!: provides an easy way for you to check whether any new changes have been applied to Drupal core/themes
  • Security Review: it will check your website for those easy-to-make mistakes that could easily turn into security vulnerabilities
     

 

8. Implement HTTP Security Headers

Another one of those too-easy-to-implement, yet highly effective Drupal security best practices to add to your Drupal security checklist:

Implementing (and updating) HTTP security headers

“Why bother?”

Because:
 

  1. first of all, their implementation requires nothing more than a configuration change at the web server level
  2. their key role is letting the browsers know just how to handle your site's content
  3. … thus reducing the risk of security vulnerabilities and brute force attacks
     
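As a sketch, here are a few commonly used headers set via .htaccess, assuming Apache with mod_headers enabled; tune the values to your own site's needs:

<IfModule mod_headers.c>
  Header always set X-Frame-Options "SAMEORIGIN"
  Header always set X-Content-Type-Options "nosniff"
  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</IfModule>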

9. Properly Secure File Permissions

Ensure that your file permissions for:
 

  • opening
  • reading
  • modifying them
     

… aren't too dangerously loose.

Since such negligence could easily turn into an invitation for “evil-minded” intruders! 

And it's on Drupal.org's dedicated page on securing file permissions and ownership that you can find more valuable info on this apparently insignificant, yet extremely effective security measure.
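As a rough sketch based on the commonly recommended defaults (run from your Drupal root, and double-check against that Drupal.org page before applying anything):

find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chmod 444 sites/default/settings.php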
 

10. Restrict Access To Critical Files 

Told you this was going to be a list of exclusively easy-to-implement Drupal security best practices.

Blocking access to sensitive files on your website (the update.php file, the install.php file, the authorize.php file, etc.) won't take you more than a few minutes.

But the danger you'd avoid, a malicious intruder accessing core files on your Drupal site, is way too significant to overlook.
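A minimal .htaccess sketch that denies direct access to a few of those sensitive files (Apache 2.4 syntax; adjust the file list, and remember to lift the block temporarily whenever you actually need to run updates):

<FilesMatch "^(install|update|authorize)\.php$">
  Require all denied
</FilesMatch>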
 

END of list! These are probably the easiest steps to take for securing your Drupal site.

What does your own list of Drupal security tips, techniques and resources to tap into look like?

Apr 06 2018
Apr 06

We’re thrilled to announce that Teaching Tolerance, a program of the Southern Poverty Law Center, is up for a Webby! Their mission is to provide educators with the tools to create a kind and inclusive school climate. They do this through a library of articles and resources that they’ve turned into teachable materials via a Learning Plan Builder, and other classroom activities. It’s something that we feel is especially important work, now more than ever.

This is a project that meant so much to everyone who touched it, and it was a true partnership every step of the way for both our teams (Tolerance and ThinkShout). It certainly speaks to the passion that was put into it from all angles, and it’s an honor to be recognized for this work.

Our Case Study will give you the full scope of work. But for a quick summary: In redesigning their website, ThinkShout set out to turn the wealth of articles and resources Tolerance had into teachable materials, and did so by creating a guided Learning Plan Builder that makes all content classroom-ready. Tolerance grants free access to thousands of resources – from video to essays to proven teaching strategies – and everything within that catalogue is now actionable. We also took on the challenge of migrating their content from two older Drupal 6 and 7 sites into one new Drupal 8 site.

The result? Since launching summer of 2017, Tolerance.org has seen improvements across the board:

  • Pages per session up 21%
  • Session duration up 27%
  • Bounce rate decreased by 8%
  • Returning visitors up by 3%
  • Registrations nearly doubled (from 19,000 to 36,000)

Here’s where you come in: our nomination for a Webby means we need the people’s voice (aka VOTES) to actually win. Voting ends April 19th!

Vote for Tolerance.org in the Webby’s

Personally, we can’t think of anything more critical at this time than the work Tolerance.org is doing to ensure the next generation is primed to participate in our democracy. And winning the Webby will certainly help them gain visibility and advance their mission even further.

P.S. Travel Oregon also made it as an honoree in the Travel category, and they were up against some stiff competition! You can see their case study here.

Get In Touch

Questions? Comments? We want to know! Drop us a line and let’s start talking.

Apr 06 2018
Apr 06

The advances in technology have brought unprecedented growth to the E-Commerce industry, which has become a major target for cyber crime. Thus, it becomes necessary to address security measures for websites, as any data breach leads to the loss of sensitive information along with monetary losses. This not only threatens the reputation of the organization but also leads to mistrust among customers. Compared to leading organizations, smaller firms are affected more as they have to suffer substantial losses.

Full security over the web can’t be attained, as hackers are devising new plans every day to access consumer data. But threats can be minimised by following certain security measures.

Based on our experience of working on several E-Commerce websites built using the Drupal framework, in this article we will talk about various safety parameters and how these risks can be addressed.

Sensitive Data Exposure

The PCI DSS recommends that an E-Commerce application shouldn’t store unnecessary consumer information, as it may be prone to cyber attacks. Such information should be stored in a confidential manner, using strong encryption.

In Drupal, stored account passwords are protected and hashed based on the Portable PHP Password Hashing Framework. Community-contributed Drupal code offers solutions to encrypt sensitive data, whether at rest or in transit.

XSS (Cross-site Scripting)

XSS attacks are a type of injection, where infected scripts are injected into trusted websites. When these scripts are received by browsers, they can lead to data breaches.

By default, Drupal core filters any untrusted user's content to remove dangerous elements. Safer defaults help mitigate the errors that can lead to XSS vulnerabilities when anonymous content is identified.
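As a small illustration, Drupal 8 core ships filtering utilities you can call directly; the input string here is just a made-up example:

<?php
use Drupal\Component\Utility\Html;
use Drupal\Component\Utility\Xss;

// Hypothetical user-supplied input.
$comment = '<script>steal()</script><strong>Great product!</strong>';

// Strip everything except a small set of safe markup tags.
$safe_markup = Xss::filter($comment);

// Or escape the string entirely for plain-text contexts.
$escaped = Html::escape($comment);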

Weak Access Control

End users can’t be given the kind of access that administrators have over the site; such functionality is often concealed using JavaScript. If that JavaScript becomes accessible to all, data can easily be breached. Additional security measures, like two-factor authentication through OTPs and e-mails, should also be implemented to limit such access.

Access controls in Drupal are protected by a powerful permission-based system which checks for proper authorization before any action is taken. With a number of modules, access checking can be tightly integrated into the entire menu-rendering and routing system. This enables the protection of visibility of navigation links and pages by the same system that handles incoming requests.

Security Misconfiguration

Although contemporary E-Commerce applications come with an extra layer of security, any loophole in the configuration can make them vulnerable.

Inefficiencies that lead to misconfiguration are corrected through usability testing and fixes that are recommended for inclusion to Drupal core. There are several contributed projects that conduct automated security review or implement further secure configurations.

Broken Authentication and Session Management

Almost every webpage users visit caches their data and links a unique session ID to them while displaying the results they seek. If they copy and share the URL of the website with another individual, they can also accidentally end up giving away their session ID. If it is replicated, hackers can easily get into users’ sessions and access their information.

User accounts and authentication are managed by Drupal core on the server to prevent a user from escalating authorisation. Passwords are hashed using the PHP hashing framework, and existing sessions are destroyed upon login and logout.

Unvalidated Redirects and Forwards

It is advisable for users to follow only authenticated redirects and certified links present in the comments section of a web page. This is because unauthenticated redirects and links can lead you to a malicious page, created by an attacker to gain access to important administrative functions of the webpage.

Internal page redirects cannot be used to circumvent Drupal's integrated menu and access control system. Drupal protects a site against automatic redirection via anonymous links to off-site URLs which could be used in a phishing attack.

Security is an essential part of E-Commerce development and should be one of the fundamental objectives kept in mind. It should be part of the development practice from the project planning stage itself. If you are using Drupal to build your E-Commerce platform, it is as secure as any other pure E-Commerce enterprise framework available in the market. And in the new age of commerce, where content and community are part of your marketing effort, Drupal scores over most of the other options available. Just to substantiate: the site of Magento Commerce, one of the leading enterprise commerce frameworks, itself runs on Drupal.

If you are building an E-Commerce site using Drupal and need assistance, please do contact us.

Apr 06 2018
Apr 06

We recently relaunched the updated front-end for Zürich Tourismus. In this blog post, I want to highlight some user experience improvements that we added to the existing Drupal 7 website using React. Enter the Zuerich.com filter pages.

The Zuerich.com filter pages are highly interactive and allow the site visitor to explore data in a synchronized list & map view. We also applied the same concept to the different filter pages for Accommodation, Events and Restaurants.

Instant, Client-Side filtering

A key improvement to the filter pages is that they allow users to quickly explore and filter the data. The filter section immediately updates the corresponding results list according to the selected criteria. This works well for datasets below 1000 items, which are all accessed together and filtered using React on the client side.

In traditional Drupal implementations, we would have the entire page reload for every filter click event, or, if we were to use AJAX, the entire results section would reload and require a round-trip to the server which slows down the user experience. With the new React-based approach, we were able to greatly improve the interaction speed. The search box also instantly filters the items for every character that the user enters.

Zuerich.com Filter Pages based on React and Drupal 7

Proximity Filtering

A really cool feature on top of the instant client-side filtering is the “Nearby me” search. It allows the user to either select their own geolocation or select from some popular points of interest. For tourists that aren’t yet familiar with the city, being able to choose between various important locations, such as the Zurich airport or main station, helps in their orientation.

When a point of interest has been selected, the map instantly switches to “Filter list by map” mode which only displays the results that correspond to the current map window. As the user zooms in or out, the map automatically updates the results list.

Zuerich.com Proximity Search

Keeping Multiple Viewports in Sync

Keeping multiple viewports easily in sync is one of the main advantages of using React to implement the filter pages. The state of the dataset and filters can be managed centrally and will automatically update the different views, such as the filters themselves, the results list as well as the markers on the map. By moving around the map, the user is also able to filter the list results to show only what is available in the current viewport which helps narrow down their search geographically.

Zuerich.com Keep in Sync

Unlimited, Interactive, React-based Filter Pages

The Zuerich.com filter pages are built using React components within the existing Drupal 7 infrastructure that drives the main website. We fetch the data from the backend using custom JSON feeds and render the filters, the results lists and map views using React. By doing so, we significantly improved the actual and perceived performance of the user interactions with the filters and map view. The same concept has been applied to different parts of the website. There are many more of these filters pages in addition to the ones used for Accommodation, Events and Restaurants.

In the back-end, content editors are able to create custom filter pages using a special content type form. Filter settings and sort or proximity search options can be configured accordingly. In the React-based front-end, we then show the adjusted set of filter options and adjust the list views slightly, e.g. to show star ratings for hotels.

Apr 06 2018
Apr 06
Getting Started With Drupal's Webform Module

One of our club members asked us how to create a survey form in Drupal 7. They wanted to achieve this without a need for custom coding.

The Webform contrib module is the perfect tool for the job. In this tutorial, you will learn how to use this module to survey what peanut butter, jelly and bread your site visitors prefer.

Webform is a module for making forms and collecting information from users in Drupal.

After a submission, you can send users a thank-you email as well as sending a notification to administrators.

Results can be exported into Excel or other spreadsheet applications. Webform also provides some basic statistical review and has an extensive API for expanding its features.

If you need to build a lot of customized, one-off forms, Webform is a more suitable solution than creating content types and using the CCK or Field modules.

Step #1. Download the Webform, CTools, Views, and Token Modules

Download the Webform module

Make sure you download the latest versions of the modules for Drupal 7. They may have changed since we wrote this, so be sure to check.

Step #2. Install the Webform, CTools, Views, and Token Modules


Note: if you don't see the Install new module link, please go to Modules and enable the Update manager module.

Step #3. Enable the Webform, CTools, Views, and Token Modules


  • Go to the Modules page.
  • Scroll down to Webform and click the checkbox.
  • Click Save configuration.

Step #4. Access the Form Fields


  • Go to Structure > Content types.


  • Scroll down to Webform.
  • Locate the edit, manage fields, manage display and delete links.


  • Click Edit.

You will be taken to the Webform module management page:


This is not the place where you will be creating forms. Editing here is exactly the same as editing Fields in Content Types, which is a way to make fields available to this content type.

From here you can edit fields, manage existing fields, the display and the comment functions by clicking on the appropriate tabs.

But creating the actual form is done by adding content in the same way you would add an article.

You won't need to do much here but review all your choices and see if there is anything you feel you must change.

The default values will work for the purposes of our demonstration. After you create your first form and understand the module you might want to revisit the configuration.

Now that the module is installed and the configuration is checked, you can begin building your survey form.

Step #5. Create a Web Form by Adding It as Content


  • Go to Content > Add Content > Webform.


  • Give it a title and make the decisions on all basic options.
  • Save this with the Save button at the bottom of the page.

Step #6. Add the Form Components


Now you will see the controls for creating and editing the rest of the form elements.

Let's start adding Form components using the WEBFORM tab.


  • Make sure you are under the Webform tab.
  • Enter Name instead of the New component name text.
  • Choose the textfield.
  • Click Add.

We are surveying registered users, so we are going to automatically fill in their username.

A name is a basic text field, but we want our registered users' names to show up in the textfield, so we're going to make use of Tokens. Using Tokens is just an optional feature.


  • Enter the %username token value in the Default value field. This will pull the username from the database and fill it in automatically.

Note: If you don't see the TOKEN VALUES, you probably don't have the Token module installed. You only need the %username token if you want to fill in the default values taken from the database.

If this is going to be a blank field that the user will fill in when they visit the page, you can just leave the Default value blank. I used the token value here to illustrate the possibilities available to you.

  • Scroll down the rest of the page and make any configuration selections you need.
  • Click Save component at the very bottom of the page.

Step #7. Create the Select options Fields


Now let's create our first select field form component.

  • Enter Bread as the field label.
  • Choose the Select options type from the drop-down. 
  • Click Add. 
  • On the next screen, enter the options as described below:


  • Go to Options and create Key-Value pairs.

These pairs consist of a machine-readable key and a plain language value separated by a "|" - This character is called a "pipe" and you can find it by holding shift while pressing the backslash key "\" key on most keyboards.

Key-value pairs MUST be specified as "safe_key|Some readable option". Use of only alphanumeric characters and underscores is recommended in keys. Enter only one option per line.
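For example, the Bread options for this survey might be entered like this (the exact keys are up to you, as long as they follow the safe-key rule above):

white|White bread
wheat|Whole wheat
rye|Rye
sourdough|Sourdough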

  • Save the component.
  • Repeat this step for the flavor of the jelly and type of peanut butter.


When you are creating lists, the default type is radio buttons. If you want checkboxes or a list box the choices are farther down on the page.

You can also set the field as mandatory or optional. If you click the "Multiple" checkbox at the top, the list will appear as checkboxes.

If you choose the Listbox under DISPLAY you will have a drop-down box. Selecting Multiple and Listbox will allow users to make multiple selections from a drop-down box.

When you create the Jelly type, add Other as a choice. Then add a text field so people can write in their suggestions.

Step #8. Add a textarea Field


  • Add a textarea field. It will be a large area used for entering more extensive written responses.

Other field types you might want to add for your purposes are an E-Mail field or Date field. You can choose these types from the drop-down Type box.


Step #9. Add a Preset Field Type for US States


There is a convenience feature you may want to use. You can create pre-built option lists and add them to your form. The module comes with several default lists. 

  • Add one for the US States to see how this looks on your form.
  • Label it State.
  • Choose the Select Options type. 


  • Choose the US States from the drop down.
  • Click Save Component.

Step #10. Check Your Survey Form


At this point, your form will include all of the components you have added so far.

You can change the locations of the descriptions by clicking the WEBFORM tab and editing each item and making different configuration selections.

  • Order the form fields the way you like, and make sure all the questions and fields are correct. You can use drag and drop to move form elements to different positions.

Next, let's configure the E-Mail Option.


  • Go back to the WEBFORM tab > E-mails tab.
  • Fill in an e-mail address. 
  • Click Add.

Good job. You just set the email address to receive an email when the form is submitted. If you wish, you can add multiple e-mail addresses.


There are lots of choices to make. Be sure you check every one of them so they are correct for your form. Be sure to save the changes.

Let's move on to the last step.

Step #11. Configure the Form Settings


  • From the WEBFORM tab click the Form settings sub-tab at the top of the page.
  • Check and modify settings as needed. 
  • Save your changes.

Step #12. Test Your Form

  • Now publish your form exactly like you would publish any other content item on your site.
  • Log out of your site.
  • Fill in our survey form as an anonymous user.
  • Now log in back to your site.

The Results tab

  • Click the Results tab.

Click the view link

  • On the next screen, click the view link.

The Submission screen

You will see the Submission #... screen. Job done! If you wish, you can also run a test as a logged-in user.

Apr 06 2018
Ana
Apr 06

There will be a lot of sessions at DrupalCon Nashville. That's nothing new, to be fair. DrupalCons are the biggest Drupal events with the most Drupal sessions, so you can’t attend all of the sessions you would want to. Therefore, we have made a short list of the business sessions you don't want to miss. We sure won't. 

Thursday, April 10

Mark Shropshire, Open Source Security Lead at Mediacurrent  

On May 25, 2018, the GDPR will come into force - a data security legislation which allows individuals to control how their personal data is used by companies. The method of collecting and storing personal contacts of (potential) clients will change greatly. In this session we will get a practical interpretation of the GDPR; the lecturer will answer how to determine if you are at risk for compliance, what happens with security, and what the impact is on data, analytics, and personalization strategies.

Wednesday, April 11

Nelson Harris, Business Development Strategist at Elevated Third
Joe Flores, Senior Developer at Elevated Third

The future of Drupal is ambitious digital experiences, as Dries himself said. But that kind of digital experience comes with a great responsibility to master Drupal development services. Many people say that Drupal is “hard”. The truth is, Drupal is a complex platform that needs to be handled by skilled experts. Therefore it's not for building basic marketing sites, but complex, ambitious ones. In this session, the lecturers will answer what kind of projects Drupal is for, how Elevated Third has moved from thinking of Drupal development as a commodity to a consultative service, and how to make sure Drupal is being used to its full extent.

Tuesday, Apr 10

Alanna Burke, Developer at Chromatic

Women in tech is a topical issue in modern society. In a culture of harassment, what can companies do, since there are not enough women in tech? Why do some companies have no issue recruiting female employees? How can we invite more women into our midst, and what repels them? Alanna Burke will talk about policies and benefits that may be more inclusive, such as childcare, company culture, etc.

Wednesday, Apr 11

Amy Shropshire, Founder and Principal of CASK Communications

W. Edwards Deming, a statistician with a PhD from Yale, has an award in Japan named after him. He focused on using statistics not only to provide metrics for measuring performance but to create a culture of quality in an organization. In this session, Amy Shropshire will guide us through Deming’s Red Bead Experiment, which shows that in most cases employees don't have control over the metrics on which they are graded. Later we will overview Deming’s key principles, the 14 Points of Management, see examples of them in real-world action, and discuss how to implement those principles in our organizations.

Tuesday, Apr 10

Chris Free, Partner at Chromatic

The path to running a successful agency is anything but easy. You have many obstacles in the way before you can start the actual work. Chris Free has a decade of experience of starting an agency, with some misfires and guesswork along the way. He is willing to share with us all the WHAT and WHY he learned through the process that led him to become a successful agency owner.

 

Tuesday, Apr 10

Daniel Schiavone, Partner/Technologist at Snake Hill Web Agency

Drupal growing steadily is good news, but it brings something else as well. The trend shows that bigger companies with stronger competencies are favoured, which can bring a struggle to smaller companies. Daniel Schiavone will discuss with us how small agencies can survive and how the release of Drupal 8 will affect them. 

Tuesday, Apr 10

Chris Teitzel, Founder/CEO at Cellar Door Media
Ben Stoffel-Rosales, Partner Manager at Pantheon

Chris Teitzel is the CEO of Cellar Door Media, who saw a need for API and encryption key management as a service - the result was Lockr, the first hosted API & encryption key management for modern CMSs like Drupal and WordPress. We will see his experience as a case study for building a product as an agency. We will get answers to questions like how to grow a product within an agency, and how to build a strategic partnership to build one. 

As a Partner Manager at Pantheon, Ben Stoffel-Rosales has worked with many agencies who have built and sold Drupal distributions and paid plug-ins to create steady streams of revenue that add to their business goals. In this session, we will learn how Drupal creates an opportunity to build a new product. How do we attract new clients on the one hand, and maintain a good relationship with our technology partners on the other?

Ashleigh Thevenet, Chief Operating Officer at Bluespark

Ashleigh Thevenet is a fan of spreadsheets, and when she listened to a session at DrupalCon LA by Sean Larkin called “Scaling your business starts with the right spreadsheets: performance metrics” she was inspired and decided to adopt and adapt them to fit her needs. In this session she will take a closer look at operations-specific metrics and will walk us through the tools she uses daily - with them she answers questions like: will the team be able to complete the work in time? How many people does she need to assign to a certain project?

Photo by Boris Baldinger
Apr 05 2018
Apr 05

Welcome to Stanford Drupal Camp 2018!

Submitted by Sara Worrell Berg on April 5, 2018 - 3:41 pm

Stanford Drupal Camp 2018 is already upon us!

The ninth annual Stanford Drupal Camp will be hosted at the Stanford Law School on April 6-7, 2018.

https://drupalcamp.stanford.edu/

Our Stanford Drupal Camp emphasizes introductory sessions for beginners, as well as use cases of Drupal in higher education. Those new to Drupal will be particularly interested in the events on Friday, whereas experienced Drupallers (yes, we spell it with two "L"s at Stanford) may be more interested in Saturday's program. Learn more about the sessions and log in to start your schedule planning.

As always, it’s free and open to the public.

If you have any questions, please contact [email protected].

Hope to see you there!

Apr 05 2018
Apr 05

For a recent project, we were tasked to consume the client's internal data from a custom API. This scenario was lucky for us: the API provides a total item count of about 5000 but, when queried with a start date, it provides all revisions of items between then and now. The premise was that the data was to be downloaded at regular intervals, so that content editors didn't need to copy and paste to keep product information up to date. Updates to their external dataset would be done at a steady pace, like a handful of times a week, and would include a low number of changes, around 50 to 100 items a change. Since Drupal is known to be cumbersome with content saving, the idea of pulling in and saving data for 5000 nodes in one go during cron didn't seem feasible. So, I premised that we could do a full import via an administration page while cron keeps the data up to date, utilizing the cron queue with help from Ultimate Cron. For the sake of this blog, the API will be referred to as the Iguana API, the products will be Tea, and the client, the Iguana Tea Company.
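As a rough illustration of that cron premise, a minimal queue worker plugin might look something like the sketch below; the plugin ID, class name and processing logic here are hypothetical, not the project's actual code:

<?php

namespace Drupal\iguana\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes queued Tea items during cron runs.
 *
 * @QueueWorker(
 *   id = "iguana_tea_process",
 *   title = @Translation("Iguana Tea Processor"),
 *   cron = {"time" = 30}
 * )
 */
class IguanaTeaProcessWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // $data would hold one downloaded Tea record (keyed by GID) to be
    // turned into a node; the actual save logic is omitted here.
  }

}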

Initial Infrastructure

So, first I stubbed out administrative pages consisting of an overview page, an import form, and a settings form. The overview will be used to display the current health of the API. The import form will trigger a batch job to import and save all of the API's Tea data. The settings form is for configuring access to the API and such.

drupal/modules/custom/iguana/iguana.routing.yml:

iguana.overview:
  path: '/admin/config/services/iguana'
  defaults:
    _controller: '\Drupal\iguana\Controller\IguanaOverviewController::showOverview'
    _title: 'Iguana API Status Report'
  requirements:
    _permission: 'iguana tea import'
  options:
    _admin_route: TRUE

iguana.tea_import:
  path: '/admin/config/services/iguana/tea-import'
  defaults:
    _form: '\Drupal\iguana\Form\IguanaTeaImportForm'
    _title: 'Iguana API: Tea Import'
  requirements:
    _permission: 'iguana tea import'
  options:
    _admin_route: TRUE

iguana.configuration:
  path: '/admin/config/services/iguana/config'
  defaults:
    _form: '\Drupal\iguana\Form\IguanaConfigurationForm'
    _title: 'Iguana API Configuration'
  requirements:
    _permission: 'iguana admin config'
  options:
    _admin_route: TRUE

Also, for a good administrative user experience, I added menu links and tabs:

drupal/modules/custom/iguana/iguana.links.menu.yml:

iguana.overview:
  title: Iguana API Status
  route_name: iguana.overview
  description: 'Configuration & information for the Iguana API integration.'
  parent: system.admin_config_services

iguana.tea_import:
  title: Tea Import
  route_name: iguana.tea_import
  parent: iguana.overview

iguana.configuration:
  title: Configure
  route_name: iguana.configuration
  parent: iguana.overview

drupal/modules/custom/iguana/iguana.links.task.yml:

iguana.overview:
  title: API Status
  route_name: iguana.overview
  base_route: iguana.overview

iguana.tea_import:
  title: Tea Import
  route_name: iguana.tea_import
  base_route: iguana.overview

iguana.configuration:
  title: Configure
  route_name: iguana.configuration
  base_route: iguana.overview

From previous experience, I designed this process to consist of two separate operations: save the API data locally to a database table, then extract the downloaded data to Drupal nodes. It was explicitly defined that each item was uniquely identified by the gid (GID) field. So, the database table will be used for storing the raw API data keyed by the GID field.

drupal/modules/custom/iguana/iguana.install:

<?php
 
use Drupal\Core\Database\Database;
 
/**
 * Implements hook_schema().
 */
function iguana_schema() {
  $schema['iguana_tea_previous'] = [
    'description' => 'Preserves the raw data downloaded from the Iguana API for comparison.',
    'fields'      => [
      'gid' => [
        'description' => 'The primary unique ID for Iguana Tea data.',
        'type'        => 'int',
        'size'        => 'big',
        'not null'    => TRUE,
        'default'     => 0,
      ],
      'data' => [
        'description' => 'The full data of the Tea.',
        'type'        => 'blob',
        'size'        => 'big',
      ],
    ],
    'primary key' => ['gid'],
  ];
 
  $schema['iguana_tea_staging'] = [
    'description' => 'Stores the raw data downloaded from the Iguana API.',
    'fields'      => [
      'gid' => [
        'description' => 'The primary unique ID for Iguana Tea data.',
        'type'        => 'int',
        'size'        => 'big',
        'not null'    => TRUE,
        'default'     => 0,
      ],
      'data' => [
        'description' => 'The full data of the Tea.',
        'type'        => 'blob',
        'size'        => 'big',
      ],
    ],
    'primary key' => ['gid'],
  ];
 
  return $schema;
}

Now, to connect to the Iguana API, I needed to know the credentials to access it, so that meant building out the configuration form.

drupal/modules/custom/iguana/src/Form/IguanaConfigurationForm.php:

<?php
 
namespace Drupal\iguana\Form;
 
use Drupal\Core\Form\ConfigFormBase;
use Symfony\Component\HttpFoundation\Request;
use Drupal\Core\Form\FormStateInterface;
 
/**
 * Defines a form that configures forms module settings.
 */
class IguanaConfigurationForm extends ConfigFormBase {
 
  /**
   * {@inheritdoc}
   */
  public function getFormId() {
    return 'iguana_admin_settings';
  }
 
  /**
   * {@inheritdoc}
   */
  protected function getEditableConfigNames() {
    return [
      'iguana.settings',
    ];
  }
 
  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state, Request $request = NULL) {
    $config = $this->config('iguana.settings');
    $state  = \Drupal::state();
    $form["#attributes"]["autocomplete"] = "off";
    $form['iguana'] = array(
      '#type'  => 'fieldset',
      '#title' => $this->t('Iguana settings'),
    );
    $form['iguana']['url'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Iguana API URL'),
      '#default_value' => $config->get('iguana.url'),
    );
    $form['iguana']['username'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Username'),
      '#default_value' => $config->get('iguana.username'),
    );
    $form['iguana']['password'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Password'),
      '#default_value' => '',
      '#description'   => t('Leave blank to make no changes, use an invalid string to disable if need be.')
    );
    $form['iguana']['public_key'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Public Key'),
      '#default_value' => $config->get('iguana.public_key'),
    );
    $form['iguana']['private_key'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Private Key'),
      '#default_value' => '',
      '#description'   => t('Leave blank to make no changes, use an invalid string to disable if need be.')
    );
    $form['iguana']['division'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Division'),
      '#default_value' => $config->get('iguana.division'),
    );
    $form['iguana']['territory'] = array(
      '#type'          => 'textfield',
      '#title'         => $this->t('Territory'),
      '#default_value' => $config->get('iguana.territory'),
    );
    $nums   = [
      5, 10, 25, 50, 75, 100, 150, 200, 250, 300, 400, 500, 600, 700, 800, 900,
    ];
    $limits = array_combine($nums, $nums);
    $form['cron_download_limit'] = [
      '#type'          => 'select',
      '#title'         => t('Cron API Download Throttle'),
      '#options'       => $limits,
      '#default_value' => $state->get('iguana.cron_download_limit', 100),
    ];
    $form['cron_process_limit'] = [
      '#type'          => 'select',
      '#title'         => t('Cron Queue Node Process Throttle'),
      '#options'       => $limits,
      '#default_value' => $state->get('iguana.cron_process_limit', 25),
    ];
    return parent::buildForm($form, $form_state);
  }
 
  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    $values = $form_state->getValues();
    $config = $this->config('iguana.settings');
    $state  = \Drupal::state();
    $config->set('iguana.url', $values['url']);
    $config->set('iguana.username', $values['username']);
    $config->set('iguana.public_key', $values['public_key']);
    $config->set('iguana.division', $values['division']);
    $config->set('iguana.territory', $values['territory']);
    $config->save();
    if (!empty($values['private_key'])) {
      $state->set('iguana.private_key', $values['private_key']);
    }
    if (!empty($values['password'])) {
      $state->set('iguana.password', $values['password']);
    }
    $state->set('iguana.cron_download_limit', $values['cron_download_limit']);
    $state->set('iguana.cron_process_limit', $values['cron_process_limit']);
  }
 
}

Specifically note how sensitive data, such as private keys and passwords, is saved via the State API instead of the Config API: exportable configuration typically ends up in YAML files and version control, while state values live only in the site's database.
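
As a minimal sketch of the difference (using the iguana.* key names from the form above):

<?php
// Config API values are exportable configuration: they can be dumped to YAML
// and typically end up in version control, so they should never hold secrets.
$url = \Drupal::config('iguana.settings')->get('iguana.url');
 
// State API values live only in this site's database, which makes them a
// safer home for credentials like the password and private key.
\Drupal::state()->set('iguana.password', 'a-secret-value');
$password = \Drupal::state()->get('iguana.password');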

Connection Class

Next, whether the overview page is displaying the API health or a batch operation is downloading data, I needed a class to simplify all of the basic connection operations.

drupal/modules/custom/iguana/src/IguanaConnection.php:

<?php
 
namespace Drupal\iguana;
 
use Drupal\Core\Url;
use GuzzleHttp\Client as GuzzleClient;
use GuzzleHttp\Psr7\Request as GuzzleRequest;
 
 
/**
 * Class IguanaConnection
 *
 * @package Drupal\iguana
 */
class IguanaConnection {
 
  /**
   * @var string Iguana API version to use
   */
  protected $version = 'v1';
 
  /**
   * @var string API querying method
   */
  protected $method  = 'GET';
 
  /**
   * @var \Drupal\Core\Config\Config Iguana settings
   */
  protected $config  = NULL;
 
  /**
   * @var array Store sensitive API info such as the private_key & password
   */
  protected $sensitiveConfig = [];
 
  /**
   * IguanaConnection constructor.
   */
  public function __construct() {
    $this->config = \Drupal::config('iguana.settings');
  }
 
  /**
   * Get configuration or state setting for this Iguana integration module.
   *
   * @param string $name this module's config or state.
   *
   * @return mixed
   */
  protected function getConfig($name) {
    $sensitive = [
      'private_key',
      'password',
    ];
    if (in_array($name, $sensitive)) {
      if (isset($this->sensitiveConfig[$name])) {
        return $this->sensitiveConfig[$name];
      }
      $this->sensitiveConfig[$name] = \Drupal::state()
        ->get('iguana.' . $name);
      return $this->sensitiveConfig[$name];
    }
    return $this->config->get('iguana.' . $name);
  }
 
  /**
   * Pings the Iguana API for data.
   *
   * @param string $endpoint division endpoint to query
   * @param array  $options for Url building
   *
   * @return object
   */
  public function queryEndpoint($endpoint, $options = []) {
    try {
      $response = $this->callEndpoint($endpoint, $options);
      return json_decode($response->getBody());
    } catch (\Exception $e) {
      watchdog_exception('iguana', $e);
      return (object) [
        'response_type' => '',
        'response_data' => [],
        'pagination'    => (object) [
          'total_count'    => 0,
          'current_limit'  => 0,
          'current_offset' => 0,
        ],
      ];
    }
  }
 
  /**
   * Call the Iguana API endpoint.
   *
   * @param string $endpoint
   * @param array  $options
   *
   * @return \Psr\Http\Message\ResponseInterface
   */
  public function callEndpoint($endpoint, $options = []) {
    $headers = $this->generateHeaders($this->requestUri($endpoint));
    $url     = isset($options['next_page']) ?
      $options['next_page'] : $this->requestUrl($endpoint, $options)
        ->toString();
    $client  = new GuzzleClient();
    $request = new GuzzleRequest($this->method, $url, $headers);
    return $client->send($request, ['timeout' => 30]);
  }
 
  /**
   * Build the URI part of the URL based on the endpoint and configuration.
   *
   * @param string $endpoint to the API data
   *
   * @return string
   */
  protected function requestUri($endpoint) {
    $division = $this->getConfig('division');
    return '/services/rest/' . $this->version . '/json/' . $division
           . '/' . $endpoint . '/';
  }
 
  /**
   * Build a Url object of the URL data to query the Iguana API.
   *
   * @param string $endpoint to the API data
   * @param array  $options to build the URL such as 'query_options'
   *
   * @return \Drupal\Core\Url
   */
  protected function requestUrl($endpoint, $options = []) {
    $url         = $this->getConfig('url');
    $public_key  = $this->getConfig('public_key');
    $territory   = $this->getConfig('territory');
    $request_uri = $this->requestUri($endpoint);
    $limit       = isset($options['limit']) ? $options['limit'] : 25;
    $offset      = 0;
    $start_time  = isset($options['start_time']) ? $options['start_time'] : NULL;
    $end_time    = isset($options['end_time']) ? $options['end_time'] : NULL;
    $url_query   = [
      'api_key'   => $public_key,
      'limit'     => $limit,
      'offset'    => $offset,
      'territory' => $territory,
    ];
 
    if (isset($start_time)) {
      $start_date             = new \DateTime('@' . $start_time);
      $url_query['startdate'] = $start_date->format('Y-m-d');
    }
 
    if (isset($end_time)) {
      $end_date             = new \DateTime('@' . $end_time);
      $url_query['enddate'] = $end_date->format('Y-m-d');
    }
 
    if (!empty($options['url_query']) && is_array($options['url_query'])) {
      $url_query = array_merge($url_query, $options['url_query']);
    }
 
    return Url::fromUri($url . $request_uri, [
      'query' => $url_query,
    ]);
  }
 
  /**
   * Build an array of headers to pass to the Iguana API such as the
   * signature and account.
   *
   * @param string $request_uri to the API endpoint
   *
   * @return array
   */
  protected function generateHeaders($request_uri) {
    $username       = $this->getConfig('username');
    $password       = $this->getConfig('password');
    $private_key    = $this->getConfig('private_key');
    $request_method = 'GET';
    // Date must be UTC or signature will be invalid
    $original_timezone = date_default_timezone_get();
    date_default_timezone_set('UTC');
    $message = $request_uri . $request_method . date('mdYHi');
    $headers = [
      'x-signature' => $this->generateXSignature($message, $private_key),
      'x-account'   => $this->generateXAccount($username, $password),
    ];
    date_default_timezone_set($original_timezone);
    return $headers;
  }
 
  /**
   * Builds a hash for the x-signature to send to the Iguana API according to
   * specifications.
   *
   * @param string $message
   * @param string $private_key
   *
   * @return string
   */
  protected function generateXSignature($message, $private_key) {
    return some_encoding_process($message, $private_key);
  }
 
  /**
   * Builds a hash for the x-account to send to the Iguana API according to
   * specifications.
   * @param string $username
   * @param string $password
   *
   * @return string
   */
  protected function generateXAccount($username, $password) {
    return some_other_encoding_process($username, $password);
  }
 
}

So, this reduces the process for getting API data down to two methods: callEndpoint() and queryEndpoint(). The latter simply calls the former and cleans up the data for processing, while the former returns the whole response object. Also note the getConfig() method, which simplifies fetching settings whether they are stored in YAML configuration or state.
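
As a quick usage sketch (the endpoint name and options mirror the ones used throughout this post):

<?php
use Drupal\iguana\IguanaConnection;
 
$connection = new IguanaConnection();
 
// queryEndpoint() returns decoded JSON, falling back to an empty stub
// object (and a watchdog entry) if the request fails.
$data  = $connection->queryEndpoint('teasDetailFull', [
  'limit'     => 5,
  'url_query' => ['sort' => 'gid asc'],
]);
$total = $data->pagination->total_count;
 
// callEndpoint() returns the raw PSR-7 response when the status code or
// headers matter; exceptions bubble up to the caller here.
$response = $connection->callEndpoint('teasDetailFull');
$code     = $response->getStatusCode();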

API Health Overview

Now that I had a connection class, I could test it out on the API's overview page.

drupal/modules/custom/iguana/src/Controller/IguanaOverviewController.php:

<?php
 
namespace Drupal\iguana\Controller;
 
use Drupal\Core\Controller\ControllerBase;
use Drupal\iguana\IguanaConnection;
 
/**
 * Provides controller methods for the Iguana API integration overview.
 */
class IguanaOverviewController extends ControllerBase {
 
  /**
   * {@inheritdoc}
   */
  public function showOverview() {
    $build = [];
 
    list($response, $json) = $this->pingEndpoint($build);
    // If response data was built and returned, display it with a sample of the
    // objects returned
    if (isset($response)) {
      $build['response'] = [
        '#theme' => 'item_list',
        '#title' => t('Response: @r', [
          '@r' => $response->getReasonPhrase(),
        ]),
        '#items' => [
          'code' => t('Code: @c', ['@c' => $response->getStatusCode()]),
        ],
      ];
    }
    if (isset($json)) {
      $build['response_data'] = [
        '#theme' => 'item_list',
        '#title' => t('Response Data:'),
        '#items' => [
          'response-type' => t('Response Type: @t', [
            '@t' => $json->response_type,
          ]),
          'total-count' => t('Total Count: @c', [
            '@c' => $json->pagination->total_count,
          ]),
        ],
      ];
      $this->displayPaginationData($json, $build);
      $this->displayDataSample($json, $build);
    }
    return $build;
  }
 
  /**
   * Ping the Iguana API for basic data.
   *
   * @param array $build render array
   *
   * @return array of [$response, $json]
   */
  protected function pingEndpoint(&$build) {
    $connection = new IguanaConnection();
    $response   = NULL;
    $json       = NULL;
    try {
      $response = $connection->callEndpoint('teasDetailFull', [
        'limit'     => 10,
        'url_query' => [
          'sort' => 'gid asc',
        ]
      ]);
      $json = json_decode($response->getBody());
    } catch (\GuzzleHttp\Exception\ServerException $e) {
      // Handle their server-side errors
      $build['server_error'] = [
        '#theme' => 'item_list',
        '#title' => t('Server Exception: @r', [
          '@r' => $e->getResponse()->getReasonPhrase(),
        ]),
        '#items' => [
          'url'  => t('URL: @u', ['@u' => $e->getRequest()->getUri()]),
          'code' => t('Code: @c', ['@c' => $e->getResponse()->getStatusCode()]),
        ],
      ];
      $build['exception'] = [
        '#markup' => $e->getMessage(),
      ];
      watchdog_exception('iguana', $e);
    } catch (\GuzzleHttp\Exception\ClientException $e) {
      // Handle client-side error (e.g., authorization failures)
      $build['client_error'] = [
        '#theme' => 'item_list',
        '#title' => t('Client Exception: @r', [
          '@r' => $e->getResponse()->getReasonPhrase(),
        ]),
        '#items' => [
          'url'  => t('URL: @u', ['@u' => $e->getRequest()->getUri()]),
          'code' => t('Code: @c', ['@c' => $e->getResponse()->getStatusCode()]),
        ],
      ];
      $build['exception'] = [
        '#markup' => $e->getMessage(),
      ];
      watchdog_exception('iguana', $e);
    } catch (\Exception $e) {
      // Handle general PHP exceptions
      $build['php_error'] = [
        '#theme' => 'item_list',
        '#title' => t('PHP Exception'),
        '#items' => [
          'code' => t('Code: @c', ['@c' => $e->getCode()]),
        ],
      ];
      $build['exception'] = [
        '#markup' => $e->getMessage(),
      ];
      watchdog_exception('iguana', $e);
    }
    return [$response, $json];
  }
 
  /**
   * Build out any available data for pagination.
   *
   * @param object $json
   * @param array  $build render array
   */
  protected function displayPaginationData($json, &$build) {
    if (isset($json->pagination->current_limit)) {
      $build['response_data']['#items']['current-limit'] = t('Current Limit: @l', [
        '@l' => $json->pagination->current_limit,
      ]);
    }
    if (isset($json->pagination->current_offset)) {
      $build['response_data']['#items']['current-offset'] = t('Current Offset: @o', [
        '@o' => $json->pagination->current_offset,
      ]);
    }
    if (isset($json->pagination->first)) {
      $build['response_data']['#items']['first'] = t('First URL: @f', [
        '@f' => $json->pagination->first,
      ]);
    }
    if (isset($json->pagination->prev)) {
      $build['response_data']['#items']['prev'] = t('Previous URL: @p', [
        '@p' => $json->pagination->prev,
      ]);
    }
    if (isset($json->pagination->next)) {
      $build['response_data']['#items']['next'] = t('Next URL: @n', [
        '@n' => $json->pagination->next,
      ]);
    }
    if (isset($json->pagination->last)) {
      $build['response_data']['#items']['last'] = t('Last URL: @l', [
        '@l' => $json->pagination->last,
      ]);
    }
  }
 
  /**
   * Build out a sample of the data returned.
   *
   * @param object $json
   * @param array  $build render array
   */
  protected function displayDataSample($json, &$build) {
    if (isset($json->response_data[0])) {
      $tea_data = $json->response_data[0];
      $build['tea_sample'] = [
        '#prefix' => '<pre>',
        '#markup' => print_r($tea_data, TRUE),
        '#suffix' => '</pre>',
      ];
    }
  }
 
}

The portions displaying the JSON data are unique to the Iguana API, but the pinging of the API and the error handling around it are worth noting.

Batch API Operations

So, with the API returning some data for the overview page, building out the data processing with Batch API operations was next. Much of what was done here was easily replicated for cron, since both process the data in small batch sizes. Batch operations work best when there is a total count to work towards, so, taking an example from the overview page, I started with the form body capturing that total count. This also allows for an error check: if the API returns zero as a total, the submit button can be disabled.

drupal/modules/custom/iguana/src/Form/IguanaTeaImportForm.php:

<?php
 
namespace Drupal\iguana\Form;
 
use Drupal\Core\Database\Database;
use Drupal\Core\Form\FormBase;
use Symfony\Component\HttpFoundation\Request;
use Drupal\Core\Form\FormStateInterface;
use Drupal\iguana\IguanaConnection;
use Drupal\iguana\IguanaTea;
 
/**
 * Defines a form that triggers batch operations to download and process Tea
 * data from the Iguana API.
 * Batch operations are included in this class as methods.
 */
class IguanaTeaImportForm extends FormBase {
 
  /**
   * {@inheritdoc}
   */
  public function getFormId() {
    return 'iguana_tea_import_form';
  }
 
  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state, Request $request = NULL) {
    $connection = new IguanaConnection();
    $data       = $connection->queryEndpoint('teasDetailFull', [
      'limit'     => 1,
      'url_query' => [
        'sort' => 'gid asc',
      ]
    ]);
 
    if (empty($data->pagination->total_count)) {
      $msg  = 'A total count of Teas was not returned, indicating that there';
      $msg .= ' is a problem with the connection. See ';
      $msg .= '<a href="https://www.metaltoad.com/admin/config/services/iguana">the Overview page</a>';
      $msg .= ' for more details.';
      drupal_set_message(t($msg), 'error');
    }
 
    $form['count_display'] = [
      '#type'  => 'item',
      '#title' => t('Teas Found'),
      'markup'  => [
        '#markup' => $data->pagination->total_count,
      ]
    ];
 
    $form['count'] = [
      '#type'  => 'value',
      '#value' => $data->pagination->total_count,
    ];
 
    $nums   = [
      5, 10, 25, 50, 75, 100, 150, 200, 250, 300, 400, 500, 600, 700, 800, 900,
    ];
    $limits = array_combine($nums, $nums);
    $desc   = 'This is the number of Teas the API should return each call ' .
      'as the operation pages through the data.';
    $form['download_limit'] = [
      '#type'          => 'select',
      '#title'         => t('API Download Throttle'),
      '#options'       => $limits,
      '#default_value' => 200,
      '#description'   => t($desc),
    ];
    $desc = 'This is the number of Teas to analyze and save to Drupal as ' .
      'the operation pages through the data.<br />This is labor intensive so ' .
      'usually a lower number than the above throttle';
    $form['process_limit'] = [
      '#type'          => 'select',
      '#title'         => t('Node Process Throttle'),
      '#options'       => $limits,
      '#default_value' => 50,
      '#description'   => t($desc),
    ];
 
    $form['actions']['#type'] = 'actions';
 
    $form['actions']['submit'] = [
      '#type'     => 'submit',
      '#value'    => t('Import All Teas'),
      '#disabled' => empty($data->pagination->total_count),
    ];
 
    return $form;
  }
 
  ...
}

Also note the throttling options, which give site administrators the ability to adjust how many items are processed in each batch iteration of a given operation. When all was said and done, I noticed that the site could handle downloading ten times as much data as it could handle saving to nodes.

The form's submit handler triggers two Batch API operations and prevents cron from interfering. Since the data processing will take some time, I needed to set a state flag marking the Tea import as locked and clear out any cron jobs that may currently be queued.

drupal/modules/custom/iguana/src/Form/IguanaTeaImportForm.php (continued):

<?php
...
 
class IguanaTeaImportForm extends FormBase {
  ...
 
  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    $connection = Database::getConnection();
    $queue      = \Drupal::queue('iguana_tea_import_worker');
    $class      = 'Drupal\iguana\Form\IguanaTeaImportForm';
    $batch      = [
      'title'      => t('Downloading & Processing Iguana Tea Data'),
      'operations' => [
        [ // Operation to download all of the teas
          [$class, 'downloadTeas'], // Static method notation
          [
            $form_state->getValue('count', 0),
            $form_state->getValue('download_limit', 0),
          ],
        ],
        [ // Operation to process & save the tea data
          [$class, 'processTeas'], // Static method notation
          [
            $form_state->getValue('process_limit', 0),
          ],
        ],
      ],
      'finished' => [$class, 'finishedBatch'], // Static method notation
    ];
    batch_set($batch);
    // Lock cron out of processing while these batch operations are being
    // processed
    \Drupal::state()->set('iguana.tea_import_semaphore', TRUE);
    // Delete existing queue
    while ($worker = $queue->claimItem()) {
      $queue->deleteItem($worker);
    }
    // Clear out the staging table for fresh, whole data
    $connection->truncate('iguana_tea_staging')->execute();
  }
 
  ...
}

Note that for code cleanliness and maintainability, the operation and finished functions are static methods within this form class.

The first batch operation queries the API for a limited number of Tea items at a time and saves them straight to the database table iguana_tea_staging while cycling through the API pages. Since any point in this batch can potentially be long running, I always build a $context['message'] to let the administrator know that something is being processed so that they don't die of boredom.

drupal/modules/custom/iguana/src/Form/IguanaTeaImportForm.php (continued):

<?php
...
 
class IguanaTeaImportForm extends FormBase {
  ...
 
  /**
   * Batch operation to download all of the Tea data from Iguana and store
   * it in the iguana_tea_staging database table.
   *
   * @param int   $api_count
   * @param int   $limit
   * @param array $context
   */
  public static function downloadTeas($api_count, $limit, &$context) {
    $database = Database::getConnection();
    if (!isset($context['sandbox']['progress'])) {
      $context['sandbox'] = [
        'progress' => 0,
        'limit'    => $limit,
        'max'      => $api_count,
      ];
      $context['results']['downloaded'] = 0;
    }
    $sandbox = &$context['sandbox'];
 
    $iguana = new IguanaConnection();
    $data   = $iguana->queryEndpoint('teasDetailFull', [
      'limit'     => $sandbox['limit'],
      'url_query' => [
        'offset' => (string) $sandbox['progress'],
        'sort'   => 'gid asc',
      ],
    ]);
 
    foreach ($data->response_data as $tea_data) {
      // Check for empty or non-numeric GIDs
      if (empty($tea_data->gid)) {
        $msg = t('Empty GID at progress @p for the data:', [
          '@p' => $sandbox['progress'],
        ]);
        $msg .= '<br /><pre>' . print_r($tea_data, TRUE) . '</pre>';
        \Drupal::logger('iguana')->warning($msg);
        $sandbox['progress']++;
        continue;
      } elseif (!is_numeric($tea_data->gid)) {
        $msg = t('Non-numeric GID at progress @p for the data:', [
          '@p' => $sandbox['progress'],
        ]);
        $msg .= '<br /><pre>' . print_r($tea_data, TRUE) . '</pre>';
        \Drupal::logger('iguana')->warning($msg);
        $sandbox['progress']++;
        continue;
      }
      // Store the data
      $database->merge('iguana_tea_staging')
        ->key(['gid' => (int) $tea_data->gid])
        ->insertFields([
          'gid'  => (int) $tea_data->gid,
          'data' => serialize($tea_data),
        ])
        ->updateFields(['data' => serialize($tea_data)])
        ->execute()
      ;
      $context['results']['downloaded']++;
      $sandbox['progress']++;
      // Build a message so this isn't entirely boring for admins
      $context['message'] = '<h2>' . t('Downloading API data...') . '</h2>';
      $context['message'] .= t('Queried @c of @t Tea entries.', [
        '@c' => $sandbox['progress'],
        '@t' => $sandbox['max'],
      ]);
    }
 
    if ($sandbox['max']) {
      $context['finished'] = $sandbox['progress'] / $sandbox['max'];
    }
    // If completely done downloading, set the last time it was done, so that
    // cron can keep the data up to date with smaller queries
    if ($context['finished'] >= 1) {
      $last_time = \Drupal::time()->getRequestTime();
      \Drupal::state()->set('iguana.tea_import_last', $last_time);
    }
  }
 
  ...
}

Note that after the whole operation is finished, the iguana.tea_import_last state is set to record when the data was last downloaded; this will be used for the cron portion.

With all the data downloaded, the next operation takes each entry and attempts to convert it into Drupal node data. That conversion is pretty unique to the client's needs, so the class's inner workings are omitted.
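
For context, here is a hypothetical skeleton of that class, showing only the small interface the batch and queue code relies on (the method names come from the calls below; everything else is illustrative):

<?php
 
namespace Drupal\iguana;
 
/**
 * Hypothetical skeleton of the client-specific data-to-node converter.
 */
class IguanaTea {
 
  /**
   * @var object Raw Tea data decoded from the API.
   */
  protected $data;
 
  /**
   * @var \Drupal\node\NodeInterface The node this Tea maps to.
   */
  protected $node;
 
  public function __construct($tea_data) {
    $this->data = $tea_data;
  }
 
  /**
   * Creates or updates the node that corresponds to this Tea.
   *
   * @return string|bool 'inserted', 'updated', or FALSE if nothing was saved.
   */
  public function processTea() {
    // Client-specific mapping of API fields to node fields happens here.
  }
 
  public function getTitle() {
    // e.g. return $this->node->getTitle();
  }
 
  public function getNode() {
    return $this->node;
  }
 
}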

drupal/modules/custom/iguana/src/Form/IguanaTeaImportForm.php (continued):

<?php
...
 
class IguanaTeaImportForm extends FormBase {
  ...
 
  /**
   * Batch operation to extract data from the iguana_tea_staging table and
   * save it to a new node or one found via GID.
   *
   * @param int   $limit
   * @param array $context
   */
  public static function processTeas($limit, &$context) {
    $connection = Database::getConnection();
    if (!isset($context['sandbox']['progress'])) {
      $context['sandbox'] = [
        'progress' => 0,
        'limit'    => $limit,
        'max'      => (int)$connection->select('iguana_tea_staging', 'its')
          ->countQuery()->execute()->fetchField(),
      ];
      $context['results']['teas'] = 0;
      $context['results']['nodes']  = 0;
      // Count new versus existing
      $context['results']['nodes_inserted'] = 0;
      $context['results']['nodes_updated']  = 0;
    }
    $sandbox = &$context['sandbox'];
 
    $query = $connection->select('iguana_tea_staging', 'its')
      ->fields('its')
      ->range(0, $sandbox['limit'])
    ;
    $results = $query->execute();
 
    foreach ($results as $row) {
      $gid        = (int) $row->gid;
      $tea_data   = unserialize($row->data);
      $tea        = new IguanaTea($tea_data);
      $node_saved = $tea->processTea(); // Custom data-to-node processing
 
      $connection->merge('iguana_tea_previous')
        ->key(['gid' => $gid])
        ->insertFields([
          'gid'  => $gid,
          'data' => $row->data,
        ])
        ->updateFields(['data' => $row->data])
        ->execute()
      ;
 
      $query = $connection->delete('iguana_tea_staging');
      $query->condition('gid', $gid);
      $query->execute();
 
      $sandbox['progress']++;
      $context['results']['teas']++;
      // Tally only the nodes saved
      if ($node_saved) {
        $context['results']['nodes']++;
        $context['results']['nodes_' . $node_saved]++;
      }
 
      // Build a message so this isn't entirely boring for admins
      $msg = '<h2>' . t('Processing API data to site content...') . '</h2>';
      $msg .= t('Processed @p of @t Teas, @n new & @u updated', [
        '@p' => $sandbox['progress'],
        '@t' => $sandbox['max'],
        '@n' => $context['results']['nodes_inserted'],
        '@u' => $context['results']['nodes_updated'],
      ]);
      $msg .= '<br />';
      $msg .= t('Last tea: %t %g %n', [
        '%t' => $tea->getTitle(),
        '%g' => '(GID:' . $gid . ')',
        '%n' => '(node:' . $tea->getNode()->id() . ')',
      ]);
      $context['message'] = $msg;
    }
 
    if ($sandbox['max']) {
      $context['finished'] = $sandbox['progress'] / $sandbox['max'];
    }
  }
 
  ...
}

Finally, the batch "finished" callback needs to unlock cron so that it may do its part in updating the data.

drupal/modules/custom/iguana/src/Form/IguanaTeaImportForm.php (continued):

<?php
...
 
class IguanaTeaImportForm extends FormBase {
  ...
 
  /**
   * Reports the results of the Tea import operations.
   *
   * @param bool  $success
   * @param array $results
   * @param array $operations
   */
  public static function finishedBatch($success, $results, $operations) {
    // Unlock to allow cron to update the data later
    \Drupal::state()->set('iguana.tea_import_semaphore', FALSE);
    // The 'success' parameter means no fatal PHP errors were detected. All
    // other error management should be handled using 'results'.
    $downloaded = t('Finished with an error.');
    $processed  = FALSE;
    $saved      = FALSE;
    $inserted   = FALSE;
    $updated    = FALSE;
    if ($success) {
      $downloaded = \Drupal::translation()->formatPlural(
        $results['downloaded'],
        'One tea downloaded.',
        '@count teas downloaded.'
      );
      $processed  = \Drupal::translation()->formatPlural(
        $results['teas'],
        'One tea processed.',
        '@count teas processed.'
      );
      $saved      = \Drupal::translation()->formatPlural(
        $results['nodes'],
        'One node saved.',
        '@count nodes saved.'
      );
      $inserted   = \Drupal::translation()->formatPlural(
        $results['nodes_inserted'],
        'One was created.',
        '@count were created.'
      );
      $updated    = \Drupal::translation()->formatPlural(
        $results['nodes_updated'],
        'One was updated.',
        '@count were updated.'
      );
    }
    drupal_set_message($downloaded);
    if ($processed) {
      drupal_set_message($processed);
    }
    if ($saved) {
      drupal_set_message($saved);
    }
    if ($inserted) {
      drupal_set_message($inserted);
    }
    if ($updated) {
      drupal_set_message($updated);
    }
  }
 
}

Once I had my API-to-node class working and all of the data was being imported, building out the cron portion was fairly easy.

Cron

From working on the data processing during the batch operations, I noticed that downloading the data directly to the database table went much faster than saving the node data. So for cron, I started with gathering all of the data during the hook_cron() call and then queued up jobs to process that data. Since the Iguana API returns only the Tea revisions made since a given start date, I assumed that only a manageable count would be downloaded at any given cron run. If the whole data set is updated, then it might require running the batch operation manually. The implementation of hook_cron() simply does a few minor checks, downloads any new content, and queues up enough jobs to process the data.

drupal/modules/custom/iguana/iguana.module:

<?php
 
/**
 * @file
 * Iguana API integration module file.
 */
 
use Drupal\Core\Database\Database;
use Drupal\iguana\IguanaConnection;
 
/**
 * Implements hook_cron().
 */
function iguana_cron() {
  $state     = \Drupal::state();
  $locked    = $state->get('iguana.tea_import_semaphore', FALSE);
  $last_time = $state->get('iguana.tea_import_last', FALSE);
 
  if (!$locked && $last_time) {
    $database   = Database::getConnection();
    $iguana     = new IguanaConnection();
    $queue      = \Drupal::queue('iguana_tea_import_worker');
    $api_limit  = $state->get('iguana.cron_download_limit', 100);
    $save_limit = $state->get('iguana.cron_process_limit', 10);
    $data       = NULL;
    $new_data   = [];
 
    // Pull all data into an array
    // TODO: limit checks in case all of the thousands of Teas have new
    // revisions
    do {
      // If there is a 'next' URL returned, use that one for simplicity
      $next_page = NULL;
      if (isset($data->pagination->next)) {
        $next_page = $data->pagination->next;
      }
      $data = $iguana->queryEndpoint('teasDetailFull', [
        'limit'      => $api_limit,
        'start_time' => $last_time,
        'next_page'  => isset($next_page) ? $next_page : NULL,
      ]);
      $new_data = array_merge($new_data, $data->response_data);
    } while (isset($data->pagination->next));
 
    $gids      = [];
    $new_count = count($new_data);
    foreach ($new_data as $index => $tea_data) {
      if (empty($tea_data->gid)) {
        \Drupal::logger('iguana')->warning(t('Empty GID at progress @p for the data:<br /><pre>@v</pre>', [
          '@v' => print_r($tea_data, TRUE),
          '@p' => $index,
        ]));
        continue;
      }
      elseif (!is_numeric($tea_data->gid)) {
        \Drupal::logger('iguana')->warning(t('Non-numeric GID at progress @p for the data:<br /><pre>@v</pre>', [
          '@v' => print_r($tea_data, TRUE),
          '@p' => $index,
        ]));
        continue;
      }
      // Save the data to the local database
      $database->merge('iguana_tea_staging')
        ->key(['gid' => (int) $tea_data->gid])
        ->insertFields([
          'gid'  => (int) $tea_data->gid,
          'data' => serialize($tea_data),
        ])
        ->updateFields(['data' => serialize($tea_data)])
        ->execute()
      ;
      $gids[] = (int) $tea_data->gid;
      // If enough Teas have been stored, or the last one was just stored,
      // then queue up a worker to process them and reset the GIDs array
      if (count($gids) == $save_limit || $index + 1 == $new_count) {
        $queue->createItem(['gids' => $gids]);
        $gids = [];
      }
    }
    // Store the timestamp in state
    $last_time = \Drupal::time()->getRequestTime();
    \Drupal::state()->set('iguana.tea_import_last', $last_time);
  }
  elseif ($locked) {
    \Drupal::logger('iguana')->warning(t('Iguana Cron did not run because it is locked.'));
  }
}

If you are unfamiliar with Drupal's cron queue system: Drupal runs all of the implementations of hook_cron() and then processes as many queued jobs as it thinks it can before the PHP timeout. When a cron job is queued up, it's registered as an entry in the queue table of the database with a machine name that's associated with a worker class and a bit of serialized data; when it's picked up for processing, that class is instantiated and handed that bit of data, unserialized. There is no guarantee of the order in which jobs are processed, so rather than running down a database query like in the batch operations, I chose to pass each worker the set of GIDs to process.
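
In code, that hand-off looks like this (a minimal sketch; the queue name matches the worker plugin registered below):

<?php
// hook_cron() side: store a small payload. Drupal serializes it into the
// queue table under the worker's machine name.
$queue = \Drupal::queue('iguana_tea_import_worker');
$queue->createItem(['gids' => [12345, 12346, 12347]]);
 
// Worker side: on a later cron pass, Drupal instantiates the matching
// IguanaTeaImportWorker plugin and calls processItem() with the
// unserialized payload, i.e. ['gids' => [12345, 12346, 12347]].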

drupal/modules/custom/iguana/src/Plugin/QueueWorker/IguanaTeaImportWorker.php:

<?php
 
namespace Drupal\iguana\Plugin\QueueWorker;
 
use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\Core\Database\Database;
use Drupal\iguana\IguanaTea;
 
/**
 * Updates Tea(s) from Iguana API data.
 *
 * @QueueWorker(
 *   id = "iguana_tea_import_worker",
 *   title = @Translation("Iguana Tea Import Worker"),
 *   cron = {"time" = 60}
 * )
 */
class IguanaTeaImportWorker extends QueueWorkerBase {
 
  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    $connection = Database::getConnection();
    $gids       = $data['gids'];
 
    if (empty($gids)) {
      \Drupal::logger('iguana')->warning(t('IguanaTeaImportWorker queued with no GIDs!'));
      return;
    }
 
    $query = $connection->select('iguana_tea_staging', 'its');
    $query->fields('its');
    $query->condition('its.gid', $gids, 'IN');
    $results = $query->execute();
 
    foreach ($results as $row) {
      $gid      = (int) $row->gid;
      $tea_data = unserialize($row->data);
 
      try {
        $tea = new IguanaTea($tea_data);
        $tea->processTea(); // Custom data-to-node processing
 
        $connection->merge('iguana_tea_previous')
          ->key(['gid' => $gid])
          ->insertFields([
            'gid'  => $gid,
            'data' => $row->data,
          ])
          ->updateFields(['data' => $row->data])
          ->execute();
 
        $query = $connection->delete('iguana_tea_staging');
        $query->condition('gid', $gid);
        $query->execute();
      } catch (\Exception $e) {
        watchdog_exception('iguana', $e);
      }
    }
  }
 
}

Finally, Ultimate Cron was used so that Drupal's cron is triggered every minute to process the queue, while the various implementations of hook_cron() can be set to run at different intervals. So, iguana_cron() can be run every quarter hour to queue up jobs that will take a minute or so to get through.

Apr 05 2018
Apr 05
February 23rd, 2018

DrupalCon Nashville has lifted the veil on sessions at this year’s event and we’re thrilled to be a part of it! Our Web Chefs will be giving talks, facilitating the Business Summit, and running BOFs, so keep an eye out for our green jackets. We’re always happy to have a conversation!


Michal Minecki
Director of Technology at Four Kitchens


Patrick Coffey
Senior JavaScript Engineer at Four Kitchens

Recently there have been strides in web-based VR which enable producers to publish VR experiences via the web. Four Kitchens has been keeping an eye on these technologies and we want to share our experiences building real WebVR applications.


Joel Travieso
Senior Drupal Engineer at Four Kitchens

Any amount of automation is worth it, as long as it is effective. From simple things like manipulating pull request labels and ticket statuses, or using your CI engine to build your changelogs, to strategic operations like removing obsolete Pantheon environments or ensuring you always pick the right database for your build, little chunks of automation can substantially improve your workflow.


Adam Erickson
Senior Drupal Engineer


Jeff Tomlinson
Architect

Drupal’s core search can only take you so far. In this session, we will talk about what it takes to ramp up the search functionality of your site by using Search API and Solr. We can achieve this with the addition of a few modules, configuration adjustments, and the set-up of a view. We will take you from getting a plan in place all the way through to monitoring your site’s search usage and looking for ways to make improvements.


Randy Oest
Senior Designer and Frontend Engineer

With the growing shift towards a decoupled future, a company’s presence is going to be represented by an ever-expanding collection of websites, apps, and talking speakers.

Maintaining design and tone consistency across those channels will be challenging, but if done right, it can allow you to enter markets more quickly while keeping the style and tone of your company aligned.

Business Summit


Elia Albarran
Director of Operations

Elia will be co-leading the Business Summit, gathering and confirming speakers, giving feedback on the programming and schedule and emceeing the event.


Trasi Judd
Director of Support and Continuous Improvement

Trasi is speaking at the Summit with one of our South American partners, Alejandro Oses from Rootstack, on how to have a good partnership with near-shore vendors.


Apr 05 2018
Apr 05

Last year one of the big topics for the Drupal Global Training Days (GTD) Working Group was sorting out what exactly we can do to help with organizing these events. To that end, we sent out a survey to learn more about the kinds of events GTD organizers run or have offered in the past, and how the community can help. We got 33 responses to the survey, and 9 of those fine folks also hopped on a phone call with us (myself (add1sun), Mauricio (dinarcon), or Marina (paych)) to talk about their survey answers in more depth. While it's been a little while since we conducted the survey and interviews, we figure this is really interesting and useful information to share with the community.

The first section of this post covers the questions we asked and the results on the survey. The second section dives into our takeaways from the interviews we conducted after the survey.

Survey Results

What is your motivation for organizing GTD?

Far and away the most common motivation for running GTD events is to grow the local Drupal community, with over 90% selecting this as at least one reason. The second biggest motivation (39%) was to promote a company or organization, which was then followed up equally at 24% with finding new hires or new clients.

Is your company sponsoring your time to prepare and present the training?

For this question, about 60% of respondents have their company cover their time. There was also a mixed bag of people who are their own business or who freelance, where counting company versus personal time is a blurrier line, as well as people who straddle both, doing some of the work on the clock and the rest on their own time. 21% of those surveyed stated that they are not supported by a company for GTD events.

In which country (and city) have you organized a GTD?

Our list from the survey covered 36 events in 18 different countries, plus an online course with attendees from all over the world.

  • Argentina
  • Australia
  • Bolivia
  • Brazil
  • Cameroon
  • Canada
  • China
  • Costa Rica
  • France
  • India (5)
  • Italy
  • Japan
  • Mexico (3)
  • Moldova
  • Romania
  • Russia
  • Slovakia
  • South Africa
  • United States (11)

In which languages have you organized GTD?

23 of the events (64%) were offered in English. There were 12 languages other than English on the list, with Spanish taking the number 2 slot with 6 events, which lines up with the number of events in Spanish-speaking countries.

Given the wide range of countries, the concentration of events offered in English is a little surprising.

What materials do you use to present the training?

This was split almost evenly between those that use materials they created themselves and those that use a combination of existing materials and their own.

What topics have you covered in the trainings you have presented?

113 responses (with multiple select) indicated almost everyone covers more than 1 topic, and the vast majority of those topics are introductions to Drupal and getting started. Of the topics presented:

  • 94% cover What is Drupal?
  • 85% do Site Building
  • 70% cover the Drupal Community
  • 51% do Theming
  • 42% do module development.

From the results to this question it is clear that most GTD events do not stick with just one broad topic.

What format do you usually follow?

The most popular format (76%) is to have the instructor do a live demonstration and have the students follow along. Next in line was only giving presentations, and the least popular was having the instructor do a live demo without the students working on the project. There were also a couple of people who use recorded videos and then offer Q&A and support to the students as they work through them.

How long does the training last?

  • 36% give full-day workshops
  • 24% give half-day workshops
  • 30% do a mix of the two formats.

How many people attend your event on average?

Event size was interesting. Over 50% of events had 11-20 attendees. Smaller groups of 1-10 came in second at around 27%, and only 21% of events had more than 20 attendees.

Choose the statement that fits you most with regards to venue cost

Just over a third of respondents have given events at different free venues, while 21% have access to a permanent free venue to use. 30% have used both free and paid venues. Only 1 person has a permanent paid venue they use for GTD.

What type of venues do you use?

Most events use either a company office or a university/educational facility, with conference spaces and co-working/community spaces making up much of the rest. There were also a range of locations from coffee shops to libraries included.

What is the attendee capacity of your venue?

Compared to the class sizes mentioned above, there is certainly space for bigger groups overall, with 60% of venues capable of accepting over 20 attendees.

If you organize GTD in a paid venue, how much does it cost on average? (Use local currency)

For those who do pay for venues, the costs are all over the place, which makes sense given the huge range of locations (both world location and venue type) for these events. The most expensive came in around $400 USD or ~325 EUR.

Which of the following does your venue provide?

Most venues (88%) provide a good internet connection, and a projector with screen. 21% of the venues provide computers to use. Others noted extras they get with their venues include things like parking, snacks, and coffee.

Interview Results

We also spoke to 9 people from 5 countries to dig into what they're doing and how the community and GTD Working Group can help. While everyone has different struggles and needs, there are a few common themes that come through.

Organizing and Marketing

There was a wide variety of needs around organizing and marketing GTD events. This included things like matching up people who like to teach with people who can organize and market the event (many times people don't really want to both!), and there was definitely a repeated request for marketing materials and guidelines for groups to help promote their events. There were also some interesting ideas like creating badges for trainers and attendees, as well as having better ways for GTD organizers and trainers to share information, either through online knowledge bases or in-person events, like GTD-focused activities at DrupalCons.

Curriculum

Not surprisingly, curriculum and course materials came up for a lot of people. As we saw from the survey results, a lot of people create their own materials, often out of need, not because they necessarily want to. There was a common thread of requests for workshop agendas, slides, and all kinds of training materials, centrally located, so that people could more easily build a workshop without investing a lot of curriculum time. A number of people also pointed out that not having materials in the local language is a problem, and that translating existing materials is time-consuming work.

Infrastructure

The last main theme that we saw was about technical and venue needs. These ranged from funding for space to hold GTDs, to a standard way to get students set up with a local environment, to a regular way to collect feedback on events and share that information.

While the GTD Working Group certainly can't tackle all of these things, this gives a good starting point for the biggest pain points that the community can address to accelerate GTDs and the adoption of Drupal. If there are particular topics or initiatives in here that you would like to help with, please reach out to the working group to get connected with others and see what resources are available to help.

Apr 05 2018
Apr 05

Drupal 8.5 was released on the 7th of March 2018 with a host of new features, bug fixes, and improvements. There are plenty of exciting updates for developers in this post. Or, if you're a business owner, skip ahead to the "What Does This Mean For Business Owners?" section to find out what this release means for you.

Any projects using Drupal 8.4.x can and should update to Drupal 8.5 to continue receiving bug and security fixes. We recommend using composer to manage the codebase of your Drupal 8 projects.

For anyone still on Drupal 8.3.x or earlier, I recommend reading the Drupal 8.4.0 release notes, as Drupal 8.4 included major version updates for Symfony, jQuery, and jQuery UI, meaning it is no longer compatible with older versions of Drush.

One of the great things we noticed from the update was the sheer number of commits in the release notes.

Seeing all the different issues and contributors in the release notes is a good reminder that many small contributions add up to big results.

Dries Buytaert, Founder of Drupal

So what are the highlights of the Drupal 8.5 release?

Stable Releases Of Content Moderation And The Settings Tray Modules

One of the changes to the way Drupal is maintained is the new and improved release cycle and the adoption of semantic versioning. Major releases used to happen only once every couple of years; Drupal now uses a much shorter release cycle of only 6 months for adding new features to core. New features are added as “experimental core modules” and can be tested, bug-fixed, and eventually become part of Drupal core.

One example of the shorter release cycle is the BigPipe module. The module provides an implementation of Facebook’s BigPipe page rendering strategy, shortening the perceived page load time of dynamic websites with non-cacheable content. This was an experimental module when Drupal 8.1 was released and became a stable part of Drupal core in 8.2.

In Drupal 8.5 the BigPipe module is now enabled by default as a part of Drupal’s standard installation profile. BigPipe is actually the first new feature of Drupal 8 to progress from experimental to stable to being a part of a standard installation profile.

There are two exciting modules now stable in the update, they are:

  • Settings Tray
  • Content Moderation

Settings Tray is a part of the “outside-in” initiative where more of the content management tasks can be done without leaving the front end of the website, managing items in context such as editing the order of the menu items in a menu block.

The Content Moderation module allows the site builder to define states in which content can be placed such as “draft”, “needs review” and to define user permissions necessary to move content between those states. This way you can have a large team of authors who can place documents into draft or needs review states, allowing only website editors with specific permissions to publish.

New Experimental Layout Builder

Sticking with experimental modules, Drupal 8.5 sees the introduction of a new experimental layout builder. This module provides the ability to edit the layouts of basic pages, articles and other entity types using the same “outside-in” user interface provided by the settings tray.

This allows site builders to edit the layout of fields on the actual page rather than having to use a separate form in the backend. Another feature is the ability to have a different layout on a per-page/item basis if you so wish, with the ability to revert back to the default if it doesn’t work for you. There’s still a long way to go, and it's currently only a basic implementation, but it should improve significantly over the coming months and will hopefully see a stable release in Drupal 8.6.

The experimental layout builder in action

PHP 7.2 Is Now Supported

This is the first version of Drupal to fully support the latest version of PHP. Support is not the only aspect of this release, though: site owners who try to install Drupal on a PHP version lower than 7.0 are now warned that those versions will no longer be supported by Drupal as of March 7, 2019.

Drupal 8.5 now also uses Symfony Components 3.4.5, since Symfony 3.2 no longer receives security coverage. I expect Drupal 8 to remain on 3.4 releases until late 2021 or the end of Drupal 8's support lifetime (whichever comes first). Finally, PHPUnit now raises test failures on deprecated code.

Media Module In Core Improved And Now Visible To All Site Builders

Drupal 8.4 added a Media API to core, based on all the hard work done on the contributed Media Entity module. The media module provides “media types” (file, audio, video, and image) and allows content creators to upload and play audio and video files, and to list and re-use media. The core media module can be expanded by installing key contributed modules which add the ability to use externally hosted media types such as YouTube and Vimeo videos.

The module has been present in the codebase but was hidden from the module management interface due to user experience issues. These issues have now been taken care of and anyone who has access to the module management page can now enable the module.


New “Out of the Box” Demo Site

One of the key initiatives is the “out of the box experience”. The aim is to showcase what Drupal can do by providing a simple to install demo website (called Umami presently) with example content, configuration, and theme.

According to Drupal, the main goal of the demo site is:

To add sample content presented in a well-designed theme, presented as a food magazine. Using recipes and feature articles this example site will make Drupal look much better right from the start and help evaluators explore core Drupal concepts like content types, fields, blocks, views, taxonomy, etc.

The good news is that Drupal 8.5 now comes with the demo website available as an installation profile. The profile is “hidden” at the moment from the installation GUI but can be installed using the command line / drush.

The demo website still needs a lot of work but the groundwork is firmly in place and may become selectable as an installation profile for demonstration and evaluation purposes in a future release of Drupal 8.5.x. I recommend users not to use the Umami demo as the basis of a commercial project yet since no backward compatibility or upgrade paths are provided.

Migrate Architecture, Migrate Drupal and Migrate UI Modules are now Stable

This item almost deserves its own blog post, as it’s such a major milestone for Drupal: over 570 contributors worked on closing over 1,300 issues over a 4-year period. As such, the Migrate system architecture is considered fully stable, and developers can write migration paths without worrying about the stability of the underlying system.

The Migrate Drupal and Migrate UI modules (which are used for Drupal 6 and 7 migrations to Drupal 8) are also considered stable for upgrading sites which are not multilingual, with multilingual support still being heavily worked on.

There is also support for incremental migrations, meaning that the new website can be worked on while content is still being added to the site being upgraded/migrated from.

More information can be found in the official migrations in Drupal 8.5.0 post.

Links to Drupal 8 User Guide

Now, on a standard installation, you are greeted with a welcome page and a link to the new and improved Drupal 8 User Guide. While only a small addition, we see this as a major win, as it will improve the evaluation experience for new users.

Future Drupal Development

There is currently a proposed change to the 6-month release cycle to reduce it to a 4-month cycle because, according to Drupal, "currently, given time for alpha, beta and rc releases, issues that narrowly miss the beta window have to wait eight months to get into a tagged release."

This will require 2 core branches to be supported at once and additional work for core committers. However, new features and bug fixes will be available sooner so it will be interesting to see what the outcome of the proposal is.

What Does This Mean For Business Owners?

You’ll need to ensure you’ve updated your site from Drupal 8.4.5 to 8.5.0 to continue receiving bug and security fixes, the next of which is scheduled for release on April 4th, 2018. If, however, you are on Drupal 8.3.x or below, we urge you to read the release notes for Drupal 8.4.0, as there were some major updates to consider. These include a jump from jQuery 2 to 3, which may have some backward-compatibility issues affecting slideshows, carousels, lightboxes, accordions, and other animated components.

Drupal 8.4 also dropped support for Internet Explorer 9 and 10, meaning any bugs that affect these browsers will no longer be fixed, and any workarounds for them have been removed in Drupal 8.5.

If your website is still on Drupal 7 then this is a good time to consider migrating to Drupal 8 as the hard work carried out on the migrate modules mentioned above will streamline the process of adopting the new platform.

If you have any questions about migrating your Drupal 7 website to Drupal 8 please let us know and we'll ensure one of our experts are on hand to help.

Get in touch

Important Dates

See https://www.drupal.org/core/release-cycle-overview for more:

  • 8.6.0 Feature Freeze: Week of July 18, 2018
  • 8.6.0 Release: September 5, 2018
  • 8.7.0 Feature Freeze January 2019
  • 8.7.0 Release: March 19, 2019
Apr 05 2018
Apr 05

Who are your visitors? Where do they come from? And what precisely do they do during their visits to your Drupal site? How long are their visits? What content on your site do they linger on, and what content do they “stubbornly” ignore? Needless to say, to get answers to all these questions you need to set up Google Analytics on your website.

Since:

“This data--aka analytics--is the lifeblood of the digital marketer.” (Jeffrey McGuire, Acquia, Inc. Evangelist)

The good news is that integrating it is nothing but a quick and simple 3-step process. And the great news is that:

Drupal's got you covered with its dedicated Google Analytics module, geared at simplifying the otherwise tedious and time-consuming process.

So, shall we dive into the installation guide?
 

1. But First: Why Web Analytics? And Why Precisely Google Analytics?

In a UX-dominated digital reality that takes personalization to a whole new level, user behavior data turns into... a superpower.

And by “user behavior data”, I do mean web analytics.

Therefore, injecting a web analytics service into your Drupal site is like... injecting true power into its “veins”.

But why precisely Google Analytics?

Why set up Google Analytics on your Drupal site instead of another web analytics tracking tool? Is its popularity a strong enough reason for you to jump on the trend?

To answer your question, I do think that its own key features make the best answers:
 

  • audience demographic reporting: discover where your site visitors come from, their native languages, the devices and operating systems they use for accessing your website...
  • goal tracking: monitor conversion rates, downloads, sales and pretty much all stats showing how close (or far) you are to reaching the goals that you've set for your website
  • acquisition reporting: identify your site's traffic sources; where do your visitors come from exactly?
  • on-site reporting: gain a deep insight into the way visitors engage with specific pieces of content on your website, so you know how to adjust the experience you deliver them on your site/app to their specific needs
  • event-tracking: tap into this feature for measuring all activities carried out on your Drupal site
     

And the list of features could go on and on, providing you with a high-level dashboard and enabling you to go as deep as you need to with your “data digging”.

For Google Analytics is only as powerful as you “allow” it to be. It empowers you to dig up both surface and “in-depth data”.

Moreover, being such a feature-rich tracking tool, Google Analytics is highly versatile, too. From email marketing to social media marketing to any type of marketing campaign that you plan to launch, it's built to fit in just perfectly.

To power all forms of marketing strategies.

Add to that the fact that Google Analytics for mobile apps and the Google Analytics 360 suite have been around for a while now - two more powerful GA tools to add to your web analytics “tracking arsenal”.
 

2. The Drupal Google Analytics Module and How It Will Make Your Life (So Much) Easier

Let me try a lucky guess: 

Your Drupal site has... X pages (have I guessed it?)

The “standard” way to add Google Analytics to your Drupal site would involve:

Copying the tracking ID that Google Analytics provides you with and pasting it on each and every page on your website.

A hair-pullingly monotonous and time-consuming process, don't you think?

And it looks even more cumbersome when you consider the alternative: setting up Google Analytics on your Drupal site using the dedicated module.

But how exactly does it streamline... everything?

You'll just need to paste that Google Analytics JavaScript tracking snippet right into this module's Configuration page and... that's it!

The module will take it from there! It will distribute the snippet to all the pages on your website by itself.

Less effort, less time wasted on a tedious and repetitive activity. And more time left for customizing all those statistics features to perfectly suit your goals and your site's needs.

How to Set Up Google Analytics on Your Drupal Site: The Google Analytics Drupal Module

Luckily enough, the Drupal Google Analytics module puts an admin-friendly UI at your disposal precisely for this:
 

  • use it to track down key data 
  • use it for tailoring your web analytics-tracking activity to your needs: by user role, by pages etc.
     

3. Set Up Google Analytics on Your Drupal Site In Just 3 Simple Steps 

As promised, here's a dead-simple, 3-step guide on how to add Google Analytics to your Drupal site, leveraging the power of the dedicated Drupal module.
 

Step 1

The very first thing you'll need to do is sign up for a Google Analytics account if you don't have one already. And then to add your Drupal site (obviously!).

And here are the quick steps to take:
 

  1. go to www.google.com/analytics
  2. hit “sign in” (you'll find it in the top right corner) and select “Google Analytics” from the unfolding drop-down menu
  3. click “Sign Up” and just follow the given steps for setting up your new account
  4. next, follow the instructions for setting up web tracking
     

Now you should be able to see your Drupal site displayed under your account, on your admin page in Google Analytics.

And it's now that you should be able to retrieve your site's “Tracking ID” as well. You'll find it in the “Property Settings” section.
 

Step 2

The next major step to take as you set up Google Analytics on your Drupal site is to actually go back to your site and... install THE module itself.

Since I've already praised its “superpowers” and how they “conspire” to make your life easier, I'm not going to point them out once again.

Instead, I'll go straight to the steps to take once you've enabled the module on your website:
 

  1. access its configuration page (you'll find the “Configuration” tab on top of the page, “flanked by” the “Modules” and the “Reports” tabs)
  2. there, right under the “General Setting” section, just enter your “Web Property ID”
  3. … which is precisely the Google Analytics tracking code that you've just retrieved at Step 1
     

And this is precisely the “magic trick” that's going to add the Google Analytics tracking system site-wide. A monotonous, multiple-step process turned into a one-step operation.

All this thanks to the Drupal Google Analytics module!
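
As a side note for developers: the module stores this ID in configuration, so you can also set it from code, for example in a deployment script or an update hook. A minimal sketch, assuming the module keeps the tracking ID in the google_analytics.settings config object under the account key:

<?php
// Set the Google Analytics Web Property ID from code.
// Assumption: the module's config object is 'google_analytics.settings'
// and the tracking ID lives under the 'account' key.
// 'UA-XXXXXXX-Y' is a placeholder for your real tracking ID.
\Drupal::configFactory()
  ->getEditable('google_analytics.settings')
  ->set('account', 'UA-XXXXXXX-Y')
  ->save();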
 

Step 3

Here you are now, ready to save your settings and to officially harness the power of Google Analytics on your website!

Normally you should be just fine with the default settings that the service provides you with, right out-of-the-box.

Yet, if you need to “refine” your searches and your entire tracking activity, feel free to do so and explore all the options stored in the “Tracking Scope” tabs.

Speaking of which, let me give you just a few examples of how deeply you can narrow down your “investigations” and customize the module:
 

  • roles: a setting which lets you define which user roles to track (and which roles the system should ignore)
  • domains: indicate whether it's a single domain or multiple domains that you need to monitor
  • privacy: it enables you to make visitors' IP addresses anonymous
  • pages: indicate precisely which pages on your website you need to track
  • messages: track and monitor the messages displayed to your site visitors
  • search and advertising: keep track of your internal site searches and AdSense advertisements; do keep in mind, though, that some additional settings might be needed!

And... more! You actually get even more power for configuring your JavaScript setting and adding custom variables.

The END! This is how you set up Google Analytics on your Drupal site in 3 dead-simple steps, a streamlined process powered by the dedicated Drupal module.

Apr 05 2018
Apr 05

BEE makes it easy to quickly implement all kinds of booking & reservation use cases. We've created a new video that walks you through setting up BEE to handle event reservations with a moderation workflow:

[embedded content]

You may try out BEE with simplytest.me or start with this composer.json file.

Apr 05 2018
Apr 05

A lot of Drop Guard users faced their first highly critical update, SA-CORE-2018-002, announced in advance by PSA-2018-001, last week. We interviewed a bunch of them and want to share Drop Guard’s performance with you: its achievements, its flaws and its “should have performed better” moments.
 

The Good

Until today, Drop Guard has performed 7370 updates for Drupal agencies and their clients all around the globe.

66% of those agencies each updated 25 websites on average, Drupal 7 and Drupal 8, because of the Drupal core update SA-CORE-2018-002 last week. The rest were freelancers, universities, non-profits and agencies which updated up to 10 sites or more than 30. So, all of them automated their update processes partly or even fully. Some users allowed Drop Guard to apply the highly critical update directly to the live (or production) environment without a QA loop through several test options. In this case, security beat functionality.

A lot of users also benefited from using Drop Guard in part: Drop Guard detected the update, initiated the update process and applied the updates to the stage branch (or a feature or update branch, depending on the user's preferences) - or it detected patches that needed to be checked in detail, so that customized modules wouldn't be lost without the development team's permission.

10% of the tasks which Drop Guard created to apply the updates failed their tests or showed an error status, so human interaction was needed.

Our developers informed us that Drop Guard needed no more than 1 hour and 50-55 minutes from the update release to performing the last task. Usually an update takes no more than 5 minutes, but hampered by the poor reachability of Drupal.org, Drop Guard also had to wait for the release information.
 

I enjoyed receiving customer feedback like:

PSA-2018-001 - went to bed and 12 hours later every project was up to date.

The Bad

We also faced some difficulties with Drop Guard. These were mainly configuration issues which weren't clear in the UI - so we need to optimize the error handling within the interface to make the process more transparent.

Other users faced incompatibilities between their (customized) code and Drop Guard’s checkup - they couldn’t apply the update easily without deciding whether to use Drop Guard’s suggested updated code or to keep their customized modules.

For those cases (and because of recent changes in the UI & setup options), we will add further details and explanation to our Docs & FAQ section.

All customers complimented our quick support where issues and questions got solved directly. You can always join our Drop Guard slack support channel and get in touch with us.

And the Ugly

Drop Guard’s performance was literally described as “a Swiss watch” - but the tool displayed some confusing information as well: for example, it claimed to update to 8.5.x but actually updated to 8.3.x. Teething problems we will knock out.

It was also annoying that the project overview and configuration pages loaded slowly when a lot of users accessed them at once.

One user pointed out that it would be reassuring and more efficient if Drop Guard updated a project’s modules by priority/urgency - oh yes! That’s something we will improve as well to guarantee an intelligent approach.

End credits

Drop Guard showed again that it can cope with Highly Critical Drupal Core updates. All in all we are very proud of the performance and especially thankful for the great feedback, the critical review and the smart feature suggestions. We can’t wait to accomplish the next optimizations based on your feedback to make Drop Guard even more valuable for our users.

 

If you want to add your feedback, we still look forward to hearing about your SA-CORE-2018-002 experience in this 3-minute survey.

Do you want to read more about the latest #drupalgeddon candidate? Read about the processes & preparations of other agencies in Best of - Update marathon 2018.

If you still got any questions, just contact us!
 

Apr 05 2018
Apr 05

by Elliot Christenson on April 5, 2018 - 1:53am

We're trying something new this year at Drupalcon 2018! Book some time with a myDropWizard "Support Wizard" for some FREE help with your Drupal site!

You're a first-time DrupalCon attendee? You're a veteran Drupaler? Either way, you've made it part of your DrupalCon mission to fix a lingering issue - or at least to be pointed in the right direction!

We're here to help!

We spend our days helping Drupalers just like you with their support needs, so we thought "Let's bring that myDropWizard support face-to-face with Drupalers: FOR FREE!"

So, drop by Booth #818 or (better yet!) schedule with us below!

Where we'll be and when

  • Our booth: We again sponsored DrupalCon this year and will have a booth in the exhibit hall! We're in booth #818!
  • David's Session: In defense of small Drupal - Wednesday (4/11) at 5pm
  • Will's Session: Next Level Twig: Extensions - Wednesday (4/11) at 2:15pm
  • Everywhere! Just like you we want to get around the convention to see everyone and everything. Stop us and say "hello!"

Schedule a one-on-one meeting

Again, we'll be happy to discuss your current challenges (or successes!) anywhere at Drupalcon, but if you want to be double extra sure that you'll be able to chat with us, schedule a one-on-one meeting with us!

Maybe you simply want to check on the current status of Roundearth: Drupal 8 + CiviCRM.

Using Doodle, you can see when we have free time and request a meeting:

The user interface is a little confusing, so just in case you're having problems, this is the process:

  1. Click on April 10th (or any date in the week of DrupalCon) in the mini-calendar
  2. You should now see the week of DrupalCon!
  3. Find a free time, click the calendar and "resize" the block to fit the time you want to meet
  4. Fill in the "Meeting request for" field with a short description of what you want to talk about (it could just be "Connect during DrupalCon" - that's fine)
  5. And click the "Create a meeting request" button

Or if you don't want to mess with Doodle, you can just send us an e-mail at [email protected].

Can't wait to see you there!

We're super friendly, non-imposing people who love Drupal and the Drupal community. :-) We all look forward to hearing how your organization is using Drupal and how we can help! Have a great week in Nashville!

Apr 05 2018
Apr 05

DrupalCon is one of the most exciting Drupal events all year, especially for those of us who enjoy working in the community. We really get to shine.

This year is no different. It’s nice to share knowledge through presentations, but there are many ways to give back to the Drupal Open Source project.

We are proud to help behind the scenes with the making of DrupalCon. For the third year in a row, our CEO Aimee Degnan has helped organize the Business Summit. Our CTO Kristen Pol is the local Track Chair for Core Conversations this year. Core Conversations is an especially important track, as it is normally focused on Drupal's future.

Participating in events at DrupalCon is a quintessential part of the whole Drupal project. Summits bring larger groups together for collaboration and thought leadership; Birds of a Feather (BoFs) provide a more intimate atmosphere for deeper conversation; and sprints move the Drupal project closer to perfection. Of course, the myriad of sessions also provide a great place to learn about a variety of subjects related to the whole Drupal ecosystem.

We’re also excited to be part of a couple social events this year! Not only are we sponsoring the Women in Drupal reception again this year, we'll also be providing fun games, snacks, and smiles for the Drupal Diversity & Inclusion Game Night.

Summits:

Business Summit

Monday, April 9 from 10:00am - 5:00pm - We're excited because this year we organized a really stellar group of business people tackling challenging current topics. The summit will focus on the growth and evolution of Drupal in the overall digital ecosystem and that growth’s impact to Drupal-focused businesses.

Government Summit

Monday, April 9 from 10:00am - 5:00pm - The Government Summit presents a rare opportunity for collaboration between Government staff, vendors, and Drupal community members to share cost-effective and innovative ways to serve citizens. We hope to provide insight, to listen, and to facilitate discussions that raise awareness. We plan to brainstorm solutions for infrastructure, application development, DevOps, security/compliance, and user-centered design.

Community Summit

Monday, April 9 from 10:00am - 5:00pm - Our community is the heart of Drupal. Sure, it’s great to be involved, but taking the next step and sharing with others how we collaborate and grow our own community is empowering! With this information, we can then better contribute not only to the Drupal Project, but with improving diversity, enhancing mentoring, and growing our local community through outreach.

Sprints:

Contribution sprints move the Drupal Project forward. We’re eager to help with core and contributed projects. We always enjoy mentoring folks who might be new to tech or Drupal, including those who don’t traditionally code for their jobs. Code is very important, but so are all the other parts surrounding it. We’ll be ready to help mentor anybody who wants to get onboard!

General Sprints

April 9-12 from 10:00am to 5:00pm in Room 104A-C in the Music City Center

Sprint Day

Friday, April 13th in Room 104A-C in the Music City Center

Includes General Sprints, Mentored Core Sprint, and First-Time Sprinter Workshop

BoFs:

Connecting Women in Drupal

Wednesday, April 11 at 10:45am in Room 203b - Continue the conversation! After enjoying some low-key networking and socialization at Tuesday night's Women in Drupal social event, swing by our BoF on Wednesday morning. We'll chat about what it means to be a minority in our industry, brainstorm and collaborate on opportunities for activism, and make some lasting connections across the community.

This BoF will be a safe space for anyone who identifies as a woman to chat and support each other. Others, we truly appreciate your support but respectfully ask that this space is kept for women only. Allies are encouraged to attend the Women in Drupal reception Tuesday night.

Drupal and Coffee

Thursday, April 12 at 10:45am in Room 102a - Many of us in the Drupal community enjoy coffee, let's get together and talk about the magic elixir that many of us use to fuel our days.

This is also an excellent opportunity to have a retrospective for the Drupal Coffee Exchange that Adam Bergstein organizes every quarter...Don't know about the exchange?? Whelp, now is a good time to start participating. Bring a pound of beans for trade and get to know your coffee community!

Start your Thursday morning at DrupalCon with some coffee and new friends.

Improving Drupal Core's Accessibility

Tuesday, April 10 at 5:00pm in Room 102a - Drupal 8 has seen significant improvements in its accessibility since the 8.0 release. The point release process has given developers room to make considerable advances in things like the Inline Form Errors module.

Following the Core Conversation on accessibility, we will discuss how we can help bring more people onboard with improving the accessibility of Drupal 8.

Social Events:

Women in Drupal Reception

Tuesday, April 10th at 6:00pm at the Tin Roof, 316 Broadway Avenue, Nashville - Each year we are happy to sponsor and attend the Women in Drupal event! As a women-owned company, we love to support other women’s growth. We’ve always been impressed by the number of allies to women in tech who also come out to the event and show their support! 

Drupal Diversity & Inclusion Game Night

Monday, April 9th from 7PM to 10PM at the Holiday Inn Express Nashville Downtown lobby - Join the Drupal Diversity and Inclusion working group as they host a drop-in style Game Night. One of our goals at Nashville is to do what we can to make it feel like a socially safer space for people. We would like to invite people to bring games and have fun playing them! We expanded the time range a bit because we have some folks who will skip the reception and others that will arrive later.

Catch y'all in Music City!

You can catch us at all the above events, but we’ll also be in the halls offering smiles, swag, and conversation. Swing by the Hook 42 booth on Tuesday, or you can catch some of our team at the DD&I and Mentoring booths as well.

In addition to the events above, we have the honor of presenting four sessions this year! For times and locations, read our article: Cowboy Boots, Hot Chicken, and DrupalCon - Hook 42 is heading to Music City!

Apr 04 2018
Apr 04

First (before the problem)

If you have a Drupal site and this is the first time you have heard about the critical vulnerability published on March 28, 2018, read the last two sections immediately.

During the last week there has been a lot of commotion in the Drupal community around the world about the security hole named DrupalGeddon2 [1] [2]. This vulnerability was rated "highly critical" and got many people scared - unnecessarily. This post tries to explain when the vulnerability becomes a problem, when it is actually not a problem, and how to handle the situation right.

The Drupal project has its own dedicated security team [3], which takes care of security issues: patching found issues properly and handling the public announcements about them. A week before the publication [4] there was an announcement that a vulnerability had been found and that a patch would be released on March 28th between 18:00-19:30 UTC.

In other words, that announcement told all site owners and people responsible for updates: "Be ready to patch your site when we make the announcement. IMMEDIATELY!"

TIP Solution and many other companies who do things right reserved time in their calendars for March 28-29, 2018 to patch their sites.

A day or two before the announcement

When the official announcement about the vulnerability was made, it was known that the patch would be for core and that sites could be patched fairly easily, so there would not be much downtime.

We told the site owners about the update and that their sites would be patched within 0-48 hours of the announcement.

All our clients have a maintenance contract, which makes it our responsibility to keep an eye on the announcements and keep the sites updated without any additional costs.

Day of publication

The cores were easy to patch (thanks to the composer workflow [5] and doing the development right), so all the sites were safely patched within a few hours. The process took a little longer than expected because we also brought the sites' modules up to date.

When the sites were patched, the site owners were informed again.

Present moment (five days after the announcement)

Before the publication there was some talk that attacks might start within a few hours. At the moment there is still no information about any sites having been compromised. We'll see...

Sometimes hackers share their knowledge of how to exploit a vulnerability (a proof of concept, or PoC), and sometimes they keep that knowledge to themselves so that they can crack sites without anyone noticing. We can therefore never be sure whether attacks are circulating or ongoing right now, so the sites should be patched anyway.

How do you know if your site is patched?

Go to https://yourwebsite.com/admin/reports/status and check that your site's core version is at least 7.58 or 8.5.1 - or confirm that someone has patched your site manually.
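
If you prefer to check from code, a quick sketch (run it, for example, through Drush's PHP evaluation):

<?php
// Drupal 8: the running core version is exposed as a class constant.
print \Drupal::VERSION . PHP_EOL; // should print 8.5.1 or later

// Drupal 7: the equivalent is the VERSION constant from bootstrap.inc.
// print VERSION;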

If you are not sure, contact your site's administrator immediately!

Summary

You don't need to worry about the security updates if you are ready to patch the site as soon as there is a release. This is why a security hole found (by the white hats) and patched is a good thing. Finding vulnerabilities and patching them is a natural part of every software project.

Thank you, security team, for handling the case right!

[1] https://www.drupal.org/sa-core-2018-002

[2] https://www.drupal.org/PSA-2014-003

[3] https://www.drupal.org/drupal-security-team

[4] https://www.drupal.org/psa-2018-001

[5] https://github.com/drupal-composer/drupal-project

Apr 04 2018
Apr 04

04 Apr

Nick Veenhof

Drupal

Yesterday a highly critical security issue in Drupal was disclosed and patched. The issue is considered critical because, as we understand it, it makes it possible to execute code as an anonymous user. This could lead to a complete hack of your site and complete exposure of your content - or, worse, if your webserver is badly configured, a full-scale hostile takeover of your server. (More background info available here and here.)

The issue was announced to the Drupal community a week early, so our Dropsolid team had plenty of time to anticipate and prepare. Currently, Dropsolid serves 482 unique and active projects, each with three environments on average. To be more precise, this gave us a whopping 1316 active Drupal installations to patch. These environments are located on 65 different servers. 45 of those servers are out of our hands and are managed by other hosting companies, such as Combell, or are dedicated hardware on site with the customer. At Dropsolid we prefer to host websites within our own control, but to the Dropsolid Platform this ultimately makes no difference. For some customers we also collaborate with Acquia - these clients are taken care of by Acquia’s golden glove service.

So, back to preparing to patch all the different Drupal installations. We would be lying if we said that all Drupal installs were running on the latest and greatest, so we used Ansible and the Dropsolid Platform to gather all the necessary data and perform a so-called dry run. This was a real-world test across all our installations to verify that we could pass on a patch and then deploy it as soon as we had confirmed that the patch worked for all the versions available on our Dropsolid Platform. For example, it verified that the patch tool was available on each server, and it injected a text file that we then patched to make sure the flow of patching a Drupal installation would go smoothly. Obviously we detected some hiccups as we were testing, but we were left with enough time to resolve all issues in advance.

Throughout the evening, we had plenty of engineers on stand-by, ready to jump in should something in the automated process go wrong. The entire rollout took us about 2 hours - from the release of the patch, through verifying it on all the different Drupal releases, to rolling it out on all sites and, finally, relaxing with a few beers. This doesn't mean we had it easy: we had to put in a lot of hours beforehand just to make sure we could handle this load in that amount of time. That is why we are continuously building on our Dropsolid Platform.

Those who joined our hangout could bear witness to exactly how comfortable and relaxed our engineers were feeling during the rollout.

You might ask: joined our hangout? What are we on about, exactly? Well, since the Drupal community was in this together, I suggested on Twitter that we all join in together and at least make it a fun time.

A few nice things that happened during this hangout:

  • Someone played live ukulele for us while we waited
  • Someone posted a fake patch and made everyone anxious, but at least it was a good test!
  • People were able to watch, in total transparency, how Dropsolid coped with this patch, and they were also able to interact and talk to others in the hangout.

It made the whole evening a fun activity, as witnessed by Baddy Sonja.

Obviously this couldn’t have happened without the help of our great engineers at Dropsolid - and also because we invest a lot of our R&D time into the development of the Dropsolid Platform, so we can do the same exercise times 10 or times 100 without any extra human effort. Thanks to the Drupal security team for the good care and the warning ahead of time. It made a tremendous difference!

All our Dropsolid customers can rest assured that we have their backs, all the time!

If you are not a Dropsolid customer yet and you are interested to see how we can help you make your digital business easy, we’d be more than happy to talk. If you are running a Drupal site and need help with your updates or your processes, we’d be glad to help out and onboard you onto our Dropsolid Platform. You can keep your server contract while benefiting from our digital governance and expertise. Are you in charge of many, many digital assets and feeling the pain? Maybe it’s time you could start doing the fun things again - just have a chat with us!

Get in touch

Apr 04 2018
Apr 04

Structuring Your Drupal Website

Drupal has always been a strong content management platform. The number one reason we use Drupal is that it adapts so easily to our clients’ content models. It enables us to easily map and structure many different types of complex content.

Let’s look at how we go about structuring that content in Drupal, and how we use terminology to define, group and link different types of content.

Content Entities

In Drupal 8, every piece of content is an entity. To structure a site, you want to define different types of entities that will store different types of content.

Let’s take a publishing website as an example. We’re going to create entities for: books, authors, editions, interviews, reviews, book collections, book categories, and so on. You can start by drawing a map of all these nouns. I like mapping out content on a whiteboard because it’s easy to erase and change your mind and it’s bigger than a piece of paper.

Content mapping on a white board

Relationships

Once you’ve mapped all the different types of content that will exist on your site, identify the connections between them. Simply draw arrows between the content types that are related to one another.

For example:

  • A book has an author (or multiple authors): draw an arrow from book to author

  • A book can have editions: draw an arrow from book to edition

  • A book can have reviews, interviews: connect these

  • A book collection has books: group books by collection

  • A book has categories: associate books with topics and categories

Entity Terminology: Bundles, Nodes, Taxonomy, Paragraphs, Blocks

Nodes, taxonomy terms, custom blocks, and paragraphs are all different types of entities. Each entity type has unique properties that make it better suited for different use cases and content types.

Here’s a breakdown of the most important Drupal terminology you need to know to structure your content:

  • Node: A page of content is a node, accessible via its own URL
  • Taxonomy terms: Used to categorize other content, taxonomy terms live in a hierarchy. They can be used to filter content and create unique landing pages.
  • Paragraphs: Content that lives within other content and doesn’t need a dedicated URL is a paragraph.
  • Custom Block: Any content that will be reused throughout the site becomes a custom block. You can also embed a block in a node.
  • Bundle: An entity sub-type is a bundle. Usually, bundles can have unique fields and settings.
  • Field: A field is a component of the content, e.g. an ISBN, author’s name, or book title

Applying this Model to our Example Project

Here’s how we would decide which entity type to use for each content type:

  • Books and authors become nodes

  • Book categories become taxonomy terms

  • Interviews, reviews and editions could be paragraphs

  • Books and authors would be node bundles (aka content types)

  • A book category is a taxonomy bundle (aka vocabulary)

  • A book collection is a block bundle (block type)

  • Reviews and interviews are paragraph bundles (aka paragraph types)

  • A book collection that needs to be displayed on several pages becomes a block

Focusing on Each Entity to Create Fields

Once you’re looking at a book, you can start to think about what defines a book.

Ask yourself:

  • What information should it have?

  • Which information needs to be displayed?

  • How will we filter and order this content?

  • Will there be a single value for the field or multiple values?

List the various components of the content: title, author, ISBN, covers, genres, editions, reviews, interviews. Each of these will be a field.

Fields in Drupal can be single-value (for example, each book has a single ISBN) or multi-value (a book can have multiple reviews or authors). There are many other field types (text, date, number, email, link, etc.) that store the content in a particular way, which affects how it can be displayed or used later. A field that links one entity to another is a ‘reference’ field.
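
To make the distinction concrete, here is a minimal sketch of creating both kinds of field in code. The field names and the ‘book’ and ‘author’ bundles are assumptions for this example; in practice you would usually create fields through the admin UI or configuration export.

<?php
// A minimal sketch, assuming 'book' and 'author' content types exist.
use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;

// Single-value field: each book has exactly one ISBN.
FieldStorageConfig::create([
  'field_name' => 'field_isbn',
  'entity_type' => 'node',
  'type' => 'string',
  'cardinality' => 1,
])->save();
FieldConfig::create([
  'field_name' => 'field_isbn',
  'entity_type' => 'node',
  'bundle' => 'book',
  'label' => 'ISBN',
])->save();

// Multi-value reference field: a book can have several authors.
FieldStorageConfig::create([
  'field_name' => 'field_authors',
  'entity_type' => 'node',
  'type' => 'entity_reference',
  'cardinality' => FieldStorageConfig::CARDINALITY_UNLIMITED,
  'settings' => ['target_type' => 'node'],
])->save();
FieldConfig::create([
  'field_name' => 'field_authors',
  'entity_type' => 'node',
  'bundle' => 'book',
  'label' => 'Authors',
  'settings' => [
    'handler' => 'default',
    'handler_settings' => ['target_bundles' => ['author' => 'author']],
  ],
])->save();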

Information Architecture

So far we’ve talked about structuring your content using entities and bundles. But how do users actually access your content? When you’re building out your site map, you’ll probably picture top-level pages. These may link to dynamic lists of content, or they may have sub-pages that are added beneath them.

Linking to Content

In Drupal, we have three main ways to link to content: menus, views, and fields. In general, this is how we use them:

Menus are for static content: Menus are a static hierarchy of content. If you’re creating permanent content on the site that will be relevant for a long time, you’ll probably link to it through a menu.

Views are for dynamic content: Content that is ‘dynamic’ that will be added to frequently and is too abundant to add to a menu will probably be listed and linked to via views (the Drupal term for ‘list of content’).

Entity reference fields or link fields: You can also explicitly add a link from one content item to another using an entity reference field or a link field. For example, if you have a book and you want to have it link to three other hand-selected ‘related books’, you could create a ‘Content’ reference field for this.

You can go through your site map and figure out which pages are static (linked to by the menu) and dynamic content (linked via views). Landing pages tend to be connection pages. For example, a landing page might live in the menu, list a bunch of dynamic pages and also include explicit links to other pages via ‘calls to action’.  

Applying Menus and Views to Our Example

Using our example, you may have a static page for ‘About Us’, ‘Contact Information’, or ‘History of Publishing’. These would be created as pages and linked to via the menu.

You may also have a page that lists all the books and another that lists all the authors. Because your lists of books and authors are likely to change often, these lists should be created using views. When you add a new book or a new author, it automatically appears in the list.

Taxonomies make creating lists more interesting because we can create lists of content that are filtered by a particular taxonomy term. For example, if ‘prize winning’ is a book category, a taxonomy allows us to create a list of all the books that are ‘prize-winning’.
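
For illustration, here is a minimal sketch of the same kind of filtered list built with an entity query rather than Views; the field name is an assumption for this example.

<?php
// A minimal sketch, not how Views works internally: load published books
// tagged with the 'prize winning' category. 'field_book_category' is an
// assumed name for the taxonomy reference field.
use Drupal\node\Entity\Node;

$nids = \Drupal::entityQuery('node')
  ->condition('type', 'book')
  ->condition('status', 1)
  // Follow the reference field to the term entity and match its name.
  ->condition('field_book_category.entity.name', 'prize winning')
  ->sort('created', 'DESC')
  ->execute();
$books = Node::loadMultiple($nids);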

Finally, you might have a landing page for an upcoming book tour that includes details about the tour, a link to the book being promoted, and also links to other books by the author.

Conclusion

There are many more things to know to build a site with Drupal. But when you’re planning out your content, you simply need to be able to draw out the structure and communicate this with your team. Knowing the basic Drupal concepts will help you communicate clearly and think about the site’s architecture at a high level.

To read about a real-life project in which we built out book content in Drupal 8, read about our project for Princeton University Press.

Apr 04 2018
Apr 04

Drupal Commerce 2 lets you define multiple checkout flows out of the box, so that the buying process can be customized according to the order, the product purchased or the customer profile. This is an extremely interesting feature, in that it lets you simplify these famous checkout flows as much as necessary. Do you sell both physical products (with associated delivery) and digital products (without delivery)? In a few clicks you can have two separate checkout flows that take these specifics into account.

On the other hand, during my first contact with Drupal Commerce 2, I was somewhat taken aback by the fact that an order type could be associated with one and only one checkout flow, and that a product type could be associated with one and only one order type. As a result, a product type could only ever be associated with a single checkout flow. Good heavens! How, then, to handle products with both physical and digital variations (like books, for example, with a paper version and a digital version)?

Selecting a checkout flow on the order type

Which checkout flow should be associated with an order type that can correspond to physical or digital products? Should we multiply the order types accordingly? What would the impact be on the catalog architecture?

As you have understood, this has raised many questions.

A default checkout flow

Fortunately, I had misinterpreted this setting on the order type. After analysis, it became clear to me that it does not tie an order type to a single checkout flow; rather, it defines the checkout flow that will be used by default for that order type.

Indeed, as with prices, taxes, order items, the store, etc., Drupal Commerce 2 uses the Resolver concept to determine which checkout flow to use. Because of this, Drupal Commerce 2 makes it possible to address multiple needs very easily, while offering standard behavior as soon as it is installed.

Thus, the checkout flow to be used for an order is determined the first time a cart (or a draft order) enters the checkout.

/**
 * {@inheritdoc}
 */
public function getCheckoutFlow(OrderInterface $order) {
  if ($order->get('checkout_flow')->isEmpty()) {
    $checkout_flow = $this->chainCheckoutFlowResolver->resolve($order);
    $order->set('checkout_flow', $checkout_flow);
    $order->save();
  }

  return $order->get('checkout_flow')->entity;
}

As can be seen, on entering the checkout, if the order does not yet have an associated checkout flow, the chainCheckoutFlowResolver->resolve() resolver is called to determine which checkout flow to use, and the result is saved on the order.

Changing a checkout flow for a given order then becomes child's play.

A dynamic checkout flow

To determine in a very granular way which checkout flow to use for each order, two operations suffice.

  • When a product is added to or removed from the cart, reset the checkout flow associated with the order (as we have seen, it is only resolved and saved if the order does not already have one)
  • Create a service tagged commerce_checkout.checkout_flow_resolver, with a higher priority than the service responsible for determining the default checkout flow (whose priority is -100)

Let's move on to practice.

In our module, named my_module, let's create an EventSubscriber service that subscribes to the cart's product add and remove events in order to reset the checkout flow on the order.

In the directory of our module, we create the file my_module.services.yml and declare our service my_module.cart_update_subscriber.

services:

  my_module.cart_update_subscriber:
    class: Drupal\my_module\EventSubscriber\CartEventSubscriber
    tags:
      - { name: event_subscriber }

  my_module.checkout_flow_resolver:
    class: Drupal\my_module\Resolver\CheckoutFlowResolver
    tags:
      - { name: commerce_checkout.checkout_flow_resolver, priority: 100 }

We also take this opportunity to create our my_module.checkout_flow_resolver service which will dynamically determine which checkout flow to use.

The first step is to ensure that the checkout flow associated with an order is reset when a product is added to or removed from the order, using our CartEventSubscriber class.

<?php

namespace Drupal\my_module\EventSubscriber;

use Drupal\commerce_cart\Event\CartEntityAddEvent;
use Drupal\commerce_cart\Event\CartEvents;
use Drupal\commerce_cart\Event\CartOrderItemRemoveEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class CartEventSubscriber implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events = [
      CartEvents::CART_ENTITY_ADD => ['onCartEntityAdd', -50],
      CartEvents::CART_ORDER_ITEM_REMOVE => ['onCartOrderItemRemove', -50],
    ];
    return $events;
  }

  /**
   * Resets the checkout flow status when an item is added to the cart.
   *
   * @param \Drupal\commerce_cart\Event\CartEntityAddEvent $event
   *   The cart event.
   */
  public function onCartEntityAdd(CartEntityAddEvent $event) {
    $cart = $event->getCart();
    if ($cart->hasField('checkout_flow')) {
      $cart->set('checkout_flow', NULL);
    }
  }

  /**
   * Resets the checkout flow status when an item is removed from the cart.
   *
   * @param \Drupal\commerce_cart\Event\CartOrderItemRemoveEvent $event
   *   The cart event.
   */
  public function onCartOrderItemRemove(CartOrderItemRemoveEvent $event) {
    $cart = $event->getCart();
    if ($cart->hasField('checkout_flow')) {
      $cart->set('checkout_flow', NULL);
    }
  
    // If the removed item was the only shippable item, the order no longer
    // needs a shipment. The customer could already have gone through the
    // checkout and had a shipment created; if so, a now-free order could
    // still carry a shipping amount. So delete any existing shipments.
    /** @var \Drupal\commerce_shipping\Entity\ShipmentInterface[] $shipments */
    $shipments = $cart->get('shipments')->referencedEntities();
    foreach ($shipments as $shipment) {
      $shipment->delete();
    }
    $cart->set('shipments', NULL);
  }

}

Note that in addition to resetting the checkout flow on these two events, we also delete any shipments that may already be associated with the order. In this example, if we want different checkout flows depending on whether or not the order requires a shipment (and the associated fees), we must ensure that any existing shipments are recalculated if necessary in the checkout flow. The typical case is an order containing a physical product to be delivered: the customer enters the checkout flow once without completing it, then changes their mind, returns to the cart and removes the physical product, leaving an order that no longer needs a shipment.

Once this step has been completed, all that remains is to create our Resolver, which can dynamically determine which checkout flow to use.

<?php

namespace Drupal\my_module\Resolver;

use Drupal\my_module\MyModuleInterface;
use Drupal\commerce_checkout\Entity\CheckoutFlow;
use Drupal\commerce_checkout\Resolver\CheckoutFlowResolverInterface;
use Drupal\commerce_order\Entity\OrderInterface;

class CheckoutFlowResolver implements CheckoutFlowResolverInterface {

  /**
   * {@inheritdoc}
   */
  public function resolve(OrderInterface $order) {

    // Free product flag.
    $free = TRUE;
    // Immaterial product.
    $numerical = TRUE;

    if (!$order->getTotalPrice()->isZero()) {
      $free = FALSE;
    }

    // Have we a shippable product in the order ?
    foreach ($order->getItems() as $item) {
      $purchased_entity = $item->getPurchasedEntity();
      if ($purchased_entity->hasField(MyModuleInterface::WEIGHT_FIELD_NAME)) {
        if (!$purchased_entity->get(MyModuleInterface::WEIGHT_FIELD_NAME)->isEmpty()) {
          /** @var \Drupal\physical\Measurement $weight */
          $weight = $purchased_entity->get(MyModuleInterface::WEIGHT_FIELD_NAME)->first()->toMeasurement();
          if (!$weight->isZero()) {
            $numerical = FALSE;
            break;
          }
        }
      }
    }

    // Only free products without weight: go to the direct checkout flow.
    if ($free && $numerical) {
      return CheckoutFlow::load('direct');
    }
    // Digital (weightless) but non-free products: go to the download
    // checkout flow.
    elseif (!$free && $numerical) {
      return CheckoutFlow::load('download');
    }

    // Otherwise return nothing, so that the next resolver in the chain
    // (the default one, registered with priority -100) picks the flow.
    return NULL;
  }

}

Here, in this example, we determine whether the order contains only free products and/or products without a (non-zero) weight, i.e. products that require no shipment. You can of course customize the determination of the checkout flow as much as you need, depending on your project. The possibilities are endless, at your fingertips.

Thus, in a few lines, we have just defined three checkout flows that can be applied in a granular way to each order according to its contents. Note that we only pick a checkout flow when the order contains exclusively digital and/or free products. Otherwise, we hand over to the default resolver, which determines the checkout flow from the setting on the order type configuration.

This example shows how easy it can be, with the help of a Drupal 8 developer, to set up specific processes with Drupal Commerce 2.

A modular design of Drupal Commerce 2

Finally, what this introduction shows for checkout flows could be applied to many other configuration elements of a Drupal Commerce 2 website. The configuration of a standard catalog - products, orders, order items, the determination of a product's price or of the store (in the case of a marketplace with multiple stores) - is not set in stone: it defines the default behavior for an online store. This behavior can then be altered according to business needs in a simple and robust way.

An undeniable asset when it comes to setting up an e-commerce site whose logic and needs range from the most classic to the most unusual.

Apr 04 2018
Apr 04

This article is the second in our series on Continuous Integration tools for Drupal 8, which started with CircleCI. This time, we explore Travis CI.

Travis CI is the best-known CI tool for open source projects. Its setup process is straightforward and it offers a lot of flexibility and resources to implement Continuous Integration for any kind of project. In this article we will implement the same set of jobs that we did with CircleCI and then compare the two tools.

Resources

This article makes references to the following resources:

Browse the demo project to discover where the CI components are placed, then use the one-line installer to add these components automatically to your project.

The goal

We want to run the following jobs in a Drupal 8 project when someone creates a pull request:

To accomplish the above, we will use the following tools in Travis CI:

  • Drush, Drupal’s command line interface, to perform Drupal-related tasks like installing Drupal or updating the database.
  • Docker Compose, via docker4drupal, to build the environment where Behat tests run.
  • Robo, a PHP task runner, to define a set of tasks for each of the above jobs.

Here is a screenshot of the Travis CI dashboard with the above setup in place:

Travis CI dashboard

Now, let’s see how this has been set up. If you want to dive straight into the code, have a look at the demo Drupal 8 repository.

Setting up Travis CI

Travis CI requires the presence of a .travis.yml file at the root of the repository that dictates how it will build and test the project. I have used this installer that adds the following files:

Additionally, a few dependencies are added via Composer, which are required for the CI jobs.

After adding the above files to the repository, it’s time to give Travis CI access to it. Open https://travis-ci.org and authenticate there with your GitHub account. Next, add the repository at the Travis CI dashboard as shown below:

Travis CI add project

That’s it! After this, future changes to the repository should trigger builds at Travis CI. If you create a pull request, you will see a status message like the following one:

Travis CI pull request

Seeing the jobs at work

Here is an excerpt of the .travis.yml file. We are leveraging Travis’ build matrix for spinning up three jobs that run in parallel:

env:
  matrix:
    - JOB=job:check-coding-standards
    - JOB=job:run-unit-tests
    - JOB=job:run-behat-tests

install:
  - composer --verbose install

script:
  - vendor/bin/robo $JOB

The script section is called three times: once for each value assigned to the $JOB variable. It calls a different Robo task each time. We decided to write the implementation of each job as Robo tasks because:

  • It makes the .travis.yml file easier to read and maintain.
  • It makes the job implementations portable between CI tools.
  • It gives developers an opportunity to run the jobs locally.

If you are curious what a Robo task looks like, here is the implementation of the one that runs Behat tests:

/**
 * Command to run behat tests.
 *
 * @return \Robo\Result
 *   The result of the collection of tasks.
 */
public function jobRunBehatTests()
{
    $collection = $this->collectionBuilder();
    $collection->addTaskList($this->downloadDatabase());
    $collection->addTaskList($this->buildEnvironment());
    $collection->addTask($this->waitForDrupal());
    $collection->addTaskList($this->runUpdatePath());
    $collection->addTaskList($this->runBehatTests());
    return $collection->run();
}

Building the environment with Docker Compose

The build environment task shown above, $this->buildEnvironment(), uses Docker Compose to build a Docker environment where the Drupal site will be configured, the database will be updated, and finally, Behat tests will run.

In contrast with CircleCI, where we define the mix of Docker images that the test environment will use to run the jobs, Travis CI offers two environments (Precise and Trusty) with common pre-installed services. Trusty has everything that we need for checking coding standards or running PHPUnit tests, but Behat tests require more setup which we find easier to manage via Docker Compose.

Here are the contents of the build environment task. For simplicity, we have removed a few unrelated lines:

/**
 * Builds the Docker environment.
 *
 * @return \Robo\Task\Base\Exec[]
 *   An array of tasks.
 */
protected function buildEnvironment()
{
    $force = true;
    $tasks = [];
    $tasks[] = $this->taskFilesystemStack()
        ->copy('.travis/docker-compose.yml', 'docker-compose.yml', $force);
    $tasks[] = $this->taskExec('docker-compose pull --parallel');
    $tasks[] = $this->taskExec('docker-compose up -d');
    return $tasks;
}

The above task uses this docker-compose.yml file to build the environment.

Generating and visualizing coverage reports

Travis CI does not support storing artifacts like CircleCI does. Therefore, we need to use a third-party service to host them. Travis documentation suggests either uploading them to an Amazon S3 bucket or using Coveralls, a hosted analysis tool. We chose the latter because it posts a summary in each pull request with a link to the full coverage report.
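
For context, the coverage data has to be generated before it can be uploaded. A hypothetical sketch of a Robo task doing both - the paths and the php-coveralls binary are assumptions, not taken from the demo repository:

/**
 * A hypothetical sketch: run PHPUnit with coverage and push the result to
 * Coveralls via php-coveralls. Paths and the php-coveralls dependency are
 * assumptions, not taken from the demo repository.
 *
 * @return \Robo\Result
 *   The result of the collection of tasks.
 */
public function jobRunUnitTestsWithCoverage()
{
    $collection = $this->collectionBuilder();
    // Run PHPUnit and write a Clover XML coverage report.
    $collection->addTask(
        $this->taskExec('vendor/bin/phpunit --coverage-clover build/logs/clover.xml')
    );
    // Upload the report to Coveralls.
    $collection->addTask(
        $this->taskExec('vendor/bin/php-coveralls --coverage_clover=build/logs/clover.xml -v')
    );
    return $collection->run();
}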

Setting up Coveralls is straightforward. Start by opening https://coveralls.io and then, after authenticating with your GitHub account, use their browser to find and connect to a repository, like this:

Coveralls add repository

Next, it is recommended to review the repository settings so we can customize the developer experience:

Coveralls settings

With that in place, new pull requests will show a status message with a one-line summary of the coverage report, plus a link to the full details:

Coveralls pull request

Finally, when we click on Details, we see the following coverage report:

Coveralls report

A comparison to CircleCI

CircleCI can do all that Travis CI does with less setup. For example, coverage reports and Behat screenshots can be stored as job artifacts and visualized at the CircleCI dashboard. Additionally, CircleCI’s Command Line Interface gives a chance to developers to debug jobs locally.

Travis CI shines on flexibility: for example, only the Behat job uses Docker Compose to build the environment, while the rest of the jobs use the Trusty image. Additionally, there is a huge number of articles and documentation pages, which you will surely find helpful when tweaking the jobs to fit your team's needs.

If you liked Travis CI, check out this installer to get started quickly in your Drupal 8 project.

What next?

We aren’t sure about which tool to pick for our next article in this series on CI tools for Drupal 8. Do you have a preference? Do you have feedback on what you’ve found relevant about this article? Please let us know by posting a comment.

Acknowledgements

Apr 04 2018
Apr 04

The drupal-project repository is quickly becoming the de facto starter for all Drupal 8 projects. So how can you quickly spin up a new site with Composer and drupal-project? How do you take drupal-project and customize it to suit your particular needs? And, how do you leverage post-install tasks to keep yourself DRY? This February I gave a talk at DrupalCamp Florida where I got into all of these questions. Get clicking to see my answers and watch the video of my talk.
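
To give a taste of that last idea before you watch, here is a minimal sketch of a Composer post-install task in the spirit of drupal-project's script handler. The class name, namespace and paths are illustrative assumptions, not the project's actual code:

<?php

namespace MyProject\composer;

use Composer\Script\Event;

class PostInstallTasks
{
    /**
     * Wired up from the "post-install-cmd" entry of the "scripts" section
     * in composer.json, e.g.:
     *   "post-install-cmd": ["MyProject\\composer\\PostInstallTasks::run"]
     */
    public static function run(Event $event)
    {
        $io = $event->getIO();
        // Keep repetitive setup in one place (DRY): create writable
        // directories, copy example settings files, and so on.
        if (!is_dir('web/sites/default/files')) {
            mkdir('web/sites/default/files', 0775, TRUE);
            $io->write('Created web/sites/default/files');
        }
    }
}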

In putting together my DrupalCamp Florida talk, I wanted to help people save time, follow best practices, make their own development experience more enjoyable, and look cool in the process. (Super important.) You’ll probably get a lot out of watching the video of my session if:

  • You've been wanting to try out Composer

  • You already use Composer but want to learn more about the best practices

  • You frequently spin up new sites, whether for module development, agency work, or for funzies

  • You're thinking about using one of the popular distributions (e.g., Lightning) and wonder if there's a better option

  • You are a Composer expert, so you can tell me all about my mistakes ;)
     

[embedded content]

Thanks for watching. If you have any questions, you can hit me up at [email protected] or on d.o.

Apr 04 2018
Apr 04
Check Out the New Page Builder in Drupal 8.5!

Earlier on the OSTraining blog, Steve Burge gave an introduction to the new Layout Builder in Drupal 8.

Many users have been eagerly waiting for this module and it was released in version 8.5.

In this tutorial, you will take a further look at how to work with this module. You will see how to use the Layout Builder to configure content types and nodes.

This module is one of the major new changes, and I feel strongly that it will really improve the usability of Drupal. Let’s try it out!

Step #1. Enable the Layout Builder Module

  • Click Extend.
  • Scroll down to the CORE (EXPERIMENTAL) section.
  • Check the Layout Builder module.
  • Click Install.

Click Install

  • The Layout Discovery module will be enabled as a requirement.
  • Click Continue.

Click Continue
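
As a side note, if you prefer to enable modules from code (in a deployment script or an update hook, say), here is a minimal sketch using core's module installer service as an alternative to the UI steps above:

<?php
// Enable Layout Builder programmatically. The module installer resolves
// and enables the Layout Discovery dependency automatically.
/** @var \Drupal\Core\Extension\ModuleInstallerInterface $installer */
$installer = \Drupal::service('module_installer');
$installer->install(['layout_builder']);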

Step #2. Create Content

For the purpose of this tutorial, I’m going to generate five articles with the Devel module. This is a handy module that will help you with development tasks.

  • Install the Devel module.
  • Enable both the Devel and Devel Generate modules.
  • Click Install.

Click Install

  • Click Configuration > Generate content in order to generate five articles.
  • Click Generate.

Click Generate

  • Click Content and you’ll see the generated articles.

Step #3. Configure the Layout of the Article Content Type

  • Click Structure > Content types.
  • To the right of the Article content type click the dropdown list.
  • Choose Manage display.

Choose Manage display

You’ll be presented with a different interface than the one you’re used to.

  • Click the Manage layout button.

Click Manage layout

This drag-and-drop interface will allow you to configure the layout of all nodes of the Content type Article. Please notice that the layout capabilities refer to the Content itself (i.e. the Content region).

This drag-and-drop interface will allow you to configure the layout of all nodes

  • Click the Add Section link at the top.

You will see a slide menu on the right with different layout options.

  • Choose one of them, for example, the 3 “equal” columns layout.

Choose 3 equal columns

You’ll see the newly created layout surrounded by blue dashed lines:

You’ll see the newly created layout surrounded by blue dashed lines
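The layouts in that slide-out menu are Layout plugins provided through the Layout Discovery module, so a custom module can add its own options. Here is a hedged sketch of an annotated Layout plugin; the module name, plugin ID and region names are assumptions, and the module would also have to provide the referenced three-column.html.twig template that prints the regions:

<?php

namespace Drupal\mymodule\Plugin\Layout;

use Drupal\Core\Layout\LayoutDefault;

/**
 * A hypothetical three-column layout made available to Layout Builder.
 *
 * @Layout(
 *   id = "mymodule_three_column",
 *   label = @Translation("Three column (custom)"),
 *   category = @Translation("My layouts"),
 *   template = "templates/three-column",
 *   regions = {
 *     "left" = { "label" = @Translation("Left") },
 *     "middle" = { "label" = @Translation("Middle") },
 *     "right" = { "label" = @Translation("Right") }
 *   }
 * )
 */
class ThreeColumn extends LayoutDefault {
}

Simple layouts like this one can also be declared in a mymodule.layouts.yml file instead of a PHP class; the plugin class route becomes useful once a layout needs its own configuration options.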

  • You can click on each of the Add Block links to place Drupal’s default and custom blocks within these new layout regions. For example, you can add a block specifying the language of the content:

You can click on each of the Add Block links

  • Drag and drop each of the fields of the Content type inside each one of the layout regions.
  • For example, place the image on the left column, the body text on the middle column, the tags on the right column and the comments in the bottom part (footer) of this particular section:

Place the image on the left column, the body text on the middle column, the tags on the right column and the comments in the bottom part

  • When you’re finished with the configuration for your desired layout, scroll to the top of the page.
  • Click Save Layout.
  • If you leave some part empty, it won’t display in the node:

If you leave some part empty, it won’t display in the node

  • All your articles have the same layout now. Take a look at them!

All your articles have the same layout now

Step #4. Configure the Layout of a Single Node

  • Once again, click Structure > Content types.
  • Choose Manage display from the drop-down of the article content type.
  • Check the Allow each content item to have its layout customized checkbox.
  • Click Save.

Click Save
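That checkbox is stored on the article's default entity view display configuration, so it can also be flipped in code, for example during deployment. Here is a hedged sketch; I am assuming the experimental Layout Builder records this as an allow_custom third-party setting on the display, so double-check the key against your core version:

use Drupal\Core\Entity\Entity\EntityViewDisplay;

// Allow per-node layout overrides on the article's default view display.
// The 'allow_custom' third-party setting key is an assumption; verify it
// against the Layout Builder version you are running.
$display = EntityViewDisplay::load('node.article.default');
$display->setThirdPartySetting('layout_builder', 'allow_custom', TRUE);
$display->save();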

  • Click Content.
  • Choose one of your articles.
  • You’ll see a new tab above the content called Layout: 

You’ll see a new tab above the content called Layout

  • Click this tab. You’ll be presented with the same interface.

The process is the same as the one I described above. The only difference is that you’re configuring the layout just for this article. You can add blocks or even an additional image the same way as explained before.

You can add blocks or even an additional image

These layout capabilities make Drupal even more accessible for site builders who don’t know how to override templates, and they can speed up development time for everyone else.

As already stated, this module is still experimental. Please don’t use it on your production sites yet. Play with it, and send your feedback to Drupal.org if you find any bugs.

I hope you liked reading this tutorial. Thank you and please leave your comments below!


What's Next

Learn how to build great websites with Drupal by reading the latest version of the "Drupal 8 Explained" book by Steve Burge. Join the OSTraining Book Club and get instant access to the book's PDF.


About the author

Jorge has lived in Ecuador and Germany and is now back in his homeland, Colombia. He spends his time translating from English and German into Spanish, and he enjoys playing with Drupal and other open source content management systems and technologies.
Apr 04 2018
Apr 04

I was not planning to go to DrupalCon this year because of everything going on at the company, but with a little delegation, I will be able to make it.

I would not like to miss this one, to be honest. So here is what I am looking forward to in Nashville.

Meeting new people

DrupalCons in the US are the biggest Drupal events, and even if you have been an active community member for 11 years, as I have, you still see a lot of new faces.

Developers are generally more on the introverted side, so you don't see as much intentional networking as at some other events. But don't hesitate to ask the person sitting next to you, while you wait for a session to start, where they are from and what they do. You would be surprised what can come of that simple conversation.

Having intriguing conversations

Some people say they come for the sessions. I think that is the wrong reason to be there. I have a 10-hour flight to America, and I will use that idle time to watch recorded sessions; when I am at the conference, I want to engage in conversations.

Even if you do come for the sessions, don't be passive. Ask questions after each session, and ask other people how they perceived the topics you just heard about.

Nashville

 

Mindset shifting

My first DrupalCon, in Copenhagen in 2010, was a tipping point in my career. At that conference, I realised how big Drupal is and how many opportunities it offers. If I had not visited DrupalCon in 2010, there would be no AGILEDROP.

With all the new people you meet and the conversations you have, something will stick, and you will come home inspired and full of enthusiasm to take the next big step.

Get swag

I'm just joking with this one. But yes, go ahead, take as many t-shirts and pens as you can carry. 

Let's meet there

I am looking forward to meeting new folks and catching up with people I have met before. This blog post is an open invitation to everyone who would like to know more about my company and me. Please use the link below to propose a time to meet, and we can take it from there.

Schedule a meeting

See y'all in Nashville!

Photo by Joshua Ness on Unsplash
