Sep 24 2018

I moved over to DDEV for my local development stack back in February. One of my favorite things is the ease of using Xdebug. You can configure Xdebug to always be enabled, or turn it on and off as needed (my preferred method.) When Xdebug is enabled, it is also enabled for any PHP scripts executed over the command line. That means you can debug your Drush or Drupal Console scripts with ease!

This article is based on using Xdebug within PhpStorm, as it is my primary IDE.


When using PhpStorm with any non-local stack you will have to set up a path mapping. This lets PhpStorm understand how files inside the container correspond to the files in your PhpStorm project. For example, DDEV serves files from /var/www/html within its containers, but the project files actually live at /Users/myuser/Sites/awesomeproject on your machine.

If you haven't yet, set up the configuration for this mapping. Generally, I never do this upfront; I wait until the first time I use Xdebug over a web request and PhpStorm prompts me. In this configuration, you provide a server name. I set this to the DDEV domain for my project.


Now, to get Xdebug working over the CLI, we need to ensure a specific environment variable is present: PHP_IDE_CONFIG. It contains the server name to use, which PhpStorm maps to the servers configured for your project.

An example value would be

PHP_IDE_CONFIG=serverName=myproject.ddev.local
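
For a one-off run, you could export the variable manually inside the web container before running your script. Here is a rough sketch; the server name must match the one configured in PhpStorm, and drush cr is just an example command to step through.

# Shell into the DDEV web container, export the variable for this session,
# then run the CLI script you want to debug.
ddev ssh
export PHP_IDE_CONFIG="serverName=myproject.ddev.local"
drush cr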

Exporting this every time you want to debug a PHP script over the command line gets tedious, though. To ensure the variable is always present, I use an environment override file for DDEV. To do this, create a docker-compose.env.yaml file in your .ddev directory. DDEV will load this alongside its main Docker Compose file and merge it in.

version: '3.6'

services:
  web:
    environment:
    - PHP_IDE_CONFIG=serverName=myproject.ddev.local

And now you can have PHP scripts over the command line trigger step debugging via Xdebug!

Sep 22 2018

If you want to have multiple Solr indexes using Search API, you need a core for each index instance. For my local development stack, I use DDEV. The documentation has a basic example for setting up a Solr service, but I had quite the fun time figuring out how to ensure multiple cores get created.

After digging into the Solr Docker image, I discovered you can override the container's command and pass additional commands as well. I found the issue "Create multiple collections at startup" from Sep 2017, which summed up my desired outcome. You just need to set this as the command for your container to generate additional cores:

bash -e -c "precreate-core collection1; precreate-core collection2; solr-foreground"

This precreates collection1 and collection2, and you can list however many cores you need.

To add Solr with multiple cores to your DDEV project, here is the docker-compose.solr.yaml you can add:

version: '3.6'

services:
  solr:
    container_name: ddev-${DDEV_SITENAME}-solr
    image: solr:6.6
    command: 'bash -e -c "precreate-core collection1; precreate-core collection2; solr-foreground"'
    restart: "no"
    ports:
      - 8983
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8983
    volumes:
      - "./solr:/solr-conf"
  web:
    links:
      - solr:$DDEV_HOSTNAME
      # Platform.sh hostname alias
      - solr:solr.internal
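
Once DDEV restarts the service, a quick way to confirm both cores exist is Solr's core admin API. The hostname and port below are assumptions based on the HTTP_EXPOSE setting above, so adjust them for your project.

# Ask Solr for the status of all cores; collection1 and collection2 should be listed.
curl "http://myproject.ddev.local:8983/solr/admin/cores?action=STATUS&wt=json"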

Sep 17 2018

At Drupal Europe, Dries announced the release cycle and end of life for Drupal's current and next version. Spoiler alert: I am beyond excited, but I wish the timeline could be expedited. More on that to follow.

Here is a quick breakdown:

  • Drupal 8 uses Symfony 3. Symfony 3 will be at its end-of-life at the end of 2021, forcing Drupal 8 into end-of-life
  • Drupal 9 will use Symfony 4 or 5 and be released in 2020 (a year before end-of-life.)
  • Drupal 7 will also end-of-life at the end of 2021, matching Drupal 8, simplifying security and release management.
Drupal 7, 8, 9 End of Life timeline

Timeline copied and cropped from https://dri.es/drupal-7-8-and-9

I feel some will use this as an excuse to bash and argue against our adoption of Symfony within Drupal core. Instead, I see it as a blessing. It forces Drupal to adopt new technology and forces our hosting platforms and providers to do the same.

At a minimum, here are the aging technologies we are forced to support:

MySQL 5.5.3/MariaDB 5.5.20/Percona Server 5.5.8

MySQL 5.6 alone provides many improvements over 5.5. Yes, you can use Drupal with MySQL 5.6, but the core product itself cannot harness any of its benefits. Or, what about MySQL 5.7, which includes support for JSON fields? In the 8.6 release, support for MySQL 8 was added -- but the only benefit is performance gains, not new technology.

PHP 5.5.9+ and 5.6

This is one of my biggest pain points. If you look at Drupal's PHP requirements page, we support 5.5.9+, 5.6, 7.0, 7.1, and 7.2. However, Drupal 8 will drop support for PHP 5.5 and 5.6 on March 6, 2019. PHP 5.5 reached end-of-life two years ago and 5.6 will at the end of 2018. Not to mention, 7.0 reaches end-of-life before 5.6 does!

We need modern technology in our stacks.

I am aware that hosts were slow to adopt PHP 7. Know why? Because our software was slow to adopt it. We never pushed them to update their technology stacks. Symfony 4 requires PHP 7.1.3 or higher. Meanwhile, a stable Symfony 5 is planned for release in Nov 2019 (one year before the Drupal 9 release.) I wasn't able to find information on its minimum PHP requirement, but I would assume it is at least 7.2, which means 7.3 will have full support.

I am excited for 7.3, like beyond excited. Why? Because garbage collection is severely improved. And Drupal loves to create a lot of objects and trigger a garbage collection during a request. For example, in Drupal 7 we had to disable it during a request to improve performance (was worth the tradeoffs.) Here's a link to the PR which hit 7.3.0beta3 and looks to improve the process by 5x its current speeds.

Having MySQL 5.7 would allow us to fully use JSON fields. This would be immensely useful in Drupal Commerce.
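
To make that concrete, here is a rough sketch of what a JSON column enables in MySQL 5.7+. The table and attribute names are purely illustrative, not anything from Drupal core or Commerce.

-- Illustrative only: store free-form attributes in a JSON column.
CREATE TABLE product_attributes (
  product_id INT UNSIGNED NOT NULL PRIMARY KEY,
  attributes JSON NOT NULL
);

-- Query into the JSON document without a dedicated column per attribute.
SELECT product_id,
       JSON_UNQUOTE(JSON_EXTRACT(attributes, '$.color')) AS color
FROM product_attributes
WHERE JSON_UNQUOTE(JSON_EXTRACT(attributes, '$.size')) = 'large';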

And now we wait (well, don't wait, Contribute!)

I wish we could make this process happen earlier. But that would be pretty unreasonable. There is a lot to accomplish in Drupal core to make this happen. The release managers and core maintainers have done an amazing job at managing Drupal 8 and shipping stable releases. They now have had a whole additional release branch and process thrown at them.

I encourage you to embrace this change and find ways to contribute to the 9.0.x branch, which might be easier than you think. Part of the 9.0.x effort is removing deprecated code (which is pretty easy to find.) I'm not sure if a plan has been cooked up yet, but there are issues on the 9.x branch https://www.drupal.org/project/issues/drupal?version=9.x

Jul 06 2018

Today I was working on a custom Drupal 8 form where I needed an option to purge existing entities and their references on a parent entity before running an import. It seemed pretty straightforward until I saw "ghost" values persisting on the parent entity's inline entity form. Here's my journey down the rabbit hole to fix broken entity reference values.

The first thing I did was pretty straightforward. It's in a batch, so I grabbed a few entities and just deleted them, hoping magic would happen.

$limit = 25;   
$price_list_item_ids = array_slice($price_list->getItemsIds(), 0, $limit);
$price_list_items = $price_list_item_storage->loadMultiple($price_list_item_ids);
$price_list_item_storage->delete($price_list_items);

That didn't work. When I would view the price list the references still existed, albeit broken.

So then I tried to run the filter empty items method, thinking "well, the value is empty." Fingers crossed, I did the following:

$price_list->get('items')->filterEmptyItems();

No dice. So I dug in to see why this wasn't working. The \Drupal\Core\Field\FieldItemList::filterEmptyItems method invokes \Drupal\Core\TypedData\Plugin\DataType\ItemList::filter and checks if the field item reports that it has an empty value.

  /**
   * {@inheritdoc}
   */
  public function filterEmptyItems() {
    $this->filter(function ($item) {
      return !$item->isEmpty();
    });
    return $this;
  }

Well, it should be empty, right? WRONG AGAIN. The EntityReferenceItem class doesn't care if the reference is valid or not, only if it has values in its properties. In my case, there was always a target_id set and sometimes $this->entity was populated from a previous access.

  /**
   * {@inheritdoc}
   */
  public function isEmpty() {
    // Avoid loading the entity by first checking the 'target_id'.
    if ($this->target_id !== NULL) {
      return FALSE;
    }
    if ($this->entity && $this->entity instanceof EntityInterface) {
      return FALSE;
    }
    return TRUE;
  }

So I took another approach. I checked to see if validating the field would return validation constraints and let me remove field values that way. Running validate gave me a list of all the broken references... except it was nearly impossible to traverse them and discover which field value delta was incorrect.

$price_list->get('items')->validate();

So then I backtracked a bit. I noticed that the filter method for the item list was public. The filterEmptyItems method wasn't working for me, so why not roll my own! And I did. And it failed.

$items->filter(function (EntityReferenceItem $item) {
  return !$item->validate()->count();
});

Why? Because the ValidReference constraint is on the entity reference list class and not on the entity reference item class itself. There were never any violations at the field value level, only for the entire list of values.

Since I couldn't trust validation, I could trust that the computed entity property would be NULL when the value was passed through the entity adapter. And that's how I got to my final version, which removes all invalid entity reference values:

// The normal filter on empty items does not work, because entity
// reference only cares if the target_id is set, not that it is a viable
// reference. This is only checked on the constraints. But constraints
// do not provide enough data. So we use a custom filter.
$price_list->get('items')->filter(function (EntityReferenceItem $item) {
  return $item->entity !== NULL;
});
$price_list->save();

Hopefully, this aids someone else going down a unique rabbit hole of programmatically removing entities and mending references.

May 13 2018

I have been spending the month of May cleaning out my basement office. I have found a treasure trove of old papers and random things bringing up memories and other thoughts, just like my last blog post on an organization's values and the culture it creates. This time I found two legal pads which had my notes and planning for my first Drupal Meetup talk and first camp session.


My first talk about Drupal was at the Milwaukee Meetup. I gave a talk explaining the differences between the Omega theme's versions 3 and 4, which I decided would be like comparing apples to oranges.

I had just started my own Meetup in Kenosha and had not yet made it up to Milwaukee. I had been meaning to but failed to commit. Then one day David Snopek sent out an email saying the speaker for that month couldn't make it and asked if anyone wanted to talk about building themes with Omega. All of our sites were built using Omega 3 or Omega 4, so I decided to jump in and offer to talk. Before that talk, I had only presented briefly on Drupal at my own meetup, mostly amongst friends.

The slides are still available on SlideShare: https://www.slideshare.net/MattGlaman/gettin-responsive-using-omega-3-a…


A few months later I was encouraged by Mike Herchel to propose a session for Florida DrupalCamp. At work, I had begun using Panels more and more on each site build over things like Context and Delta. It took a while, but I decided to talk about ways we were using Panels and custom layouts to build responsive and adaptive content.

Florida DrupalCamp 2014 was my second camp and my first time speaking at a Drupal conference. I also had some Mediacurrent employees in my session who showed off their work on Classy Panel Styles, a project to try and simplify customization of a pane's display.

My slides for that are still available on SlideShare as well: https://www.slideshare.net/MattGlaman/rockin-responsive-content-with-pa…

May 01 2018

This is a follow-up to my earlier blog post about enabling RESTful web services in Drupal 8. Jesus Olivas, one of the creators of Drupal Console, brought up that you can actually manage your RESTful endpoint plugin configurations directly from the command line!

If you are not familiar, Drupal Console is a command line utility built for managing Drupal 8. I use it mostly for code generation and some other basic management features, such as debugging the routing system. I talk about Drupal Console in the "The Drupal CLI" chapter of the Drupal 8 Development Cookbook. However, the tool itself could have a book all of its own for all of its features.

Get Drupal Console

Drupal Console needs to be added to your Drupal project, per project, using Composer. The command is below, with more details in the project documentation: https://docs.drupalconsole.com/en/getting/composer.html

composer require drupal/console:~1.0 \
--prefer-dist \
--optimize-autoloader

Here's the sample output from running the command on the Drupal 8 instance used in my last article.


Review the current state of RESTful endpoints

To list the currently available REST endpoint plugins, we just need to run the following command:

$ ./vendor/bin/drupal debug:rest

This will list all of the REST module plugins, their URL endpoints, and status.


Using this command we can also inspect individual plugins. Here's the output from inspecting the Node (content) resource plugin.

./vendor/bin/drupal debug:rest entity:node

Full documentation can be found at https://docs.drupalconsole.com/en/commands/debug-rest.html

We can then use two other commands to enable or disable endpoints.

Enabling a RESTful endpoint using Drupal Console

Now, let's use the command line to enable the Node (content) endpoint.

./vendor/bin/drupal rest:enable entity:node

It will prompt you to select methods that you would like to enable. In this example we'll focus on just enabling the GET resource, for consuming content elsewhere.

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Then you decide which formats you would like to accept. You can choose between JSON and XML out of the box with Drupal core. Other modules can provide more formats (YAML, CSV.)

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Selected Method GET

 commands.rest.enable.arguments.formats:
  [0] json
  [1] xml
 > json

And, then we choose an authentication provider. We will choose the cookie authentication provider.

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Selected Method GET

 commands.rest.enable.arguments.formats:
  [0] json
  [1] xml
 > json

commands.rest.enable.messages.selected-format json

 Available Authentication Providers:
  [0] basic_auth
  [1] cookie
 > cookie

And then, success!

Selected Authentication Providers cookie
 Rest ID "entity:node" enabled

Here's a screenshot of the entire command process.


Modifying a RESTful endpoint using Drupal Console

Now, there is no command to quickly edit the configuration for a RESTful endpoint plugin. To do so, we have to use the provided configuration management commands to discover the configuration name and edit it. So, we will use the debug:config command to list our options. It's a large output, so I recommend using grep to limit the results.

$ ./vendor/bin/drupal debug:config | grep "rest.resource"
 rest.resource.entity.node 

Now that we have the configuration name, rest.resource.entity.node, we can use the configuration edit command to modify it. Running the command will open a terminal based text editor to modify the configuration YAML.

./vendor/bin/drupal config:edit rest.resource.entity.node

When you save changes, they will be imported. Be warned, however: you could ruin your configuration if you are not sure about what you're doing.


It seems you can rerun the rest:enable command to add new HTTP methods to an endpoint. However, to remove a method you will need to edit the configuration. For example, let's say you enabled the DELETE method but want to remove it.
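
Inside config:edit, that roughly means deleting DELETE from the methods list. The exact structure depends on the resource's granularity setting, but with resource granularity it looks something like this sketch (not copied from a real site):

id: entity.node
plugin_id: 'entity:node'
granularity: resource
configuration:
  methods:
    - GET
    # Remove this line to disable the DELETE method:
    - DELETE
  formats:
    - json
  authentication:
    - cookie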

Apr 27 2018

Recently I ran a good ole composer update on a client project. This updated Drupal and PHPUnit. I ran our PHPUnit tests via PhpStorm and ran into an odd error:

Fatal error: Class 'PHPUnit_TextUI_ResultPrinter' not found in /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php on line 253

Call Stack:
    0.0019     446752   1. {main}() /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php:0

PHP Fatal error:  Class 'PHPUnit_TextUI_ResultPrinter' not found in /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php on line 253
PHP Stack trace:
PHP   1. {main}() /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php:0

If you didn't know, PhpStorm creates an ide-phpunit.php script that it invokes. Depending on the PHPUnit version, it generates the required code. In my case, it generated code for PHPUnit 4.8.36:

//load custom implementation of the PHPUnit_TextUI_ResultPrinter
class IDE_Base_PHPUnit_TextUI_ResultPrinter extends PHPUnit_TextUI_ResultPrinter
{
    /**
     * @param PHPUnit_Util_Printer $printer
     */
    function __construct($printer, $out)
    {
        parent::__construct($out);
        if (!is_null($printer) && $printer instanceof PHPUnit_TextUI_ResultPrinter) {
            $this->out = $printer->out;
            $this->outTarget = $printer->outTarget;
        }
    }

    protected function writeProgress($progress)
    {
        //ignore
    }
}

The quick fix: Let PhpStorm know that the PHPUnit version was updated. The error comes from the scripts PhpStorm generates for running PHPUnit. Once I clicked the refresh button, PHPUnit was recognized as 6.5.8, and the error was resolved.

Apr 26 2018

Drupal 8 ships with the RESTful Web Services module, which allows you to expose various API endpoints for interacting with your Drupal site. While the community is making a push for the JSON API module, I have found the core RESTful module to be pretty useful when I have custom endpoints or need to implement Remote Procedure Call (RPC) endpoints. However, using the module and enabling endpoints is a bit rough. So, let's cover that! Also note, this blog covers the content from the introduction of the Web Services chapter of the Drupal 8 Development Cookbook.

For the curious: RESTful stands for Representational State Transfer. It's about building APIs limited to the constraints of HTTP itself. It's stateless and does not require you to build a super custom client when working with the API. If you want to nerd out, read Roy Thomas Fielding's dissertation, which describes the REST architecture.

Let's get started!


So, there is one major problem with the core RESTful Web Services module. It has no user interface. Yep, that's right: a Drupal component without a user interface out of the box. I didn't believe it when I wrote my book the first time, nor the second. Luckily, there is a module for that (the Drupal way.) You can either manually edit config through the means of your choice, or grab the REST UI module. The REST UI module provides a user interface to enable and modify the API endpoints provided by the RESTful Web Services module.

First things first, we need to add the REST UI module to our Drupal site.

cd /path/to/drupal8
composer require drupal/restui

If you are new to Composer, here's an example screenshot of the output from when I ran the command.


Now that the module has been added to Drupal, let's go install the modules. Go on and log into your Drupal site. Then head over to Extend via the administrative toolbar and install the following Web services modules: Serialization, RESTful Web Services, and REST UI. Although not needed, I'm also going to install HTTP Basic Authentication to show off what it looks like to configure authentication providers (covered properly in the book.)


Click on Install to get the modules all settled into Drupal and available. Once you see the green message "4 modules have been enabled: HTTP Basic Authentication, RESTful Web Services, Serialization, REST UI." we are good to move on to the next step!

Configuration: enabling and modifying endpoints

Yay! Now that we have the modules installed, we can begin to get all decoupled, or progressively decoupled. Go to Configuration and click on REST under Web Services to configure the available endpoints. To expose content (nodes) over the API, click on Enable for the Content row.


With the endpoint enabled, it must be configured. Check the GET method checkbox to allow GET requests. Then, check the json checkbox so that data can be returned as JSON. All endpoints require a selected authentication provider. Check the cookie checkbox, and then save it. This gives us an endpoint to read content that returns a JSON payload.


Go ahead and click Save configuration to finish enabling the endpoint. Also note, any RESTful resource endpoint enabled will use the same create, update, delete, and view permissions that have already been configured for the entity type. In order to allow anonymous access over GET for content, the anonymous user role needs the usual permission to view published content.

Testing it out!

Give it a test run! Using cURL on the command line, a piece of content can now be retrieved using the RESTful endpoint. You must add ?_format=json to the node's path to ensure that the proper format is returned.

Here's an example command

curl http://drupal8.ddev.local/node/1?_format=json

And its response (super truncated for space.)

{
    "nid": [
        {
            "value": 1
        }
    ],
    "uuid": [
        {
            "value": "1b92ceda-135e-4d7e-9861-bbe9a3ae1042"
        }
    ],
    "title": [
        {
            "value": "This is so stateless"
        }
    ],
    "uid": [
        {
            "target_id": 1,
            "target_type": "user",
            "target_uuid": "421f721e-d342-4505-b1b4-57849d480ace",
            "url": "\/user\/1"
        }
    ],
    "created": [
        {
            "value": "2018-04-26T02:44:54+00:00",
            "format": "Y-m-d\\TH:i:sP"
        }
    ],
    "changed": [
        {
            "value": "2018-04-26T02:45:23+00:00",
            "format": "Y-m-d\\TH:i:sP"
        }
    ],
    "body": [
        {
            "value": "\u003Cp\u003EIn my experience, there is no such thing as luck. What?! Hey, Luke! May the Force be with you. What good is a reward if you ain\u0027t around to use it? Besides, attacking that battle station ain\u0027t my idea of courage. It\u0027s more like\u2026suicide.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERemember, a Jedi can feel the Force flowing through him. The Force is strong with this one. I have you now.\u0026nbsp;\u003Cstrong\u003EHey, Luke!\u003C\/strong\u003E\u003Cem\u003EMay the Force be with you.\u003C\/em\u003E\u0026nbsp;He is here.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force. He is here. Ye-ha! Ye-ha! I have traced the Rebel spies to her. Now she is my only link to finding their secret base.\u003C\/p\u003E\r\n",
            "format": "full_html",
            "processed": "\u003Cp\u003EIn my experience, there is no such thing as luck. What?! Hey, Luke! May the Force be with you. What good is a reward if you ain\u0027t around to use it? Besides, attacking that battle station ain\u0027t my idea of courage. It\u0027s more like\u2026suicide.\u003C\/p\u003E\n\n\u003Cp\u003ERemember, a Jedi can feel the Force flowing through him. The Force is strong with this one. I have you now.\u00a0\u003Cstrong\u003EHey, Luke!\u003C\/strong\u003E\u003Cem\u003EMay the Force be with you.\u003C\/em\u003E\u00a0He is here.\u003C\/p\u003E\n\n\u003Ch2\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.\u003C\/h2\u003E\n\n\u003Cp\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force. He is here. Ye-ha! Ye-ha! I have traced the Rebel spies to her. Now she is my only link to finding their secret base.\u003C\/p\u003E\n",
            "summary": ""
        }
    ]
}

Woo! We did it! You can now consume content from your Drupal site from anything your dreams desire.

Want to know more?

There is a whole chapter in the Drupal 8 Development Cookbook on Web Services which covers the core RESTful Web Services module: GET, POST, PATCH, using Views, and Authentication. It also gives a brief introduction to the JSON API module.

Apr 18 2018

DrupalCon is always something I look forward to, ever since attending my first one at DrupalCon Los Angeles 2015. As I wrote over a week ago, I drove down from Wisconsin with my wife and two boys to Nashville. We came down for the weekend before and stayed for the weekend after to do some touristing and vacationing. I tried to write one blog about DrupalCon but realized I couldn't really condense everything I had to say. So I plan on pushing out a few post-Nashville blogs.


My wife is a country music fan, so we for sure had to head on over to the Country Music Hall of Fame museum. Outside, engraved on the corner of the building, were some quotes from famous country musicians. The Hank Williams quote caught my eye.

You ask what makes our kind of music successful. I'll tell you. It can be explained in just one word. Sincerity.

The quote stuck with me. It helped set the tone of DrupalCon for me. While Hank may have been talking about music, I think it's a recipe that transcends just about anything. When we talk about Drupal, we always talk about the community. Right? Come for the code, stay for the community. That is our mantra. But, what makes us so different?

Maybe it's our community's sincerity. Overall I find the Drupal community to be pretty genuine, trustworthy and have a high level of integrity. In fact, these traits of the community are what ended up bringing me into the community and going Full Drupal™.

While working the Commerce Guys booth, we had a wide array of people come up and talk. I would say we had a split audience. About half wanted to learn about Drupal Commerce (new users), wanted an update (previous users), or were current users validating their architecture and other ideas. The other half, however, were community members coming up to say "thanks."

It was a morale boost to have people come up and say "thank you" for support in the Slack channel, the Drupal.org issue queues, or just our general work on Drupal Commerce and other contrib. Being in a distributed workplace and contributing to a distributed open source community can be wearing on the soul. Problems are harder to solve and communication can be misinterpreted. Regional DrupalCamps are great, especially for local networking. But DrupalCon is the one thing which can bring people from all over together for more than a few days.

I have come back motivated. I love working in open source. I love contributing to Drupal and Drupal Commerce. I love the fact my job is helping businesses solve the hardest parts of selling online via our platform. But, after a while, it seems intangible. Then I get to hear from our end users that are thankful for our help. And that is awesome. It elevates the work I do because it's not just to earn a paycheck and deliver some software artifact. It is making an impact bigger than myself. And I love that.

That is one reason I brought my family down to Nashville. Often times I'm sitting at home geeked about some contribution or some other Act of Awesome by our community, and I wanted my wife to experience it, too. "Look! I'm not completely crazy for getting on the computer at 5 am or staying up until midnight to work on this thing called a patch!"

Our community is so badass I just want to show it off to those who are missing out on the experience and the great connections within it.

You can read about how the Drupal community made an impact in my life and career in these two blogs

Apr 06 2018

It is that time of year again! It is DrupalCon time! Woooooo. Last year DrupalCon Baltimore saw 3,271 attendees, and I'm betting Nashville will bring in more (because, Nashville.) When this publishes and hits various feeds, I will be on the road and (hopefully) an hour into the eight-hour drive to Nashville with my family.


My wife and two kids joined me at DrupalCon New Orleans - and it was awesome. We enjoyed Frenchman Street, while avoiding Bourbon Street. Our oldest son, then four, loved having beignets and walking through the French Quarter. Cafe Beignet was a nightly affair. I doubt our youngest, only nine months at the time, remembers the trip much.

But! Now, that they are six and two, I bet they will have a blast in Nashville. My wife is a big country fan, too, so I know she will. There is a listening room near our lodging that is all ages friendly, and apparently has the best acoustics in town.

They won't be coming into the conference. But, I'm thankful that I get to bring them along and experience the food, music, and other things Nashville will bring. I spend a lot of my mornings and nights doing extra work, most often my open source "commitments" that fall outside of client time. So, getting to do this is pretty awesome.

A lot has changed in the two years since New Orleans. I hope that I'll have the chance to introduce my family to members of my Drupal family, all of whom make such a big impact in my day-to-day work.

I cannot wait to see everyone in Nashville!

Apr 02 2018

In prep for DrupalCon Nashville, I was working on the Drupal Commerce demo sites that we'll be showing off. They have been running in silent mode for some time and recently received an overhaul so they use our demo and out-of-the-box theme for Drupal Commerce, Belgrade. For one of the sites I received the following error: "To start over, you must empty your existing database and copy default.settings.php over settings.php." Wait, what? When I ran drush si I assumed it dropped all the tables. Apparently not.

Whenever I would use Drush to reinstall the website, I would get the following error message:


  • To start over, you must empty your existing database and copy default.settings.php over settings.php.
  • To upgrade an existing installation, proceed to the update script.
  • View your existing site.

Now, I was getting ready to pull my hair out. I set up our demos on a Drupal multisite via DrupalVM. All the other sites were installing fine, except this one. So I started debugging.

When I ran SHOW TABLES; in the MySQL command line... there were no tables. When I ran DROP DATABASE db_name... I got an error. Instead of everything being happy, I got an error like the following:

Error Dropping Database (Can't rmdir '.test\', errno: 17)

I have never seen that before. So I did what all proper software engineers do: go to Google, search the error, and see what the hell other people have done when they run into this issue. Luckily I came across a Stack Overflow question and answer: https://stackoverflow.com/questions/4584458/error-dropping-database-can…. Basically, this error happens when there are leftover files in the database directory. So I checked what was in the database directory (since there were no tables.)


[email protected]:/var/www/drupal$ sudo ls -la /var/lib/mysql/db_name/
total 40
drwxr-x---  2 mysql mysql 20480 Mar 30 18:35 .
drwxr-xr-x 10 mysql mysql  4096 Mar 30 18:20 ..
-rw-r-----  1 mysql mysql 65536 Nov 23 03:54 drupal_install_test.ibd

There was an InnoDB table file called drupal_install_test.ibd. From the MySQL reference

InnoDB's file-per-table tablespace feature provides a more flexible alternative, where each InnoDB table and its indexes are stored in a separate .ibd data file. Each such .ibd data file represents an individual tablespace. This feature is controlled by the innodb_file_per_table configuration option, which is enabled by default in MySQL 5.6.6 and higher.

So running a manual removal fixed everything.

sudo rm /var/lib/mysql/db_name/drupal_install_test.ibd

It turns out this table is part of the Drupal installation process, which checks that Drupal has all the required access to the database. I found the following in lib/Drupal/Core/Database/Install/Tasks.php, and it's also in Drupal 7, it seems. The problem is, though, that a bad install followed by a re-attempted install breaks badly.


  /**
   * Structure that describes each task to run.
   *
   * @var array
   *
   * Each value of the tasks array is an associative array defining the function
   * to call (optional) and any arguments to be passed to the function.
   */
  protected $tasks = [
    [
      'function'    => 'checkEngineVersion',
      'arguments'   => [],
    ],
    [
      'arguments'   => [
        'CREATE TABLE {drupal_install_test} (id int NULL)',
        'Drupal can use CREATE TABLE database commands.',
        'Failed to <strong>CREATE</strong> a test table on your database server with the command %query. The server reports the following message: %error.<p>Are you sure the configured username has the necessary permissions to create tables in the database?</p>',
        TRUE,
      ],
    ],
    [
      'arguments'   => [
        'INSERT INTO {drupal_install_test} (id) VALUES (1)',
        'Drupal can use INSERT database commands.',
        'Failed to <strong>INSERT</strong> a value into a test table on your database server. We tried inserting a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'UPDATE {drupal_install_test} SET id = 2',
        'Drupal can use UPDATE database commands.',
        'Failed to <strong>UPDATE</strong> a value in a test table on your database server. We tried updating a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'DELETE FROM {drupal_install_test}',
        'Drupal can use DELETE database commands.',
        'Failed to <strong>DELETE</strong> a value from a test table on your database server. We tried deleting a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'DROP TABLE {drupal_install_test}',
        'Drupal can use DROP TABLE database commands.',
        'Failed to <strong>DROP</strong> a test table from your database server. We tried dropping a table with the command %query and the server reported the following error %error.',
      ],
    ],
  ];

I have no idea how the site got into such a corrupt state - and installed. The only relevant issue I could find was https://www.drupal.org/project/drupal/issues/1017050.

Mar 30 2018

Back in February, I automated some of my content workflows. I use the Scheduler module to publish posts and have them automatically pushed into Buffer to be shared across my social networks. I'm attempting a new experiment once this node publishes. This should show up at my Medium account, https://medium.com/@mglaman.


I don't agree with using Medium as the canonical blogging platform for companies. But I know there are plenty of readers on Medium, and it provides its own virality. It also pushes content to platform readers that they may find relevant. So, this is my attempt to bridge to new readers.

The Medium API docs allow you to specify a canonical URL that is "The original home of this content, if it was originally published elsewhere." So, here goes nothing on this experiment!

Mar 17 2018

At the end of October 2017, I wrote about the new and upcoming changes to ContribKanban.com. I decided to migrate off of a MEAN stack and go to a more familiar (aka manageable) stack. I decided upon Drupal 8 to manage my backend. Drupal is what I do, and Drupal is amazing at modeling data. People can moan and whine - it handles data models like a boss. I decided to treat it as a "progressively" decoupled web application.

WTF is "Progressively Decoupled"

I caught myself at Florida DrupalCamp, MidCamp and other conversations calling the new ContribKanban.com a progressively decoupled application. Which made me wonder, what in the hell does that really even mean? Well, according to Dries back in 2015, this description seems to best fit how it all works:

where the skeleton of the page loads first, then expensive components such as "songs I listened to most in the last week" or "currently playing" are sent to the browser later and fill out placeholders.

Drupal 8 handles routing, data, and initial rendering. Then the client picks it up in the browser. And that is how ContribKanban.com works. Drupal is the data storage for Drupal.org API query configurations. When you visit a board, Drupal gets that information and bubbles it up to the client via the JavaScript API's drupalSettings. Then the client-side scripts take over and render fancy kanban boards off API results from Drupal.org.
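
In render array terms, the hand-off looks roughly like the following. Everything here (the controller method, the BoardInterface type, the getListsConfiguration() method, and the library name) is hypothetical and only illustrates the pattern of attaching settings via #attached.

  /**
   * Hypothetical board controller: renders a placeholder element and attaches
   * the board configuration to drupalSettings for the client-side scripts.
   */
  public function board(BoardInterface $board) {
    return [
      '#markup' => '<div id="board-app"></div>',
      '#attached' => [
        // Library (defined by the module) that loads the client-side app.
        'library' => ['contribkanban_boards/board-app'],
        // Ends up in the browser as drupalSettings.contribKanban.
        'drupalSettings' => [
          'contribKanban' => [
            'boardId' => $board->uuid(),
            'lists' => $board->getListsConfiguration(),
          ],
        ],
      ],
    ];
  }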

So, I guess it's a decoupled progressively decoupled application. Data comes from Drupal.org over RESTful interfaces and board configuration comes from the Drupal instance via its render and template system.

Makes sense, right?

Why I chose ReactJS

From October 17th through about the 26th I had a sprint of activity rebuilding ContribKanban on Drupal 8. I had built the entities for board and list storage, developed design patterns for representing the board configuration in drupalSettings, and had my JavaScript built out - the whole nine yards. In fact, I had even managed to kill the requirement of jQuery and have it nearly fully functional. While that made me feel like a badass for not needing to depend on jQuery and using VanillaJS, I hit a brick wall. I needed to support a few very specific requirements:

  • I needed to be able to allow users to filter by issue priority and issue type when viewing a board, causing the lists to re-render and filter by this constraint
  • I needed to be able to take a project node ID and translate it to the project title within a card
  • I needed to be able to take a user ID and translate it to their username when assigned to a card.

In AngularJS I knew exactly what to do. Use a for loop and then two directives. Boom. Done. In plain old JavaScript, I was not sure of the approach to take. I had read a lot of hubbub about Polymer and web components, especially thanks to content by Bryan Ollendyke. I had played around with Angular but didn't really love the next generation of its development (maybe blame TypeScript?) I knew the popular kids on the scene were React and Vue.js. One is developed by Facebook and has a huge community following, and the other seems to be React for those who want to flip the bird to corporate-sponsored open source projects*. Plus it appeared (and is true) that the Drupal core JavaScript Modernization Initiative would pick React and get started on improving Drupal's ability to decouple.

So I picked it back up and decided to move forward with React so I could solve my user interface woes. I blogged about some of my first experiences using ReactJS, which was pretty enjoyable. I'm happy with my choice.

* I know I probably offended a handful of readers there. But when I've asked people the difference between Vue.js and React, there have been no defining differences. You have JSX and Nuxt. I understand where Vue.js differs from Angular. But the micro differences between React and Vue.js seem very minuscule.

What's under the new hood?!

So! What's under the new hood? Now that I don't need to manage running a MEAN stack and can just do what I do best, I can focus on features and general improvements.

Manageable data model for boards and lists.

In the original version of ContribKanban, boards and lists were one giant JSON blob in MongoDB with some overrides in a JSON file for how the board should appear. There was no administrative user interface. When Parse.com went away it made my life a living hell. The project became stagnant and undeveloped. So this brings new life and management. yay!


Super duper custom boards and configurations

One of the best things about this switch is the fact it is a user interface for Drupal.org's API. An example of such a board can be found here: https://contribkanban.com/board/1. That board is for Lullabot's AMP projects on Drupal.org. It combines the module and theme issue queue into a single board.

Another example is the Drupal Core Novice board, https://contribkanban.com/board/DrupalCoreNovice. It contains issues for the currently active Drupal 8 branches that are tagged as Novice, but only of "normal" and "minor" priority.

User accounts for more awesome things

I also wanted to allow people to register user accounts so that there can be user-based data. One such thing is to display issues for projects that a user maintains, or at least the recently updated issues that they follow. Creating accounts also allows me to figure out privileges for creating custom boards and owning project boards. As a maintainer of Commerce Reports, I may want all issues to filter on a parent issue which I use to track goals for the next release.

This is an example of the "Your Issues" tab on your Drupal.org profile parsed into a kanban board when you have an account and provide your Drupal.org username.


A board of custom issues

One of my first AngularJS projects was a Chrome app to track all the issues we needed resolved for our SaaS product built on top of Drupal. Shawn McCabe also pinged me about a similar requirement: a way to track their "Looking for Work" board to give guided down-time contribution work at AcroMedia, Inc.

You add issue node IDs and create custom boards. It's a pretty easy way to build a sprint board without needing issue tags, especially if it's a private project.


Next steps

Next... I need to fix some lost functionality - such as browsing by branch. Browsing boards is lacking finesse, but analytics show most people enter the application via direct links rather than by discovering boards.

I would also like to add OAuth2 authentication. The idea would be to provide a browser extension that lets you authenticate to ContribKanban and add issues from Drupal.org to a node board inline. Similar to "following" an issue, it would let you add an issue to an existing board or create new boards.

I would also like to experiment with GitHub integration, as some projects use GitHub as well.

Mar 16 2018

Every software release needs release notes. End users need to be able to understand what it is that they are upgrading to and any associated risks, or risks mitigated by upgrading and receiving bug fixes. Across the board, proprietary and open source software projects either hit or miss on delivering decent release notes. During MidCamp I decided to help fix that problem for Drupal.org projects.

What makes good release notes?

Probably the first question is: what makes good release notes? It depends on the audience and the scale of the product. As an Android user, I'm used to my perpetual weekly "Bug fixes and improvements" message. As a developer, that kills me because I want to know. But my wife probably doesn't care or read the reason why there is an update; she just lets the auto-updater run.

Take a look at Spotify versus Buffer, for instance.


Spotify, being a music app, has a more general audience and a larger user base. Buffer is for social-media-savvy and marketing-oriented users; its base is also more niche. Release notes should educate the end user about what they are getting into and what problems will be fixed. Maybe your end audience does not care that much (Shopify), so you just maintain your internal release logs and keep an eternal generic release notes message up. Or you make a blend, like Buffer. Or you go all out with technical details and documentation like Drupal, WordPress, and others.

Making better Drupal module release notes

When a module rolls out a new release and the contents are just "some bugs fixed" or nothing at all, I get sad inside. I know most of us are maintaining Drupal modules on the side without true sponsorship or business-oriented motivation. But, as maintainers, our job is to ship dependable software. Dependable software doesn't mean just committing bug fixes and cutting releases. It means communicating those changes to our end users.

If you look at the Drupal 8.5.0 release notes, you see a lot of detail. There has to be! It's Drupal core! Drupal core has release managers and product managers that can work to generate these notes that communicate important updates, major bugs fixed or any features added, and any known issues. This allows for proper evaluation of an incoming release and the impact it will make on operations. It's a large product with a technical and management level audience.

Now, let's look at another module that has nearly 150,000 sites using it - Superfish. The 8.x-1.2 and 8.x-1.1 release notes look like a quick manual copy and paste from the commit log. Since most module maintainers follow the standard commit message practices, I could have just reviewed the commit log myself. I have no context on what those issues were for or whether they are bugs or features. Please note: this is no knock on the maintainer mehrpadin. I am just making an observation and stating my motivation. Release notes take time to curate, and most open source maintainers use their spare time to manage issue queues. It's why we have a documentation problem as well, but that's a whole other topic.

Next, let's check out the Drupal Commerce release notes, which have had two format changes since 2.2. If you look at the 8.x-2.2 release notes, we did the same as Superfish - a blob of text which was our commit log and requires manual work to learn more about the included changes. In 8.x-2.4 there was a change: changes link to their proper issues and we link to the users who helped. There's a tool for this, and I'll cover that next. In 8.x-2.5 we have an even better format, in my opinion. Not only are issues linked, but they are grouped by the type of fix: bug, feature, task. An end user of Drupal Commerce can quickly evaluate if the release is mostly features or bug fixes and determine possible impact. There's a tool for this, too - the one I worked on at MidCamp.


There are tools for that

Through means now lost to memory, I came across the Git Release Notes for Drush project. It takes the commit log of a Drupal.org module and turns it into release notes which automatically link to the issues and users. The drush rn command was used to generate the Drupal Commerce 8.x-2.4 release notes. I also use it heavily on all of my other modules. However, after a few years of use, I wanted something a little bit more. I didn't want to keep using a Drush command (especially since this command pre-dates Drush 9, going back to the Drupal 6 days.)
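
If memory serves, usage from inside a module's git checkout looks roughly like this; the tag names are just an example.

# Generate release notes for everything committed between two tags.
drush rn 8.x-2.3 8.x-2.4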

I have a Drupal.org CLI tool that I use for my day-to-day maintainership and contribution items. While at MidCamp I hacked on a drupalorg maintainer:release-notes command. It's heavily based on the GRN Drush command. It compares tags, takes the commits, does some processing, and makes pretty release notes.


The command provides some additional features. I want to thank Mike Lutz who gave some ideas! Having issues broken apart by their issue type makes it more readable and understandable. I also wanted to call out our contributors, because they are essential to our project.


The end result provides a nice release note summary. I would like to find even more improvements that give Drupal.org project maintainers the ability to create insightful release notes without the management overhead.

You can find the project at https://github.com/mglaman/drupalorg-cli. Currently, it's only available via a global Composer install, which can cause conflicts with Drush. I'm working on providing a Phar for download.

Mar 11 2018

At DrupalCon Dublin I caught Fabianx's presentation on streaming and other awesome performance techniques. His presentation finally explained to me how BigPipe works. It also made me aware that, in Drupal, we have mechanisms to run expensive procedures after output has been flushed to the browser. That means the end user sees all their markup while PHP can chug along doing some work without slowing the page down.

What is the Kernel::TERMINATE event

The Kernel::TERMINATE event is provided by the Symfony HttpKernel. As documented in Symfony\Component\HttpKernel\KernelEvents:

The TERMINATE event occurs once a response was sent. This event allows you to run expensive post-response jobs.


Drupal core uses this event, usually to write caches.

  • path_subscriber writes the alias cache.
  • request_close_subscriber writes the module handler's cache.
  • automated_cron.subscriber runs cron.
  • user_last_access_subscriber updates the user's last access timestamp.

If you worked with Drupal 7, consider this to be like drupal_page_footer()

A practical real-life (client) use case

I use this in a client project to work with Salesforce. The Salesforce API is pretty slow (no, it’s extremely slow.) Especially when you need to do a lot of operations:

  • create an account
  • create contact references
  • generate an opportunity
  • provide the line items for that opportunity

You don't want the customer to sit on a forever-loading page while they wait for the checkout complete page to load. People expect their eCommerce experience to be fast. The Kernel::TERMINATE event is a very special trick. It allows you to make remote API calls or run other lengthy procedures without slowing down the user.

The checkout process we have ends up working something like this, thanks to the Kernel::TERMINATE event.

  1. The customer enters payment information.
  2. Upon validation, the user is brought to a provisioning step
  3. API requests are made to provision their product account and subscription. The order is queued for Salesforce
  4. Upon success, the user is brought to the checkout complete page
  5. The Customer sees their confirmation page, account information, and a link to log into the product.
  6. The server begins running Salesforce API integration calls on the same PHP FPM worker that flushed the content for the checkout complete page.

Watching the logs stream the entire event is pretty amazing. The end user experience is only blocked on business critical validations and not internal business operations (CRM and reporting.)

Here’s how we make the magic happen

We have a service called provision_handler. This is a class which helps us run various provisioning calls for each scenario. The service allows us to set the order which should be queued and provisioned after the checkout complete page has been flushed to the end user.

  /**
   * Sets the queued order to run on Kernel::TERMINATE.
   */
  public function setQueuedOrder(OrderInterface $order) {
    $this->queuedOrder = $order;
  }

  /**
   * Get the queued order to finish provisioning.
   */
  public function getQueuedOrder() {
    return $this->queuedOrder;
  }

The queued order is just a property on our provision handler. At no time during any single request will we be handling multiple orders - a known operational constraint. And thanks to the way PHP works, we know this data remains stable during the request. After we have executed our business-critical validations and provisioning, we set the queued order from our checkout flow code.

$provision_handler->setQueuedOrder($this->order);

Then our event subscriber picks up the order and ensures all that order information is synchronized into Salesforce.

  /**
   * Finalizes provisioning for the queued order once the response is sent.
   */
  public function onKernelTerminate(PostResponseEvent $event) {
    $order = $this->handler->getQueuedOrder();
    if ($order) {
      $this->handler->finalizeProvisioning($order);
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return [
      KernelEvents::TERMINATE => ['onKernelTerminate'],
    ];
  }
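
For completeness, a subscriber like this only runs if it is registered as a tagged service in the module's services.yml. The module and class names below are placeholders, not the client project's actual code.

services:
  mymodule.provision_subscriber:
    class: Drupal\mymodule\EventSubscriber\ProvisionSubscriber
    arguments: ['@mymodule.provision_handler']
    tags:
      - { name: event_subscriber }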

Other use cases

Drupal Commerce has many uses for this event. When an order completes checkout, many different things need to happen: send emails, send the order to an ERP, update stock, change promotion usage stats, etc. Or, what if you are generating order reports?

In Commerce Reports 8.x–1.x I decided to use the Kernel::TERMINATE event to generate order reports. This offloads an expensive procedure to, essentially, process itself in the background after the user has seen that their order was paid and completed.

The module listens on two events: the order going through the place transition, and then Kernel::TERMINATE.

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events = [
      'commerce_order.place.pre_transition' => 'flagOrder',
      KernelEvents::TERMINATE => 'generateReports',
    ];
    return $events;
  }

When the place transition runs, the flagOrder method is called to identify the order for later processing. Note: the current method isn’t great, hence the @todo.

  /**
   * Flags the order to have a report generated.
   *
   * @todo come up with better flagging.
   *
   * @param \Drupal\state_machine\Event\WorkflowTransitionEvent $event
   *   The workflow transition event.
   */
  public function flagOrder(WorkflowTransitionEvent $event) {
    $order = $event->getEntity();
    $existing = $this->state->get('commerce_order_reports', []);
    $existing[] = $order->id();
    $this->state->set('commerce_order_reports', $existing);
  }

This adds the order to Drupal's state, along with any other queued orders, for later processing. When the time comes, we load all the queued orders and run the reporting plugins against them.

  /**
   * Generates order reports once output flushed.
   *
   * This creates the base order report populated with the bundle plugin ID,
   * order ID, and created timestamp from when the order was placed. Each
   * plugin then sets its values.
   *
   * @param \Symfony\Component\HttpKernel\Event\PostResponseEvent $event
   *   The post response event.
   *
   * @throws \Drupal\Core\Entity\EntityStorageException
   */
  public function generateReports(PostResponseEvent $event) {
    $order_ids = $this->state->get('commerce_order_reports', []);
    $orders = $this->orderStorage->loadMultiple($order_ids);
    $plugin_types = $this->reportTypeManager->getDefinitions();
    /** @var \Drupal\commerce_order\Entity\OrderInterface $order */
    foreach ($orders as $order) {
      foreach ($plugin_types as $plugin_type) {
        /** @var \Drupal\commerce_reports\Plugin\Commerce\ReportType\ReportTypeInterface $instance */
        $instance = $this->reportTypeManager->createInstance($plugin_type['id'], []);
        $order_report = $this->orderReportStorage->create([
          'type' => $plugin_type['id'],
          'order_id' => $order->id(),
          'created' => $order->getPlacedTime(),
        ]);
        $instance->generateReport($order_report, $order);
        // @todo Fire an event allowing modification of report entity.
        // @todo Above may not be needed with storage events.
        $order_report->save();
      }
    }

    // @todo this could lose data, possibly as its global state.
    $this->state->set('commerce_order_reports', []);
  }

Making this easier

I have been wanting to expose this pattern within Drupal Commerce. My vision is that Drupal Commerce will flag orders on the place transition and provide a subscriber on Kernel::TERMINATE. Then all custom and contributed code can subscribe to our own event, which fires during Kernel::TERMINATE and provides the order to be processed.
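
To sketch what that could look like for a module developer (the event name and classes here are hypothetical, not an existing Drupal Commerce API), a custom module would only need a small subscriber:

<?php

namespace Drupal\mymodule\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Reacts to placed orders after the response has been flushed.
 */
class OrderFollowUpSubscriber implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    // Hypothetical event name, dispatched by Commerce during
    // KernelEvents::TERMINATE for each order flagged on the place transition.
    return ['commerce_order.place.post_response' => 'onOrderPlaced'];
  }

  /**
   * Sends the receipt, syncs the ERP, updates stock, and so on.
   */
  public function onOrderPlaced($event) {
    // The event hands over the placed order, off the critical path.
    $order = $event->getOrder();
    // Perform the expensive follow-up work here.
  }

}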

This would make Drupal Commerce more performant out of the box by moving operations until after the output is flushed to the browser. It would mitigate performance bottlenecks from operations that are non-essential to the customer's client-side experience. For instance:

  • Generating a log entry for order completion
  • Sending the order receipt email
  • Registering promotion and coupon usage

When it comes down to code execution, the processing is merely shifted a few milliseconds later in the runtime. But the end user is guaranteed to see their page's content and not be blocked by these operations.

It is an easy win to boost perceived performance within Drupal.

Feb 20 2018
Feb 20

DrupalCamp London is coming around the corner! If you have the chance to go, I highly recommend it. The organizers put on a top-notch event. Last year I had the privilege of giving my first keynote at the conference. I firmly believe that open source is a creator of opportunity. There is no such thing as free software. In open source, we donate our time to provide software that has no monetary cost. This lowers the barrier to entry by removing a layer of economic limitations. This is why I love working in open source, and Drupal.

I realized I never wrote up my keynote or shared it beyond the recording. A year later it seems like a great reflection point.

Thumbnail

I had been playing with code since middle school. I learned PHP because our Battle.net and Counter-Strike teams wanted websites, somewhere we could post news and stuff. Thanks to XTemplate and (I think?) PHPNuke. I then met some friends who played Ultima Online and began writing C# so we could hack on RunUO and have our own shard. For the record, C# is probably my favorite to work with and is one reason I'm glad to see OOP in Drupal 8.

What is open source?

The term "open source" refers to something people can modify and share because its design is publicly accessible.

This definition is from opensource.com. I like it because it is not limited to code. Having ideas open, shareable, and hackable leads to opportunities that would not be found in a locked down environment.

This is what allowed me to grow my hobby. It wasn't a career possibility back then; it was just something fun I could do. I got to learn how to code by reading open source software and sandboxing around. I wrote a C# desktop client that communicated with our PHP website - security holes and glaringly bad code included.

Thumbnail

Fast forward some years and I still wrote code as a hobby. I built a CMS for a World of Warcraft guild so we could manage raid times and membership. But I had met my wife and needed to get a real job that didn't involve making sandwiches and provided insurance. So I got a job at the local beer distributor and eventually became a driver, throwing kegs of beer around for thirsty college students.

Sometimes I think things happen for a reason. It was November of 2012 and I slipped going down some stairs with my dolly and two barrels. I hurt my back and had a doctor's note saying I could not work for a few days. At the same time, a local marketing agency had opened a job posting for a web developer. So I took the chance. I knew I had some skill, but I didn't think it was enough. But I went for it.

The job interview was to take a Photoshop mockup and convert it into a WordPress theme. I had worked with WordPress once or twice before, but not much. I had always opted to hack things on my own so I could fiddle around.

Thanks to open source...

I could review and learn.

I was able to review other existing themes and WordPress documentation to accomplish my task. I firmly believe you gauge someone's skill by the ability to solve problems. Not by how well they have memorized a process.

No training or formal education

I have a bachelor's in Software Engineering now. At that time I had an Associate's degree in IT Networking. All of my code writing had been self-taught, learning by example.

I was as knowledgeable as I chose to be.

Working with open source, my only limitation was my willingness to debug and reach the Google nethers for an answer.

Thumbnail Photo by Heymo_V: https://flic.kr/p/G5RzEC

Roughly 5 years ago, now, I started my first Drupal site. With Commerce Kickstart 2. I installed the demo store. I had to rebuild the site because you cannot uninstall the demo. It was a fun first ride. While all of our sites were on WordPress, we had a large eCommerce project. WooCommerce was in its infancy and there were many half-baked WordPress solutions. Through research, I found Drupal via Commerce Kickstart.

Why did I get hooked?

I got hooked, initially, out of the fact that it was free and stable compared to my other options. Magento felt too closed and heavy, asking too much of our hosting.

Drupal had centralized issue queues, an active IRC channel and a free spirit of help. With basic research, I felt I could take on this task by just experiencing the static parts of the Drupal community.

The Community.

There is something special about the people we work and connect with through Drupal. It still feels as special today as it did during those first few stressful weeks of figuring out what the hell I was doing while using Drupal. The code did not give me all the answers. Reading hook documentation only takes you so far when you're new.

  • StackExchange question and answers came from people.
  • Documentation on Drupal.org and blogs came from people.
  • IRC (now Slack, too) is full of people willing to donate some time and answer questions.

This community had sold me on Drupal, and it was from just the tip of the iceberg experience I had. 

Thumbnail

What makes the community?

At its basis, I think a community is made of people who share ideas. Is that not how open source projects start? Someone shares an idea or code. Someone else thinks it is a great idea or has a similar idea. These individuals work together to create more ideas and code. Now there is a project attracting more individuals, and it becomes a community, regardless of whether anyone has ever met.

However, a community does not get far that way. What makes Drupal special is the friendship that seems to span all these individuals in different timezones, nations, and cultures. We make these friendships and bonds at user group meetings and conferences (Drupal Camps for the win!)

Drupal & the community brought new opportunities

So, I got hooked on Drupal because of Drupal Commerce. We shifted away from using WordPress to using Drupal. Drupal's features allowed us to handle custom content models customers wanted, with less code to maintain. Community support made it easier to solve hard problems. Working with Drupal you feel like you have a support team behind you. It may not be immediate, but there are community members who will go above and beyond to help you solve problems (if you're polite!)

Drupal helped me land my first freelance gig!

In the past, I had done some freelance here and there. Nothing big, unless you count the two lost Bitcoin from when they were worth $5. Oops.

I had started contributing patches to some of the modules we were using. Specifically the USPS and UPS modules, just to help improve them. We were using them at work and I wanted to give something back in exchange. My activity in the queue turned into freelance work from the module maintainer, helping me pay down some bills.

DrupalCamp Atlanta 2013!

Thumbnail

Andy Giles, said maintainer of the USPS and UPS modules, offered to fly me down to attend DrupalCamp Atlanta. I accepted the offer, with my mind yet again blown that this opportunity was happening. It's the career-defining moment of my life. I was shell-shocked by the openness of the community and the discussions being made. This was before Weather.com had officially launched on Drupal, and people were discussing those challenges. I couldn't believe it.

Thanks to Mike Anello, Ryan Szrama, and many others I spoke to at the conference

  • I offered to co-maintain Commerce Reports and opened the 7.x-4.x branch to overhaul it
  • I created a local Drupal user group
  • I chipped away at my imposter syndrome
  • I realized I could make a career out of Drupal
Thumbnail

Networking and meeting similar individuals

Drupal Camps and local user group meetings introduced me to people from all over the world. This networking allows us to continue to share ideas, work on our projects, and introduce and nurture new members. I love the fact that the handful of times I needed to go to New York for a client that I could attend the NYC Drupal meet up.

Provides a way to show skillset and interact with others

I had horrible imposter syndrome. It is why I never pursued a career until I realized I was going to break my body down if I continued to throw barrels of beer around. The community allowed me to beat back that imposter syndrome and realize we're all normal folk.

Creates demand for skills and people with those skills

If Dries had never released Drupal, hundreds of thousands of us would not have jobs. If those other individuals had not gathered around Drupal, those same hundreds of thousands of us would not have jobs. Even consider Drupal Commerce. If Ryan had not made Ubercart, ventured to Commerce Guys, and built Drupal Commerce... I would not be where I am today. Everything I have today rests on a foundation laid by the Drupal community and the open source software we build.

Thumbnail

As a community, we contribute Support, Inspiration, and Jobs. These are my motivators. For example

  • If we build a better Drupal Commerce, we build a product people can sell
  • If we build a better Drupal Commerce, we contribute more code samples and bug fixes to Drupal
Thumbnail

The Driesnote at DrupalCon Dublin really resonated with me. I highly recommend reading his Drupal's collective purpose blog post. As open source users and contributors, we are making an impact on people's lives. We are creating opportunities for people and lowering barriers. We may not be solving world problems, but we are doing something a little bit bigger than ourselves.

Some days I miss my CDL and driving a truck. Shooting the breeze with bar owners. But I also don't miss getting my truck ready at 5:30 AM and stacking barrels in a basement. So, thanks, Drupal and everyone who makes it!

Thumbnail
Feb 17 2018
Feb 17

Five years ago I went to my second Drupal Camp, and the first Drupal Camp that I presented at. It has been four years, but I am finally going back to Florida Drupal Camp. I am pretty excited, as a lot has changed since then.

My first Drupal event was DrupalCamp Atlanta in 2013. Thanks for making it a possibility, Andy Giles! I met some amazing people there and fell in love with the Drupal community. In fact, it's where, after talking to Ryan Szrama for a brief moment, I decided to opt in and offer to co-maintain Commerce Reports. I met some amazing people from the Southeast community. One of them was Mike Herchel.

I cannot remember exactly how it came about, but he said I should propose a session for Florida DrupalCamp. I had no idea what I could talk about, or why it would even make sense for someone from Wisconsin to propose a session for a conference in Florida. But I did, and I am so happy I took that chance.

I decided to propose a session on how I was using Panels as my workplace. I worked at a marketing agency who focused on great content that would be well received on any device. I found Panels to be our solution - Rockin` Responsive Content with Panels. In fact, the sample code I wrote is still available on my GitHub.

My session got accepted. Thanks, Mike. I'm sure you played a big part in making that happen. But, I had to figure out how to get there. The idea still seemed ludicrous to me. I was a novice at my trade, and going to conferences was a new concept. The fact I had just been to a conference in Atlanta blew my mind. But, then things lined up.

Short interjection: I want to give a huge shout out to my wife who supported me on these adventures and continues to.

I believe you need to think big and strive to achieve above and beyond what is currently available to you. Otherwise, you miss chances. And that is exactly what allowed me to make my first visit to Florida Drupal Camp.

My employer did a lot of work for several local chapters of a national non-profit. We just so happened to be invited to a conference the group was hosting -- in Florida. The conference was before Florida Drupal Camp, but they saw value in attending the camp as well, especially since we would already be down there. If I had not taken the risk and decided "why not, what can I lose?" I doubt the dots would have connected for us to attend Florida Drupal Camp, leaving me at home in the Wisconsin post-winter slush.

Just as Drupal Camp Atlanta was an eye-opening experience, so was Florida Drupal Camp. After my session, I got to learn that Derek "Hawkeye Tenderwolf" DeRaps and Kendall Totten were working on Classy Panel Styles to give more power to the content editor who was working in panels. I got to experience my first true community collaboration.

Back in Atlanta, my mind was merely blown by the community's willingness to share and help. Now I got to witness first-hand collaboration and idea sharing. I was being shown that I could make an impact by joining someone else's effort.

People also liked my session! This blew away the layers of imposter syndrome I had sitting on my shoulders. By this time I had been improving Commerce Reports - but that was still in my bubble. I had gotten the ability to open a new Git branch and make fixes which made our client happy. I still had a lingering feeling that I wasn't successful.

I also wanted to get to Florida so I could tell Mike Anello I started a Drupal user group. Back in Atlanta, at dinner after the conference, he told me I should "plant a flag" for Drupal and just start a Drupal user group. Which I did, and ran for almost a year.

So I'm pretty happy and nostalgic as I am on this flight to Orlando (well, now in a hotel room editing and posting.) Florida Drupal Camp 2014 made a major impact in my career. It opened my eyes to new opportunities and career choices available. It broke down my imposter syndrome to show we all are awesome at some things and equally horrible at others. But that's okay; this is how people work.

I want to give credit to those whose impact jumpstarted my career and my deep dive into the Drupal community:

  • Andy Giles
  • Mike Herchel
  • Mike Anello
  • Every single attendee that I have talked to in Atlanta and Florida.

Making an impact in someone's journey is not always done in big statements. It is the little things.

Feb 07 2018
Feb 07
Thumbnail

Drupal 8 has a robust routing system built on top of Symfony components. Robust can be taken in many ways. It's powerful and yet magical, making it confusing at times. There is a lot of great documentation on Drupal.org which covers the routing system, route parameters, and highlights of the underlying functionality. But the magic isn't fully covered.

If you are not familiar with the Drupal 8 routing system, head over to the docs and dive in. I'm also assuming you have written routes using parameters and have experienced the magic of parameter conversions. If you haven't, again head over to the docs and read up on using parameters in routes.

I don't want to dive completely into how routes work with ParamConverter services to do parameter upcasting. I want to cover more of how Drupal knows to pass proper arguments to your controller method. The parameter upcasting topic is covered fairly well in the documentation on parameter upcasting in routes.

I want to discuss how the controller's callback arguments are resolved and put into proper order in our method. For example, how I can take this custom 404-page route I created for ContribKanban.com

contribkanban_pages.not_found:
  path: '/not-found'
  defaults:
    _controller: '\Drupal\contribkanban_pages\Controller\PagesController::on404'
    _title: 'Resource not found'
  requirements:
    _access: 'TRUE'

And have a method which knows the exception that was thrown, the request and route match.

class PagesController extends ControllerBase {
  public function on404(\Exception $exception, Request $request, RouteMatchInterface $match) {
     // How did I get the exception injected?
     // What magic knows Request and RouteMatchInterface are in that order ?!
  }
}

Understanding ControllerResolver::doGetArguments

In Drupal 8 there is the controller_resolver service which maps to the \Drupal\Core\Controller\ControllerResolver class. This extends the controller resolver class provided by Symfony. From what I can understand, this is done to use Drupalisms added to the routing system, along with some other goodies. Our main concern, however, is the Drupal override of the doGetArguments method. This method is where the magic happens.

Determining the parameters for a controller callback

The doGetArguments method is a protected helper method that is invoked by ControllerResolver::getArguments. Before calling doGetArguments the controller definition is passed through PHP's Reflection API to detect its required arguments. In our example above, \ReflectionMethod will be used to find out the parameters for our on404 method. The code would look something like this.

$r = new \ReflectionMethod(
  '\Drupal\contribkanban_pages\Controller\PagesController',
  'on404'
);

This allows us to access the parameters of the method and pass them to doGetArguments.

return $this->doGetArguments($request, $controller, $r->getParameters());

This will leave us with an array loosely representing the following as ReflectionParameter objects.

  • A parameter called exception that expects \Exception
  • A parameter called request that expects Request
  • A parameter called route_match that expects RouteMatchInterface

Now, let's dig into how they get resolved into proper values and in their proper parameter order.

Putting the right things in the right place

Now that the system knows the expected parameters, it can loop through them and attempt to provide values. This is the purpose of the overridden doGetArguments method in Drupal. I am going to walk through the lines of the function, but the method in its entirety can be reviewed on the Drupal API documentation page.

Before looping through the array of ReflectionParameter objects, attributes are extracted from the request object.

$attributes = $request->attributes->all();
$raw_parameters = $request->attributes->has('_raw_variables') ? $request->attributes->get('_raw_variables') : [];

Here's an example output, taken from a 404 page. It is worth noting that the content of the request attributes is not consistent. In this case, the exception value is only available on 4xx and 5xx requests.

Thumbnail
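
For readers without the screenshot, the request attributes on this 404 route look roughly like the following. This is illustrative only; the variables stand in for the actual objects, the exact keys vary per request, and, as noted, exception only shows up on 4xx and 5xx responses.

// Illustrative shape of $attributes on the custom 404 route.
$attributes = [
  'exception' => $not_found_http_exception,
  '_controller' => '\Drupal\contribkanban_pages\Controller\PagesController::on404',
  '_title' => 'Resource not found',
  '_route' => 'contribkanban_pages.not_found',
  '_route_object' => $route_object,
  '_raw_variables' => $parameter_bag,
];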

With these values primed, each parameter is run against a series of checks.

First, the method checks if the parameter name matches an array key in the request attributes or raw variables. If there is a match, the value of the request is taken as the parameter variable.

if (array_key_exists($param->name, $attributes)) {
  $arguments[] = $attributes[$param->name];
}
elseif (array_key_exists($param->name, $raw_parameters)) {
  $arguments[] = $attributes[$param->name];
}

It is worth noting here that route parameters have already been upcast by ParamConverter services at this point. That is why the value is first pulled from the $attributes array. In the $raw_parameters array, it has not been upcast. If we were on /node/{node} the node parameter would be a loaded node in $attributes and still the node ID (1234) in $raw_parameters.
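
As a quick illustration (hypothetical module and route; it assumes a path like /node/{node}/preview-title), type-hinting a parameter whose name matches the placeholder gets you the upcasted entity rather than the raw ID:

<?php

namespace Drupal\mymodule\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\NodeInterface;

class NodeTitleController extends ControllerBase {

  /**
   * The $node argument matches the {node} placeholder, so the upcasted
   * entity from $attributes is injected instead of the raw ID found in
   * $raw_parameters.
   */
  public function title(NodeInterface $node) {
    return ['#markup' => $this->t('Viewing @label', ['@label' => $node->label()])];
  }

}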

The next check is to see if the parameter is the request object itself.

elseif ($param->getClass() && $param->getClass()->isInstance($request)) {
  $arguments[] = $request;
}

The next parameter check allows you to receive the server-side representation of the HTTP request, as a PSR7 ServerRequestInterface object. The object returned is a Zend\Diactoros\ServerRequest instance. This allows you to have an immutable state for inspection and true representation of the request, unmodified by any other scripts.

elseif ($param->getClass() && $param->getClass()->name === ServerRequestInterface::class) {
  $arguments[] = $this->httpMessageFactory->createRequest($request);
}

Next is the check to see if a RouteMatch was requested. The route match makes it easier to work with the current route and parameters passed to it.

elseif ($param->getClass() && ($param->getClass()->name == RouteMatchInterface::class || is_subclass_of($param->getClass()->name, RouteMatchInterface::class))) {
  $arguments[] = RouteMatch::createFromRequest($request);
}

The final check applies the parameter's default value if provided.

elseif ($param->isDefaultValueAvailable()) {
  $arguments[] = $param->getDefaultValue();
}

If the parameter meets none of the conditions, an exception is raised because the system cannot reliably provide a value for the route callback.

else {
  if (is_array($controller)) {
    $repr = sprintf('%s::%s()', get_class($controller[0]), $controller[1]);
  }
  elseif (is_object($controller)) {
    $repr = get_class($controller);
  }
  else {
    $repr = $controller;
  }

  throw new \RuntimeException(sprintf('Controller "%s" requires that you provide a value for the "$%s" argument (because there is no default value or because there is a non optional argument after this one).', $repr, $param->name));
}

If you reach this point, you will see the cursed white screen of death and a message like the following:

The website encountered an unexpected error. Please try again later.</br></br><em class="placeholder">RuntimeException</em>: Controller &quot;Drupal\contribkanban_pages\Controller\PagesController::on404()&quot; requires that you provide a value for the &quot;$server&quot; argument (because there is no default value or because there is a non optional argument after this one).

Debugging \RuntimeExceptions from parameters

Now, hopefully, parameter resolving for controller callbacks has been cleared up. Whenever you hit bugs you can now know to go and debug that exception in \Drupal\Core\Controller\ControllerResolver::doGetArguments.

  • Is there a typo in your parameter in the controller method?
  • If a route parameter, is there a typo in your routing.yml?
  • Did you request a request attribute that is not available (like exception on a valid route)?
  • Were you expecting an upcasted value that was not upcasted?

I was inspired to write the post based on a debugging session in Drupal Slack. The following was their routing.yml definition

commerce_coupon_batch.data_export:
  path: '/promotion/{commerce_promotion}/coupons/export1'
  defaults:
    _controller: '\Drupal\commerce_coupon_batch\Controller\ExportController::exportRedirect'
    _title: 'Export Coupons'
  options:
    _admin_route: TRUE
    parameters:
      commerce_promotion:
        type: 'entity:commerce_promotion'
  requirements:
    _permission: 'administer commerce_promotion'

The route was throwing an exception, even though the route parameter had the proper entity type provided. In Drupal, if the route parameter matches an entity type, the routing system will try to load an entity with that value (identifier or machine name.) Here's an example of the controller callback.

class ExportController extends ControllerBase {
  public function exportRedirect(Promotion $promotion) {
    // Logic.
  }
}

Did you notice the bug, after going through the parameter resolving process? The method expects a parameter named promotion yet our route parameter is called commerce_promotion. There is no matching route attribute, variable, or default value. Therefore the system crashes.
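
A minimal sketch of the fix, mirroring the snippet above: rename the argument so it matches the route placeholder, and the resolver finds the upcasted entity in the request attributes.

class ExportController extends ControllerBase {

  /**
   * The argument is named $commerce_promotion to match {commerce_promotion}.
   */
  public function exportRedirect(Promotion $commerce_promotion) {
    // Logic.
  }

}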

Resources

Here's a summary of documentation links that I mentioned in the article

Jan 31 2018
Jan 31

In client projects, I push for as much testing as possible. In Drupal 8 we finally have PHPUnit, which makes life grand and simpler. We use Kernel tests to run through some basic integration tests using a minimally bootstrapped database. However, we need Behat to cover behavior and functional testing on an existing site for speed and sanity reasons.

For our client project, we host our repository on GitHub and our environments on Platform.sh. Work for sprints and hotfixes is done on its own branch and handled via pull request. With Platform.sh that gives us a unique environment for each pull request, above and beyond our development, stage, and production environments.

Thumbnail

There is one slight problem, however. Platform.sh environment URLs are a bit random, especially for pull request environments. There is a pattern: branch-randomness-project. It used to be easier to guess, but they made a change in Q3 2016 to better support GitFlow (yay!). It changes a bit when you're using pull requests. The URL does not contain a branch name, but the PR number.

Luckily, there is the Platform.sh CLI tool, which provides a way to look up URLs and other goodies. After reading the Platform.sh blog post Backup and Forget, I realized the same idea can be used for running Behat against our dynamic environments effectively.

Thumbnail

Configuring CircleCI

It is not hard to get set up; there are just a few steps required.

Add API token to environment variables

In order to use the Platform.sh CLI tool, you will need an API token added to your build environments. Follow the instructions on the Platform.sh documentation page https://docs.platform.sh/gettingstarted/cli/api-tokens.html#obtaining-a-token to configure a token.

Configure your project's environment variables. You will want to add a variable named PLATFORMSH_CLI_TOKEN that contains your token. The CLI will automatically detect this environment variable and use it when communicating with the Platform.sh API.

Thumbnail

Install Platform CLI and get the URL

We are using CircleCI 2.0 and their circleci/php:7.1-browsers image. For our setup, I created a script called prepare-behat.sh in our .circleci directory. I'll break the script down into different chunks and then provide it in its entirety.

The first step is to download and install the Platform.sh CLI tool, which is simple enough.

curl -sS https://platform.sh/cli/installer | php

Next, we determine what environment to get a URL for. If the build is for a regular branch, then we can use the branch name. However, if it is a pull request we have to extract the pull request number from the CIRCLE_PULL_REQUEST environment variable. CircleCI provides a CIRCLE_PR_NUMBER environment variable, but only when the pull request is from a forked repository. We do not fork the repository; we make pull requests off of branches from the project repository.

if [ -z "${CIRCLE_PULL_REQUEST}" ]; then
  PLATFORMSH_ENVIRONMENT=${CIRCLE_BRANCH}
else
  PR_NUMBER="$(echo ${CIRCLE_PULL_REQUEST} | grep / | cut -d/ -f7-)"
  PLATFORMSH_ENVIRONMENT="pr-${PR_NUMBER}"
fi

Now that we have our environment, we can get the URL from Platform.sh! I used the pipe option so it displays the values instead of attempting to launch the URL, and modified the output so we use the first URL returned. We store it in a variable.

BEHAT_URL=$(~/.platformsh/bin/platform url --pipe -p 3eelsfv6keojw -e ${PLATFORMSH_ENVIRONMENT} | head -n 1 | tail -n 1)

Behat will read the BEHAT_PARAMS environment variable to override configuration. We'll pipe our environment URL into the Behat configuration.

export BEHAT_PARAMS="{\"extensions\" : {\"Behat\\\MinkExtension\" : {\"base_url\" : \"${BEHAT_URL}\"}}}"

All together it looks something like this

#!/usr/bin/env bash
curl -sS https://platform.sh/cli/installer | php

if [ -z "${CIRCLE_PULL_REQUEST}" ]; then
  PLATFORMSH_ENVIRONMENT=${CIRCLE_BRANCH}
else
  PR_NUMBER="$(echo ${CIRCLE_PULL_REQUEST} | grep / | cut -d/ -f7-)"
  PLATFORMSH_ENVIRONMENT="pr-${PR_NUMBER}"
fi

BEHAT_URL=$(~/.platformsh/bin/platform url --pipe -p projectabc123 -e ${PLATFORMSH_ENVIRONMENT} | head -n 1 | tail -n 1)
export BEHAT_PARAMS="{\"extensions\" : {\"Behat\\\MinkExtension\" : {\"base_url\" : \"${BEHAT_URL}\"}}}"

Make sure you commit the script as executable.

Start PhantomJS and run tests!

All that is left now is to add steps to boot up PhantomJS (or other headless browsers), prime the environment variables, and run Behat!

      - run:
          name: Start PhantomJS
          command: phantomjs --webdriver=4444
          background: true
      - run:
          name: Get environment URL and run tests
          command: |
            . .circleci/prepare-behat.sh
            ./bin/behat -c behat.defaults.yml --debug
            ./bin/behat -c behat.defaults.yml

I keep a debug call in so that I can see the output, such as the URL chosen. Just in case things get squirrely.

Testing Goodness

We now have tests running. With some caveats.

Thumbnail

The environment will be deployed regardless of whether testing fails, which kind of defeats the concept of CI/CD.

The workaround would be to kill the GitHub integration and use the environment:push command in a workflow. In this example we'd set up the following flow:

  • Run Unit and Kernel tests via PHPUnit
  • Push to Platform.sh
  • Run Behat tests

There's more to it, like ensuring the environment is active or not. But the command would look something like

platform environment:push -p projectabc123 -e ${PLATFORMSH_ENVIRONMENT}

Sometimes the Behat tests can run before the environment is deployed

The first option is to implement a full-on CI/CD flow with the above. Or, in our case, define a flow which runs the PHPUnit stuff first. That happens to take longer than a deploy, so we're pretty much set. Unless the next issue happens.

Sometimes GitHub sends events before the merge commit is updated on a PR, causing the environment to miss a build. But this only seems to happen after a PR receives a lot of commits.

Then we're kind of stuck. We have to make a comment on the pull request to trigger a new event and then a build. This causes Platform.sh to kick over but not the tests.

Final notes

There's more which can be done. I run the preparation script in the same step as our tests to ensure variables are available. I really could export them to $BASH_ENV so they are sourced in later steps. I'm sure there are workarounds to my caveats. But we were able to integrate a continuous integration flow within a few hours and start shipping more reliable code.
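
As a side note on that $BASH_ENV idea, it would just be a couple of extra lines at the end of the preparation script. This is an untested sketch; quoting the JSON in BEHAT_PARAMS needs care.

# Persist the computed values so later CircleCI steps can source them.
echo "export PLATFORMSH_ENVIRONMENT=${PLATFORMSH_ENVIRONMENT}" >> $BASH_ENV
echo "export BEHAT_PARAMS='${BEHAT_PARAMS}'" >> $BASH_ENV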

Jan 20 2018
Jan 20

This year I joined a coffee exchange for some members of the Drupal community. I had known there was one floating around, but finally got signed up. Over the past two years, I have gotten more and more into coffee - being a coffee snob about roasts and learning brewing techniques. Last week we were paired up. And sent out some roasts.

I sent my match some coffee from one of my favorite Wisconsin roasters, Just Coffee Cooperative. I shipped out the WTF Roast, a single origin Honduras, which is named after the podcast by Marc Maron. And it turns out the recipient is a huge fan of the podcast! Coincidence score!

I received the Sumatra roast from Three Fins out of Cape Cod. Not only did I receive some rad beans, there was a note from the owner.

Grew up in Kenosha, went to Tremper, lived near there. Grab a Spot burger for me.

Talk about a small world. Turns out the owner of the roaster is actually from my town, albeit the Southside.

Thumbnail

In Kenosha, we have two classic drive-in diners. The Spot on the Southside and Big Star on the Northside. For the record, I grew up on the Northside, which is the better half of the city. I'm definitely partial to Big Star, but a good burger is a good burger. I also realized that I haven't been to The Spot in a few years, maybe only once since their second location on my side of town closed years ago.

Thanks for some delicious coffee and a reminder to get a good burger, Ron! I had a burger for you, and it is still damn good.

Thumbnail
Jan 17 2018
Jan 17

It seems like RSS is not quite the buzz it once was, years ago. There are reasons for that, but I partly believe it is because more services mask direct RSS feed subscriptions behind larger aggregate tools. This change also makes it trickier to get analytics about where that traffic is coming from, and from which feed. When I migrated my site to Drupal 8, I decided to take an adventure in adding UTM parameters to my RSS feeds.

This was not nearly as easy as I had thought it would be. My RSS feeds are generated using Views. My Views configuration is straightforward, pretty much the out of the box setup. It uses the Feed display plugin and the Content row display. The setup I assume just about everyone has.

Blog RSS feed configuration


In order to start attributing my RSS links and understanding my referral traffic, I needed to adjust the links in my RSS feed for the content. So my first step was to review the node_rss plugin. The link is generated from the entity for its canonical version and with an absolute path, which is to be expected, but there is no way to alter this output (unless you possibly want to alter every time a URL is generated.)

// \Drupal\node\Plugin\views\row\Rss::render()

    $node->link = $node->url('canonical', ['absolute' => TRUE]);

// ...

    $item->title = $node->label();
    $item->link = $node->link;
    // Provide a reference so that the render call in
    // template_preprocess_views_view_row_rss() can still access it.
    $item->elements = &$node->rss_elements;
    $item->nid = $node->id();

// If only I could alter $item!

    $build = [
      '#theme' => $this->themeFunctions(),
      '#view' => $this->view,
      '#options' => $this->options,
      '#row' => $item,
    ];

    return $build;


My next thought was to just extend the node_rss plugin and add my link generation logic. But that's messy, too. I would have to override the entire method, not some "get the link" helper, meaning I'd have to maintain more code on my personal site.

So I kept digging. My next step was to visit template_preprocess_views_view_row_rss and see if I can do a good ole preprocess, hack it in, and call it a dirty day. And, guess what? I could. I did. And I feel a little dirty about it, but it gets the job done.

function bootstrap_glamanate_preprocess_views_view_row_rss(&$variables) {
  /** @var \Drupal\views\ViewExecutable $view */
  $view = $variables['view'];
  if ($view->id() == 'taxonomy_term') {
    $term = \Drupal\taxonomy\Entity\Term::load($view->args[0]);
    $label = $term->label();
    if ($label == 'drupal') {
      $source = 'Drupal Planet';
    }
    else {
      $source = 'Term Feed';
    }
    $variables['link'] .= '?utm_source=' . $source . '&utm_medium=feed&utm_campaign=' . $term->label();
  }
}

My taxonomy term feed is what funnels into Drupal Planet, so only posts tagged "Drupal" show up. So in my preprocess I check the view and decide whether to run or not. I then attribute the source, medium, and campaign.

Ideally, I should make a UTM friendly row display which extends node_rss and allows Views substitutions to populate the UTM parameters, if available when generating the URL object. I might do that later. But hopefully, this helps anyone looking to do something similar.
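
For the curious, here is a rough sketch of what that row plugin could look like (hypothetical plugin ID and module name, with hard-coded UTM values where Views substitutions would eventually go). Because the parent render() stores the item object in '#row' and objects are shared by handle, appending to its link property is enough.

<?php

namespace Drupal\mymodule\Plugin\views\row;

use Drupal\node\Plugin\views\row\Rss;

/**
 * Row plugin that appends UTM parameters to each RSS item link.
 *
 * @ViewsRow(
 *   id = "node_rss_utm",
 *   title = @Translation("Content with UTM parameters"),
 *   help = @Translation("Display the content with standard node view, appending UTM parameters."),
 *   theme = "views_view_row_rss",
 *   register_theme = FALSE,
 *   base = {"node_field_data"},
 *   display_types = {"feed"}
 * )
 */
class RssUtm extends Rss {

  /**
   * {@inheritdoc}
   */
  public function render($row) {
    $build = parent::render($row);
    // The parent puts the item object into '#row'; appending to its link
    // property updates the rendered item as well.
    $item = $build['#row'];
    $item->link .= '?' . http_build_query([
      'utm_source' => 'Term Feed',
      'utm_medium' => 'feed',
      'utm_campaign' => $this->view->storage->id(),
    ]);
    return $build;
  }

}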

Jan 09 2018
Jan 09

On New Year's Day I sat down with ReactJS and decided to see what all the commotion was about. I primarily work on the backend and have dabbled lightly with AngularJS. Generally, my JavaScript work just falls to basic DOM manipulation using vanilla JS or jQuery. Nothing fancy, no components. ReactJS is fun.

Why is ReactJS so much fun, and popular? It is hackable, easy to manipulate, and it is solving hard problems in a new way. New solutions to hard problems bring a surge of euphoria. In the Drupal world, this is allowing us to harness Drupal as a data and content management system and not the presentation system. Not everyone uses Drupal in this way, but I have always envisioned Drupal as a great backend for any kind of application beyond normal content. The main roadblock I ended up encountering is fighting the render and form API in complex scenarios.

I have some experience in AngularJS and have fiddled with Angular (2 and then 4). I felt more headache and disappointment when working with Angular (and AngularJS). When learning Angular, part of the friction may have been TypeScript instead of just Babel transpiling ES6. One thing I appreciated when working with ReactJS is only handling the rendering of components, without also needing the services, factories, and other patterns forced by Angular.

There is a thing for everything

Since ReactJS is fun and popular, there are also tons of libraries. I was able to find solutions to many problems by just searching the NPM package repository. I do not always believe in dependency stuffing and do not encourage adding a dependency just for a small feature. But, libraries provide a great reference point for learning or quick prototyping.

I learned some tips and tricks by reviewing existing libraries. I would pick something that kind of did what I needed it to, then reviewed how to implement it on my own without the bells and whistles I did not need, or fix what I did need.

The component state

When working with your component there is the state property, which is just a JavaScript object of properties. You can use this for logic checks and provide values in your rendered component. The following example would render the text in a heading element.

import React, { Component } from 'react'

class App extends Component {
  state = {
    header: 'Check out this super duper rad text!',
  }
  render () {
    return (
      <h1>{this.state.header}</h1>
    )
  }
}

The state property can be modified directly, but it should be treated as an immutable object. Whenever you need to change a state value, use the this.setState({}) function. I did not know this at first and had some weird behaviors. If you set the state directly, it actually does get saved. But it is not reflected until some other action forces the component to re-render itself.

To change the heading text, I would run something like the following in a function

this.setState({
  header: 'Mah new awesome header!',
});

State management is not recursive

Once I started to understand state and properly used this.setState, I hit a whole new roadblock. The method does a merge when setting values, but not a recursive merge. When trying to set a nested value it will cause all other nested values to be removed. Take the following state definition.

  state = {
    data: [],
    filterParams: {
      _format: 'json',
      name: ''
    },
  }

We have a data property and then filter parameters we would pass to an API endpoint. For example's sake, somewhere on the form an input updates the name to filter by. If we ran the following setState operation whenever the input changed, we'd end up losing some values.

this.setState({
  filterParams: {
    name: 'Widget'
  }
});

After running this, our state object would look like the following.

  state = {
    data: [],
    filterParams: {
      name: ''
    },
  }

We lost our _format key! Luckily, JavaScript has the spread operator. The spread operator, just three dots (...), allows you to expand an expression. In the following example we use ...this.state.filterParams to pass along the existing values, and then any specific overrides. This allows us to maintain previous values and provide new ones.


this.setState({
  filterParams: {
    ...this.state.filterParams,
    name: 'Widget'
  }
})

Updating the component state is asynchronous.

Like most things JavaScript, setting the component state is not a synchronous operation. I discovered this after playing around with the paging in the React DB Log prototype. The component state kept track of the current page. When clicking a paging button the page was incremented or decremented and then results fetched from the API endpoint.

The page was updated, but it was not fetching the proper page. That is because it was not using a callback.

// Before
this.setState({
  page: this.state.page - 1
});
this.fetchLogEntries(this.state.page);

// After
this.setState({
  page: this.state.page - 1
}, () => {
  this.fetchLogEntries(this.state.page);
});

Chrome debugger tool is super sauce

The React team has created a Chrome browser extension that adds a React tab to the Chrome Developer Tools. This allows you to inspect the components and their state, which makes debugging a lot easier. Using that tool is how I learned the caveats in the above scenarios.

Get it here: https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi

There is a FireFox extension and standalone app that can be found at https://reactjs.org/community/debugging-tools.html

Jan 05 2018
Jan 05

On a client project, we wanted to prevent search engines from accessing pages on the site. The pages needed to be directly linkable from various sources and publicly accessible, but not discoverable through search. So we did the reasonable thing. We modified our robots.txt and added our meta tags. Unfortunately, this did not work as we expected, and pages still showed up in Google searches.

We received reports that links were showing up in Google search results. And they were. With the page title and a description of "No information is available for this page."

Google Search WTF

But... why?

After digging through Google's support documentation, I came across the topic Block search indexing with 'noindex'  at https://support.google.com/webmasters/answer/93710?hl=en

Important! For the noindex meta tag to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex tag, and the page can still appear in search results, for example if other pages link to it.

Well, that is conflicting. If the page is blocked by robots.txt, the bot does not crawl it and cannot read its meta tags. That makes sense. However, if someone links to your page without attributing nofollow in their page's anchor tag, it will still get indexed. So, the meta tags are not respected when your content is directly linked. That's cool.
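
In other words, a combination like this (simplified illustration) can never work: the Disallow rule guarantees the crawler never fetches the page, so the noindex tag goes unread, and a single inbound link elsewhere is enough to get the URL indexed anyway.

# robots.txt: blocks crawling of the section...
User-agent: *
Disallow: /private-reports/

<!-- ...so this tag on pages under /private-reports/ is never seen. -->
<meta name="robots" content="noindex">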

X-Robots-Tag to the rescue?

Then, at the bottom of the article, it states that Google supports an X-Robots-Tag HTTP header. There is no disclaimer here about meta information, so I am assuming this might be our fix-all. To add the header I created a response event subscriber. The event subscriber adds our header to the response if it is an HtmlResponse.

Services file

services:
  mymodule.response_event_subscriber:
    class: \Drupal\mymodule\EventSubscriber\ResponseEventSubscriber
    tags:
      - { name: 'event_subscriber' }

EventSubscriber class

<?php

namespace Drupal\mymodule\EventSubscriber;

use Drupal\Core\Render\HtmlResponse;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Class ResponseEventSubscriber.
 */
class ResponseEventSubscriber implements EventSubscriberInterface {

  /**
   * Explicitly tell Google to bugger off.
   *
   * @param \Symfony\Component\HttpKernel\Event\FilterResponseEvent $event
   */
  public function xRobotsTag(FilterResponseEvent $event) {
    $response = $event->getResponse();
    if ($response instanceof HtmlResponse) {
      $response->headers->set('X-Robots-Tag', 'noindex');
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events[KernelEvents::RESPONSE][] = ['xRobotsTag', 0];
    return $events;
  }

}

Results?

It has only been active for a few days, so I do not know the exact results. I am guessing the client will need to perform specific actions within Google and Bing webmaster consoles. 

Jan 02 2018
Jan 02

I spent the last 8 days of 2017 not touching my computer. Except for one night when, a few old fashioneds in, I decided to upgrade my MacBook to High Sierra "for the hell of it." Then New Years came, and we are riding into 2018. I'm going to also try to focus more on blogging. This was my goal for the end of 2017, but I did not stick to it. However, a tweet sent out by Dries echoed that goal and is something I plan to work more on.

One of the things currently on my mind as we enter 2018: I feel like using social media less, and blogging more. Be part of the change that you wish to see in the world.

— Dries Buytaert (@Dries) January 1, 2018

Learning something new: ReactJS

On New Year's Day, I followed my annual tradition: learn and try something new. I don't remember what I did in 2016, but in 2015 I wanted to play around with modern JavaScript and picked up AngularJS. The outcome was the creation of ContribKanban. This time around I spent time learning ReactJS. I had been loosely following the JavaScript Framework Initiative for Drupal core and knew the JavaScript maintainers had decided to move forward with ReactJS for initial prototyping. The initial spec: using a ReactJS UI for the database log.

ReactJS

The proof of concept can be found on GitHub at https://github.com/jsdrupal/drupal-react-db-log. I used this as my starting point to learn ReactJS. On a local development site, I created my own Views REST export and built a UI to consume its output. I wanted to take sample code and write my own version, so I understood it. I learned a lot, which I'll cover in a later blog post. I also was able to provide two pull requests

I have been working with Angular here and there since I was familiar with AngularJS. After one day I am much happier with ReactJS. But that's because ReactJS is more hack and play, where Angular has a forced set of opinions. Also, I was introduced to Angular through TypeScript which adds a barrier of its own. ReactJS with ES6 and Babel is still easier to grok, in my opinion.

A year of travel

2017 was heavy on the travel side, and probably the most I have ever traveled. In March I had the honor to keynote my first conference and go to London for the first time. Then MidCamp in Chicago and DrupalCon Baltimore. Then it was back over to Europe for DrupalCon Vienna. I was more excited at the fact our seven-person team at Commerce Guys was able to be together in the same room for the first time. Working in a remote team across five time zones is challenging, and the in-person time is invaluable. 2017 closed off with a trip to New York with Ryan and Bojan to speak at the Drupal NYC Meetup.

Bike in Belgrade

In between all of that was a non-Drupal conference that we attended: IRCE. IRCE is the Internet Retailer Conference and Expo. It was a great experience to see what services Drupal Commerce is competing against and how it stacks up. It was a bit difficult, though, explaining to people the lack of licensing fees they are used to.

Focus on the offline

With the push to get Drupal Commerce from alpha at the turn of 2017 to a stable release by the end of September, there was not much offline time. I did not read as many books as I normally would. I listened to plenty of audiobooks and podcasts, but that was due to travel. I do fairly well at preventing work and open source work from creeping in on my family time, but not at keeping it from eating into my legitimate personal time. A goal of mine for 2018 is to read more books, not just listen to them, and find a hobby that isn't writing code... or drinking coffee or drinking whisky.

Whisky and coffee
Dec 02 2017
Dec 02

I recently hit a curveball with a custom migration to import location data into Drupal. I am helping a family friend merge their bespoke application into Drupal. Part of the application involves managing Locations. Locations have a parent location, so I used taxonomy terms to harness the relationship capabilities. Importing levels one, two, three, and four worked great! Then, all of a sudden, when I began to import levels five and six I ran into an error: Specified key was too long; max key length is 3072 bytes

Migration error output

False positives

My first reaction was a subtle table flip and bang on the table. I had just spent the past thirty minutes writing a Kernel test which executed my migrations and verified the imported data. Everything was green. Term hierarchy was preserved. Run it live: table flip breakage. But, why?

When I run tests I do so with PHPUnit directly to just execute my one test class or a single module. Here is the snippet from my project's phpunit.xml

    <!-- Example SIMPLETEST_DB value: mysql://username:password@localhost/databasename#table_prefix -->
    <env name="SIMPLETEST_DB" value="sqlite://localhost/sites/default/files/.ht.sqlite"/>

Turns out when running SQLite there are no constraints on the index key length, or there's some other deeper explanation I didn't research. The actual site is running on MySQL/MariaDB. As soon as I ran my Console command to run the migrations the environment change revealed the failure.
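
The takeaway: if the real site runs MySQL/MariaDB, point SIMPLETEST_DB at the same engine so schema constraints like this one surface in the tests. The credentials below are placeholders.

    <!-- Run Kernel tests against MySQL/MariaDB so index length limits surface. -->
    <env name="SIMPLETEST_DB" value="mysql://username:password@localhost/drupal#test_"/>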

Finding a possible fix

Following the wrong path to fixing a bug, I went on a frantic Google search trying to find an answer that asserted my expected fix: that there is some bogus limitation on the byte size I can somehow bypass. So I reviewed the fact that InnoDB has the limit of 3072 bytes, and that MyISAM has a different one (later, I found it is worse at 1000 bytes, which is one reason why we're all using InnoDB).

I found an issue on Drupal.org reporting the index key being too long. The bug occurs when too many keys are listed in the migration's source plugin. Keys identify unique rows in the migration source. They are also used as the index in the created migrate_map_MIGRATION_ID table. So if you require too many keys to identify unique rows, you will experience this error. In most cases, you can probably break up your CSV and normalize it to make it easier to parse.

Indexes were added in Drupal 8.3 to improve performance. So, I had to find a fix. First I tried swapping back to MyISAM, realizing that was a fool's errand, but I was desperate.

I thought about trying to normalize the CSV data and make different, smaller, files. But there was a problem: some locations share the same name but have different parent hierarchies. A location in Illinois could share the same name as a location in a Canadian territory. I needed to preserve the Country -> Administrative Area -> Locality values in a single CSV.

A custom idMap plugin to the rescue

If you have worked with the Migrate module in Drupal you are familiar with process plugins and possibly familiar with source plugins. The former helps you transform data and the latter brings data into the migration. Migrate also has an id_map plugin type. There is one single plugin provided by Drupal core: sql. I never knew or thought about this plugin because it is never defined explicitly. In fact, we never have to:

  /**
   * {@inheritdoc}
   */
  public function getIdMap() {
    if (!isset($this->idMapPlugin)) {
      $configuration = $this->idMap;
      $plugin = isset($configuration['plugin']) ? $configuration['plugin'] : 'sql';
      $this->idMapPlugin = $this->idMapPluginManager->createInstance($plugin, $configuration, $this);
    }
    return $this->idMapPlugin;
  }

If a migration does not provide an idMap definition it defaults to the core default sql mapping.

Hint: if you want to migrate into a non-SQL database you're going to need a custom id_map plugin!

Once I found this plugin I was able to find out where it created its table and the index information. Bingo! \Drupal\migrate\Plugin\migrate\id_map\Sql::ensureTables

  /**
   * Create the map and message tables if they don't already exist.
   */
  protected function ensureTables() {

In this method, it creates the schema that the table will use. There is some magic in here. By default, all keys are treated as a varchar field with a length of 64. But, then, it matches up those keys with known destination field values. So if you have a source value going to a plain text field it will change to a length of 255.

      // Generate appropriate schema info for the map and message tables,
      // and map from the source field names to the map/msg field names.
      $count = 1;
      $source_id_schema = [];
      $indexes = [];
      foreach ($this->migration->getSourcePlugin()->getIds() as $id_definition) {
        $mapkey = 'sourceid' . $count++;
        $indexes['source'][] = $mapkey;
        $source_id_schema[$mapkey] = $this->getFieldSchema($id_definition);
        $source_id_schema[$mapkey]['not null'] = TRUE;
      }

      $source_ids_hash[static::SOURCE_IDS_HASH] = [
        'type' => 'varchar',
        'length' => '64',
        'not null' => TRUE,
        'description' => 'Hash of source ids. Used as primary key',
      ];
      $fields = $source_ids_hash + $source_id_schema;

      // Add destination identifiers to map table.
      // @todo How do we discover the destination schema?
      $count = 1;
      foreach ($this->migration->getDestinationPlugin()->getIds() as $id_definition) {
        // Allow dest identifier fields to be NULL (for IGNORED/FAILED cases).
        $mapkey = 'destid' . $count++;
        $fields[$mapkey] = $this->getFieldSchema($id_definition);
        $fields[$mapkey]['not null'] = FALSE;
      }

This was my issue. The destination field is the term name field, which has a length of 255. Great. But there is no way to interject here and change that value. All of the field schemas come from field and typed data information.

The solution? Make my own plugin. The following is my sql_large_key ID mapping class.

<?php

namespace Drupal\mahmodule\Plugin\migrate\id_map;

use Drupal\migrate\Plugin\migrate\id_map\Sql;

/**
 * Defines the sql based ID map implementation.
 *
 * It creates one map and one message table per migration entity to store the
 * relevant information.
 *
 * @PluginID("sql_large_key")
 */
class LargeKeySql extends Sql {

  /**
   * {@inheritdoc}
   */
  protected function getFieldSchema(array $id_definition) {
    $schema = parent::getFieldSchema($id_definition);
    // Shrink varchar source keys so the combined index stays under the
    // 3072 byte InnoDB limit.
    if ($schema['type'] == 'varchar') {
      $schema['length'] = 100;
    }
    return $schema;
  }

}

The following is an example migration using my custom idMap definition.

id: company_location6
status: true
migration_tags:
  - company
idMap:
  plugin: sql_large_key
source:
  plugin: csv_by_key
  path: data/company.csv
  header_row_count: 1
  # Need each unique key to build proper hierarchy
  keys:
    - Location1
    - Location2
    - Location3
    - Location4
    - Location5
    - Location6
process:
  name:
    plugin: skip_on_empty
    method: row
    source: Location6
  vid:
    plugin: default_value
    default_value: locations
  # Find parent using key from previous level migration.
  parent_id:
    -
      plugin: migration
      migration:
        - company_location5
      source_ids:
        company_location5:
          - Location1
          - Location2
          - Location3
          - Location4
          - Location5
  parent:
    plugin: default_value
    default_value: 0
    source: '@parent_id'
destination:
  plugin: 'entity:taxonomy_term'
migration_dependencies: {  }

And, voila! 6,000 locations later there is a preserved hierarchy!

Oct 29 2017
Oct 29

The JSON API module is becoming wildly popular in Drupal 8 as an out-of-the-box way to provide an API server. Why? Because it implements the {json:api} specification. It's still a RESTful interface, but the specification helps bring an open standard for how data should be represented and requests should be constructed. The JSON API module exposes collection routes, which allow retrieving multiple resources in a single request. You can also pass filters to restrict the resources returned.

An example would be “give me all blog posts that are published.”

GET /jsonapi/node/article?filter[status][value]=1

But, what about complex queries? What if we want to query for content based on keywords? Collections are regular entity queries, which means they are as strong as any SQL query. If you wanted to search for blogs of a certain title or body text, you could do the following. Let’s say we searched for “Drupal” blogs

GET /jsonapi/node/article?
filter[title-filter][condition][path]=title
filter[title-filter][condition][operator]=CONTAINS
filter[title-filter][condition][value]=Drupal
filter[body-filter][condition][path]=body.value
filter[body-filter][condition][operator]=CONTAINS
filter[body-filter][condition][value]=Drupal

This could work. However, it starts to break down on complex queries and use cases - outside of searching for a blog.

Providing a collection powered by Search API

I have a personal project that I've been slowly working on which requires searching for food. It combines text, geolocation, and ratings to return the best results. The go-to solution for returning this kind of result is an Apache Solr search index. The Search API module makes it a breeze to index Drupal entities along with this information. My issue was retrieving the data over a RESTful interface. I wanted a decoupled Drupal instance so the app could eventually support native applications on iOS or Android. Also, because it sounded fun.

I was able to create a new resource collection which used my Solr index as its data source. There are some gotchas, however. At the time I first wrote this code (a few months ago), I had to manually support query filtering since the properties did not exist as real fields inside of Drupal. In my example, I don't really adhere to the JSON API specification, but it worked and it's not finished.

Here is the routing.yml within my module

forkdin_api.search:
  path: '/api/search'
  defaults:
    _controller: '\Drupal\forkdin_api\Controller\Search::search'
  requirements:
    _entity_type: 'restaurant_menu_item'
    _bundle: 'restaurant_menu_item'
    _access: 'TRUE'
    _method: GET
  options:
    _auth:
      - cookie
      - always
    _is_jsonapi: 1

My Search API index has one data source: the restaurant menu item. JSON API collection routes expect there to be an entity type and bundle type in order to re-use the existing functionality. All JSON API routes are also flagged as _is_jsonapi to get this fanciness.

The following is the route controller. All that is required is that we return a ResourceResponse which contains an EntityCollection object, made from all of the entity resources to be returned. This controller takes the filters passed and runs an index query through Search API. I load the entities from the result and pass them into a collection to be returned. Since I’m not re-using a JSON API controller I made sure to add url.query_args:filter as a cacheable dependency for my response.

<?php

namespace Drupal\forkdin_api\Controller;

use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\jsonapi\Resource\EntityCollection;
use Drupal\jsonapi\Resource\JsonApiDocumentTopLevel;
use Drupal\jsonapi\ResourceResponse;
use Drupal\jsonapi\Routing\Param\Filter;
use Drupal\search_api\ParseMode\ParseModePluginManager;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\Request;

class Search extends ControllerBase {

  /**
   * @var \Drupal\search_api\IndexInterface
   */
  protected $index;

  /**
   * The parse mode manager.
   *
   * @var \Drupal\search_api\ParseMode\ParseModePluginManager|null
   */
  protected $parseModeManager;

  public function __construct(EntityTypeManagerInterface $entity_type_manager, ParseModePluginManager $parse_mode_manager) {
    $this->entityTypeManager = $entity_type_manager;
    $this->index = $entity_type_manager->getStorage('search_api_index')->load('food');
    $this->parseModeManager = $parse_mode_manager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('entity_type.manager'),
      $container->get('plugin.manager.search_api.parse_mode')
    );
  }

  public function search(Request $request) {
    $latitude = NULL;
    $longitude = NULL;
    $fulltext = NULL;

    // @todo move to a Filter object less strict than `jsonapi` one
    //       this breaks due to entity field definitions, etc.
    $filters = $request->query->get('filter');
    if ($filters) {
      $location = $filters['location'];
      if (empty($location)) {
        // @todo look up proper error.
        return new ResourceResponse(['message' => 'No data provided']);
      }

      // @todo: breaking API, because two conditions under same type?
      $latitude = $location['condition']['lat'];
      $longitude = $location['condition']['lon'];

      if (isset($filters['fulltext'])) {
        $fulltext = $filters['fulltext']['condition']['fulltext'];
      }
    }
    $page = 1;

    $query = $this->index->query();
    $parse_mode = $this->parseModeManager->createInstance('terms');
    $query->setParseMode($parse_mode);

    if (!empty($fulltext)) {
      $query->keys([$fulltext]);
    }

    $conditions = $query->createConditionGroup();
    if (!empty($conditions->getConditions())) {
      $query->addConditionGroup($conditions);
    }
    $location_options = (array) $query->getOption('search_api_location', []);
    $location_options[] = [
      'field' => 'latlon',
      'lat' => $latitude,
      'lon' => $longitude,
      'radius' => '8.04672',
    ];
    $query->setOption('search_api_location', $location_options);
    // Pages are 1-based, so offset by ($page - 1) * 20 to avoid skipping the
    // first page of results.
    $query->range(($page - 1) * 20, 20);
    /** @var \Drupal\search_api\Query\ResultSetInterface $result_set */
    $result_set = $query->execute();

    $entities = [];
    // @todo are they already loaded, or can this be moved to get IDs then ::loadMultiple()?
    foreach ($result_set->getResultItems() as $item) {
      $entities[] = $item->getOriginalObject()->getValue();
    }

    $entity_collection = new EntityCollection($entities);
    $response = new ResourceResponse(new JsonApiDocumentTopLevel($entity_collection), 200, []);
    $cacheable_metadata = new CacheableMetadata();
    $cacheable_metadata->setCacheContexts([
      'url.query_args',
      'url.query_args:filter',
    ]);
    $response->addCacheableDependency($cacheable_metadata);
    return $response;
  }

}

The request looks like

GET /api/search?
filter[fulltext][condition][fulltext]=
filter[location][condition][lat]=42.5847425
filter[location][condition][lon]=-87.8211854
include=restaurant_id

With an example result:

[Screenshot: example search results]
Oct 23 2017
Oct 23

Back in 2015, I created ContribKanban.com as a tool to experiment with AngularJS and also have a better tool to identify issues to sprint on at my previous employer before Commerce Guys. At the Global Sprint Weekend in Chicago, I received a lot of feedback from YesCT and also ended up open sourcing the project. That first weekend made me see it was more useful than just being a side project.

I was pretty excited to hear it was used by the Drupal 8 Media Initiative (not sure if they still do) and other projects. But, then it started to become a burden. It is a MEAN stack application and I'm not an expert with MongoDB or using pm2 to manage a Node server. Basically, it is at the point where I don't want to touch it, lest it break and I burn an afternoon on it.

Next Moves

Instead of letting the project go into retirement, I have decided to revamp it. I'm rebuilding it on top of Drupal 8 and a SQLite database with a light front end (jQuery and Drupal.* functions.) 

Preview of what is next

Moving to Drupal 8 allows me to work with something I know best, and also makes data management much easier. Currently, projects are stored in a MongoDB document and then their boards can be overridden via some JSON in the repository. This allowed projects to have boards limited to a specific tag or a project release parent issue. But it involved some level of effort on my end. It also wasn't super easy adding core-specific boards.

Here's an admin screen for creating and editing a board

Creating a board

The end result is something like this https://nextgen.contribkanban.com/board/Migrationsystem

As you can see by that URL, a preview is available at https://nextgen.contribkanban.com/. Boards and sprints can be added like they used to be - by entering a project's machine name or an issue tag. There are some features missing and small bugs to work out. But this should make it much easier to create custom boards with any kind of list needed.

The idea is that there are the board level configurations and then list level overrides.

I'm excited to move it to Drupal 8 because I hope it provides example code for some others. The code can be found at https://github.com/mglaman/contribkanban.com/tree/nextgen. The code is hosted on GitHub and uses CircleCI to deploy to my DigitalOcean droplet.

Want to support the project?

This is a side project and comes second to my work and duties as a maintainer of Drupal Commerce. There is a Gratipay project page and I have a Patreon account.

I'd like to give a shout out to Joris Vercammen who found my Patreon page and has pledged enough to fund my monthly hosting cost, and kind of helped rekindle the project.

Oct 16 2017
Oct 16

Secure sites. HTTPS and SSL. A topic more and more site owners and maintainers are having to work with. For some, this is a great thing and others it is either nerve-wracking or confusing. Luckily, for us all, getting an SSL and implementing full site HTTPS is becoming easier.

Why does an SSL matter?

First. Let's talk about why having an SSL and wrapping your site in HTTPS matters. For instance, there are people who think it is fine to have their e-commerce site behind regular HTTP because their payment gateway is PayPal.

Beyond security, consider the fact that Google announced in 2014 that HTTPS would be used as a ranking signal. Three years ago Google made the push for a more secure web by making this choice. According to the Internet, Bing has stayed away from this sort of decision. But, if you (or your customer) care about SEO, I hope this helps make a case.

Google's search rankings are not your only worry. Chrome and Firefox are starting to alert users that a site is not secure when they fill in sensitive form data: passwords and credit card fields. The Google Security Blog announced the move last year, and Firefox did the same in early 2017.

Isn't HTTPS slow?

Years ago it was thought that SSL was slow due to the handshakes involved. With modern hardware and HTTP/2 (which browsers only support over HTTPS), it can actually be faster. If you do not believe me, go to http://www.httpvshttps.com/.

Getting an SSL is easier, now.

I remember when having to purchase and then install an SSL certificate was a drag. It cost extra money, even if a paltry amount when broken down to monthly costs (~$5 a month), and required time to install. Thanks to the Let's Encrypt service it has become easier to get an SSL certificate. Let's Encrypt is a free and open certificate authority (CA), which means it can provide signed and authorized SSL certificates. You won't get Organization Validation (OV) or Extended Validation (EV) certificates; but, generally, you do not need those.

Let's Encrypt logo

Let's Encrypt is great, but it requires you to run some tools to help automate certificate renewal and installation. Luckily, there are more hosting platforms and CDNs providing free or bundled SSL certificates.

Let's roll through some options. Please comment and share corrections or services that I have missed! The following items are services I use or found to be great on cost and ease of use.

Content Delivery Networks (CDN)

Putting a CDN in front of your website is one of the simplest ways to get your site wrapped in HTTPS without having to change your server or hosting setup. This is your best option if you run your own servers and don't want to mess with certificates directly, or if your current host does not provide free or cheap SSL certificate support. It also improves visitor-facing performance of your website.

CloudFlare is the CDN solution I use for this site in order to provide fully wrapped HTTPS support. CloudFlare has a great free plan that provides DDoS mitigation, CDN, SSL certificate and some other goodies. This is my go-to solution.

Hosting providers

More and more hosting providers are providing free SSL certificates. I've done some basic research, but these are based on services I have used or are familiar with.

Pantheon is a managed hosting service for Drupal and WordPress. Starting at $25 a month you get a managed hosting service, three environments (development, test, production), and a CDN with free SSL. If you want to install a custom SSL certificate, though, you will need to jump up to the Professional offering at $100 a month. Before Pantheon announced their global CDN and free SSL I had never considered them due to the price of the monthly service when you have an SSL. Next to using CloudFlare, it's your best bet for the "hands off" and ease of mind approach.

Platform.sh is my favorite and general go-to for price and value. You can host your PHP, Node.js, Python, and Ruby projects.  Plans start at $50 for a production site, which seems a bit expensive. But that gets you an automatic free SSL and you can still install custom SSL certificates without additional charge. You also get other goodies such as the ability to use Solr, Redis caching and more.

Gandi.net is a hosting provider that was brought to my attention when finding homes for ContribKanban. For $7.50 a month you can get their Simple Hosting with free SSL. You can run your PHP, Node.js, Ruby or Python apps on their hosting powered by a web administrative interface.

Using Let's Encrypt itself

You can, of course, use Let's Encrypt yourself on your own hosting, for example using certbot on your DigitalOcean droplet, or just generate certificates and add them to your existing host.

Oct 08 2017
Oct 08

During DrupalCon Vienna, the second edition of the Drupal 8 Development Cookbook was published! The first edition was published just over a year ago, right after Drupal 8.1 was released. I had written the book for 8.0 with "just in case" notes for what might change in Drupal 8.1. What I was not prepared for: how well the minor release system worked and how rapidly it delivered feature changes. As I saw Drupal 8.4 approach, I felt it was time to create a second edition to highlight those changes.

What's changed?

Some chapters did not change much beyond updated screenshots for user interface changes. There were, however, some larger changes.

For starters, any references to using Drush for downloading modules have been removed in favor of using Composer. In fact, manual downloads should go the same way. Why? In Drush 9 the ability to download modules and themes has been removed.

A new recipe was added to the Extending Drupal chapter to cover event subscribers. Something I missed the first time and noticed many Drupal Commerce implementers asking questions about.

The Plug and Play with Plugins chapter was revised, specifically the Creating a custom plugin type recipe. The Physical module, featured in the first edition, was finally ported but no longer uses plugins. The second edition covers the GeoIP module, which utilizes plugins.

Two chapters received major overhauls and rewrites thanks to efforts accomplished in the 8.1, 8.2, and 8.3 releases. The Entity API chapter has been updated to remove boilerplate items now provided by Drupal core -- making the creation of custom entities much simpler. The Web Services chapter was completely rewritten. Thanks to the API-first initiative the RESTful Web Services module was heavily improved. I also added a recipe on how to work with Contenta CMS, the headless distribution for Drupal 8.

Oh, and a new cover!

Drupal 8 Development Cookbook - second edition

Where can you get it?

You can, of course, get the book from Packt Publishing. The book is available in print, ebook, and their Mapt service: http://packtpub.com/web-development/drupal-8-development-cookbook-second-edition

The book is also available on Amazon https://www.amazon.com/dp/1788290402

Sep 11 2017
Sep 11

My personal site is now officially migrated onto Drupal 8! I had first attempted a migration of my site back when Drupal 8.0 was released but had a few issues. With Drupal 8.3 it was nearly flawless (maybe even 8.2, but I had put the idea on the back burner). I did have some interesting issues to work around.

Missing filter plugins

My migration process was halted and littered with errors due to missing plugins, specifically around my text formats. The culprits were:

  • Media 2.x and media filter
  • GitHub Gist filter
  • OEmbed filter
  • Twitter links and usernames

These plugins were missing from my code base so I had to replicate them. The Twitter module has been ported, but I didn't have a need for it anymore, so I wrote skeleton Filter plugins that had no functionality. However, I relied heavily on the others.
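
A no-op skeleton like that can be as small as the following (a minimal sketch; the module namespace, plugin ID, and label are placeholders, not the exact code I used):

<?php

namespace Drupal\mymodule\Plugin\Filter;

use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

/**
 * Stands in for a filter that no longer exists so old formats keep rendering.
 *
 * @Filter(
 *   id = "filter_twitter_links",
 *   title = @Translation("Twitter links (no-op)"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_TRANSFORM_IRREVERSIBLE
 * )
 */
class NoOpTwitterFilter extends FilterBase {

  /**
   * {@inheritdoc}
   */
  public function process($text, $langcode) {
    // Return the text untouched; the plugin exists only so text formats that
    // reference the old filter do not break.
    return new FilterProcessResult($text);
  }

}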

For the Media filter, I had tried to follow the solutions in Kalamuna's blog post. However, it did not quite work for me (I was not running a custom migration), so I opted for a custom Filter plugin which provides the same functionality as the Drupal 7 filter: converting a shortcode to an image embed.

The Gist Filter module has yet to be ported, so I quickly wrote my own version.
 

The same was true for the OEmbed module. It has a port to Drupal 8 started but was not stable. Luckily I was able to use the alb/oembed library and the OEmbed module's Filter plugin. All I needed to do was change library references. I made sure to document this https://www.drupal.org/node/2884829#comment-12203367.

A full example of my workarounds can be found in https://github.com/mglaman/glamanate.com/tree/master/web/modules/custom/

Executing the migration

Other than that, it was as simple as running the following command provided by Migrate Tools:

drush migrate-upgrade \
  --legacy-db-url=mysql://root:root@mysql.dev/glamanate_d7 \
  --legacy-root=http://glamanate.com

I also found it useful to provide a custom installation profile during the trial and error process. I patched Drupal 8 to allow a profile to be installed from existing config. When you run a migration all configuration will be migrated. This allowed me to run my migrations, tweak config, export, and then re-import when the process was done.

Summary and what is next

Overall it was fairly smooth. I spent more time porting over the Media embed filter. I didn't bother to fix the actual post content but port the functionality verbatim. I'd say I spent roughly a day overall on this migration. More time was spent rewiring a new theme, which is not finished.

My main motivation was to have an improved authoring experience, thanks to Drupal 8 and CKEditor. I'd like to expand my publishing efforts to provide more tutorials and deep dives into topics.

Jun 04 2017
Jun 04

One of the reasons that I love Drupal 8 is the fact it is object oriented and uses the Dependency Injection pattern with a centralized service container. If you're new to the concept, here are some links for some fun reading.

But for now the basics are: Things define their dependencies, and a centralized thing is able to give you an object instance with all of those dependencies provided. You don’t need to manually construct a class and provide its dependencies (constructor arguments.)
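
As a rough sketch (the mymodule.thing_manager service and ThingManager class here are made-up names for illustration), the difference looks like this:

// Manual construction: you need to know about and build every dependency
// yourself before you can use the object.
$connection = \Drupal\Core\Database\Database::getConnection();
$logger = \Drupal::logger('mymodule');
$thing = new \Drupal\mymodule\ThingManager($connection, $logger);

// Container construction: the service definition already declares its
// dependencies, so the container hands you a fully wired instance.
$thing = \Drupal::service('mymodule.thing_manager');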

This also means we do not have to use concrete classes! That means you can modify the class used for a service without ripping apart other code. Yay for being decoupled(ish)!

Why is this cool?

So that’s great, and all. But let’s actually use a real example to show how AWESOME this is. In Drupal Commerce we have the commerce_cart.cart_session service. This is how we know if an anonymous user has a cart or not. We assume this service will implement the \Drupal\commerce_cart\CartSessionInterface interface, which means we don’t care how you tell us, just tell us via our agreed methods.

The default class uses the native session handling. But we’re going to swap that out and use cookies instead. Why? Because skipping the session will preserve page cache while browsing the site catalogs and product pages.

Let’s do it

Let’s kick it off by creating a module called commerce_cart_cookies. This will swap out the existing commerce_cart.cart_session service to use our own implementation which relies on cookies instead of the PHP session.

The obvious: we need a commerce_cart_cookies.info.yml

name: Commerce Cart Cookies 
description: Uses cookies for cart session instead of PHP sessions 
core: 8.x 
type: module 
dependencies:
  - commerce_cart

Now we need to create our class which will replace the default session handling. I’m not going to go into what the entire code would look like to satisfy the class, but the generic class would resemble the following. You can find a repo for this project at the end of the article.

<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\commerce_cart\CartSessionInterface;
use Symfony\Component\HttpFoundation\RequestStack;

/**
 * Uses cookies to track active carts.
 *
 * We inject the request stack to handle cookies within the Request object,
 * and not directly.
 */
class CookieCartSession implements CartSessionInterface {

  /**
   * The current request.
   *
   * @var \Symfony\Component\HttpFoundation\Request
   */
  protected $request;

  /**
   * Creates a new CookieCartSession object.
   *
   * @param \Symfony\Component\HttpFoundation\RequestStack $request_stack
   *   The request stack.
   */
  public function __construct(RequestStack $request_stack) {
    $this->request = $request_stack->getCurrentRequest();
  }

  /**
   * {@inheritdoc}
   */
  public function getCartIds($type = self::ACTIVE) {
    // TODO: Implement getCartIds() method.
  }

  /**
   * {@inheritdoc}
   */
  public function addCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement addCartId() method.
  }

  /**
   * {@inheritdoc}
   */
  public function hasCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement hasCartId() method.
  }

  /**
   * {@inheritdoc}
   */
  public function deleteCartId($cart_id, $type = self::ACTIVE) {
    // TODO: Implement deleteCartId() method.
  }

}

Next, we’re going to make our service provider class. This is a bit magical, as we do not actually register it anywhere; it just needs to exist. Drupal will look for classes that end in ServiceProvider within all enabled modules. Based on the implementation you can add or alter services registered in the service container when it is being compiled (which is why the process is called a rebuild, not just a cache clear, in Drupal 8). The class name must be the camel-cased version of your module name followed by ServiceProvider. So our class will be CommerceCartCookiesServiceProvider.

Create a src directory in your module and a CommerceCartCookiesServiceProvider.php file within it. Let’s scaffold out the bare minimum for our class.

<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\Core\DependencyInjection\ServiceProviderBase;

class CommerceCartCookiesServiceProvider extends ServiceProviderBase { }

Luckily for us all, core provides \Drupal\Core\DependencyInjection\ServiceProviderBase for us. This base class implements ServiceProviderInterface and ServiceModifierInterface to make it easier for us to modify the container. Let’s override the alter method so we can prepare to modify the commerce_cart.cart_session service.

<?php

namespace Drupal\commerce_cart_cookies;

use Drupal\Core\DependencyInjection\ContainerBuilder;
use Drupal\Core\DependencyInjection\ServiceProviderBase;
use Symfony\Component\DependencyInjection\Reference;

class CommerceCartCookiesServiceProvider extends ServiceProviderBase {
  /**
   * {@inheritdoc}
   */
  public function alter(ContainerBuilder $container) {
    if ($container->hasDefinition('commerce_cart.cart_session')) {
      $container->getDefinition('commerce_cart.cart_session')
        ->setClass(CookieCartSession::class)
        ->setArguments([new Reference('request_stack')]);
    }
  }
}

We update the definition for commerce_cart.cart_session to use our class name, and also change its arguments to reflect our dependency on the request stack. The default service injects the session handler, whereas we need the request stack so we can retrieve cookies off of the current request.

The cart session service will now use our provided class when the container is rebuilt!
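
If you want to sanity check the swap after rebuilding, a quick snippet (run from drush php-eval or similar) should report our class:

// Prints "Drupal\commerce_cart_cookies\CookieCartSession" once the container
// has been rebuilt with our service provider in place.
print get_class(\Drupal::service('commerce_cart.cart_session'));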

Apr 27 2017
Apr 27

DrupalVM is a tool created by Jeff Geerling that “makes building local Drupal development environments quick and easy” for Drupal. It is built using Vagrant and provisioned with Ansible. Since it uses Ansible, it also provides a means to support a production environment deployment. This allows for a repeatable and determinable environment when developing and deploying to remote servers.

In fact, I currently use DrupalVM to power a Drupal Commerce 2 environment that I run locally and in production. The production environment is updated using continuous deployment via CircleCI. The wonderful roles created by Jeff Geerling and his playbook in DrupalVM handle my production deployment.

Want the tl;dr? Skip to the bottom for the example.

Setting up DrupalVM within your project

First things first, you’ll need to use DrupalVM. In this example we will add DrupalVM as a project dependency, using Composer. This is important. It ensures that any developer using this project, the CI/CD environment, and the final destination all have the same version of files. It also makes it easier to manage and receive upstream fixes.

For my project I followed Jeff’s blog post Soup to Nuts to configure DrupalVM and set up my DigitalOcean droplet to be my production target. You might want to read that over if you have yet to see how DrupalVM can go to production. I won’t copy those steps here; I’ll show how you can get CircleCI to deploy your DrupalVM configuration to a remote host.

See the article for full details, or my example linked later. For now I’ll do a quick review of some basic config.

The inventory file:

[drupalvm]
192.168.1.12 ansible_ssh_user=drupalvm

The config.yml file:

drupal_domain: "example.com"
vagrant_hostname: "{{ drupal_domain }}"

apache_vhosts:
  - servername: "{{ drupal_domain }}"
    documentroot: "{{ drupal_core_path }}"
    extra_parameters: "{{ apache_vhost_php_fpm_parameters }}"


The vagrant.config.yml file:

# Note, {{ drupal_domain }} overridden for Vagrant to use local.* prefix.
drupal_domain: "local.example.com"
The prod.config.yml file:

drupal_deploy: true
drupal_deploy_repo: "[email protected]:organization/repo.git"
drupal_deploy_dir: "/var/www/drupal"

Those are the only tidbits in my production configuration. I treat the config.yml as primary with specific non-production overrides in vagrant.config.yml.

Adding CircleCI

NOTE! You’ll need to be able to deploy from CircleCI to a remote server, which means adding SSH permissions. With CircleCI you must create a key on your server and add the private key to the project configuration.

Okay! So DrupalVM is running, you have accessed your site. Now, let’s run this sucker with CircleCI to do testing and deployment. Create a circle.yml file and let us walk through writing it.

First, we need to specify what language we’re using. In this case, I am using a PHP 7.0 image.

machine:
  php:
    version: 7.0.7

Defining dependencies

Next we want to set up our dependencies. Dependencies are things that we will need to actually run our tests, and they can be cached to speed up future runs. I’ll annotate specific steps using comments within the YAML.

dependencies:
  # Cache the caches for Composer and PIP, because package download is always a PITA and time eater.
  cache_directories:
    - ~/.composer/cache
    - ~/.cache/pip
  pre:
    # Install Ansible
    - pip install ansible
    - pip install --upgrade setuptools
    - echo $ANSIBLE_VAULT_PASSWORD > ~/.vault.txt
    # Disable Xdebug (performance) and set the timezone.
    - echo "date.timezone = 'America/Chicago'"  > /opt/circleci/php/7.0.7/etc/conf.d/xdebug.ini
  override:
    # I save a GitHub personal OAuth token for when Composer beats down GitHub's API limits
    - git config --global github.accesstoken $GITHUB_OAUTH_TOKEN
    - composer config -g github-oauth.github.com $GITHUB_OAUTH_TOKEN
    - composer install --prefer-dist --no-interaction

Running tests

Before deploying anything to production or staging, we should make sure it does not deploy broken code.

test:
  pre:
    # Containers come with PhantomJS, so let's use it for JavaScript testing.
    # We specify it to run in background so that CircleCI saves output as build artifacts for later review.
    - phantomjs --webdriver=4444:
        background: true
    # Sometimes the test script can get cranky when this directory is missing, so make it.
    - mkdir web/sites/simpletest
    # Use the PHP built-in web server for tests, also run in background for log artifacts.
    - php -S localhost:8080 -t web:
        background: true
  override:
    # Run some tests
    - ./bin/phpunit --testsuite unit --group commerce
    - ./bin/phpunit --testsuite kernel --group commerce
    - ./bin/behat -f junit -o $CIRCLE_TEST_REPORTS -f pretty -o std

Saving artifacts

In the event that our tests do fail, it would be great to see additional artifacts. Any process that had background: true defined will have its logs available. Adding the following lines will let us access HTML output from the PHPUnit Functional tests and any Behat tests.

general:
  # Expose test output folders as artifacts so we can review when failures happen.
  artifacts:
    # Folder where Functional/FunctionalJavascript dump HTML output
    - "web/sites/simpletest/browser_output"
    # Folder where Behat generates HTML/PNG output for step failures
    - "tests/failures"

Setting up deployment

Now, it is time to tell CircleCI about our deployment process. Nothing too magical here. We are simply defining that when the master branch passes, a specific command runs. This command loads our production config and runs the DrupalVM provisioning playbook on our production server. We specify the drupal tags to limit the scope of deployment.

deployment:
  prod:
    branch: master
    commands:
    # Specify the DrupalVM environment and fire off provisioning.
    # DRUPALVM_ENV specifies which *.config.yml to load, config_dir points to
    # the directory containing your config.yml files, and the Ansible user's
    # sudo password is exported as an environment variable so provisioning
    # can become root.
    - >
      DRUPALVM_ENV=prod ansible-playbook
      -i vm/inventory
      vendor/geerlingguy/drupal-vm/provisioning/playbook.yml
      -e "config_dir=$(pwd)/vm"
      --sudo
      --tags=drupal
      --extra-vars "ansible_become_pass=${ANSIBLE_VAULT_PASSWORD}"

If your tests fail then the deployment will not run.

Example

Want to see an example of the full config? See my Drupal Commerce 2 composer project template

Mar 20 2017
Mar 20

I can proudly say that we have been on top of our test coverage in Drupal Commerce. Back in June of 2016 we had removed any trace of Simpletest based tests and moved over to PHPUnit Unit, Kernel, Functional, and FunctionalJavascript tests. Once using PhantomJS for JavaScript testing landed in core we jumped ship. Test coverage is great for the individual project because we can ensure that we ship an (assumedly, mostly) bug-free product. But I believe we should do more than that. So I built my own commerce-project-template.

What is a project template? Well, you can pass it to Composer and get a fully set up Drupal 8 project skeleton. You'd run something like

composer create-project mglaman/commerce-project-template some-dir --stability dev --no-interaction

The end result is a built Drupal 8 site, with Drupal Commerce. You will also have a configuration for using Behat testing out of the box, with existing Drupal Commerce coverage provided. This means you can just tweak and add along the way. I have also added CircleCI and TravisCI integration, providing an example of how to ship your Drupal Commerce project with continuous integration to make sure you deliver a functioning project.

Running Tests

The project comes with a phpunit.xml.dist which has been set up to allow you to run any PHPUnit tests provided by Drupal core or contrib from the root directory. Here's an example of how to run the Commerce unit and kernel tests

./bin/phpunit --testsuite unit --group commerce
./bin/phpunit --testsuite kernel --group commerce

This makes it simpler for you to write your own PHPUnit tests for client code. The PHPUnit file shipped with Drupal core assumes it'll stay in the root core directory, meaning it can get lost on any Drupal core update, which is annoying. I use this setup to provide basic unit and kernel tests for API integrations on our Drupal Commerce projects.
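
For example, a bare-bones unit test in your own module (the module and class names here are placeholders) only needs to extend UnitTestCase; assuming the testsuite paths in that phpunit.xml.dist include custom modules, it will be picked up in the unit suite:

<?php

namespace Drupal\Tests\mymodule\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * @group mymodule
 */
class ExampleCalculationTest extends UnitTestCase {

  /**
   * A trivial assertion proving custom module tests run from the project root.
   */
  public function testAddition() {
    $this->assertSame(4, 2 + 2);
  }

}

Running ./bin/phpunit --testsuite unit --group mymodule from the project root should execute it.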

The best part is Behat, of course!

  Scenario: Anonymous users can access checkout
    When anonymous checkout is enabled
      And I am on "/product/1"
      Then I should see "Commerce Guys Hoodie"
    When I press "Add to cart"
      Then I should see "Commerce Guys Hoodie - Cyan, Small added to your cart."
      And I click "your cart"
    Then I press "Checkout"

This allows us to make sure a user can visit the product and add it to cart and reach the checkout. It's obviously quite simple but is also an important check. You can see more examples here: https://github.com/mglaman/commerce-project-template/tree/master/tests/…

Docker ready

In order to have a reproducible testing environment, the repository also contains my Docker setup. It is contained in a docker-compose.yml.dist so that it can be copied and modified. The config/docker directory contains the PHP, nginx, and MariaDB configurations. It ships with MailHog as an SMTP server so that you can debug emails easily. I used the MailHog SMTP server when working on the order receipts we provide in Drupal Commerce 2. And customer communication is a big deal with e-commerce.

Docker also provides a simpler way to ship a way to test Search API backed by Solr.

A way to provide a demo

The project has a script to install my mglaman/commerce_demo project, which provides base products and other configuration to try out Drupal Commerce. This is the base content for the Behat tests. So, if you want to try out Drupal Commerce 2 or pitch it to a client, CxO, or a friend this project makes it pretty simple to spin up an example Drupal Commerce 2 site.

What's next?

Next steps are to add an example catalog backed by Search API into the demo module using the database storage. Once that's set I'll work to have it use Solr as storage and test that, along with custom Solr configuration examples. I'd also like to show some deployment step examples in circle.yml.

Mar 12 2017
Mar 12

DrupalCamp London is, according to various sources and rumors, the biggest camp in Europe. It comes up in size next to BADCamp and DrupalCamp Mumbai, coming in the 400 to 600 attendee range. That is quite a feat, and I am honored to have gotten to experience this DrupalCamp.

In January I was asked to be one of the keynotes. After a mini jump around and freak out, I, of course, said yes. Then I had a moment of "what in the world am I going to talk about." I've presented at camps before, and I have been able to copresent at the past two DrupalCons about Drupal Commerce alongside Ryan and Bojan.

But a keynote can't just be another "here's Drupal Commerce" or "Here's how this development process goes!" To me, at least, I look at keynotes to plant a seed of motivation that gets me kicked in gear for the conference. It reminds me of why I was excited to go to the camp and see all these amazing people again.

This was my first keynote. It was my first time going to London. It was my second time going to Europe, following up to DrupalCon Dublin. Two known names in the Drupal and open source community were slated as the Sunday keynotes. I spoke on Saturday, the mood setter. I guess a slight panic can describe my mentality at first.

So, I sat and I thought. And then I remembered the Dries keynote at Dublin. How he highlighted Drupal changing lives. And I thought, "well, Drupal sure as hell did make some changes my life." So I wanted to tell my story. Drupal opened opportunities for me and I made some lucky choices at the right place and time. That made me think: what if I shared my story and how open source, specifically Drupal, made this impact. Maybe it'd catch a handful of newcomers in the crowd and inspire them like many people did at my first DrupalCamp in Atlanta.

So I shared my story. I showed how the progression of finding Drupal, becoming part of the community, and having the community support provided a career and very big changes in my life.

Five years ago I was slinging half barrels, full of dyed green beer for St. Patrick's Day, into bar basements. Four years ago I built my first major Drupal (via Drupal Commerce) site. Three years ago I got to speak at DrupalCamp Florida. Two years ago I went to my first DrupalCon in Los Angeles. Now I'm here, shocked and awed at this journey which is only beginning.

The conference was great. The organization was spot on, great venue and session rooms. Unfortunately, I did not catch any other sessions. Right after the keynote, I hit the sprint room to finish up my slides and wrap up some pull requests. Right after lunch, I gave my Drupal Commerce session. And right afterward Scott Hooker gave his site builder Drupal Commerce session. Then it was party / social time!

I think we also rocked the social night pub just a bit. We drained them of Guinness and spent over £5000. Granted, some of that was food. Well, maybe a little. 

It was a great camp. I am more than looking forward to attending it once more.

Oct 22 2016
Oct 22

Developers have many tools. We have version control systems, we have dependency management tools, we have build and task automation tools. What is one thing they all have in common? They are command line tools.

However, the command line can be daunting to some and create a barrier to adoption. We’re currently experiencing this in the Drupal community. People are considering Composer a barrier to entry, and I believe it’s just because it’s a command line tool and something new.

Providing a user interface

User interfaces can make tools more usable. Or at least lower the barrier to entry. I like to think of Atlassian's SourceTree. SourceTree is an amazing Git user interface and was my entry into learning how to use Git.

If you work with Java, your IDE provides some sort of user interface for managing your dependencies via Maven.

The PhpStorm IDE provides rudimentary Composer support - initializing a project and adding dependencies - but doesn’t support entire workflows. It’s also proprietary.

Here’s where I introduce Conductor: a standalone Composer user interface built on Electron. The purpose of Conductor is to give users working with PHP projects an interface for managing their projects outside of the command line.

Hello, Conductor

Conductor interfaces

Conductor provides an interface for:

  • Creating a new project based on a Composer project
  • Managing projects to install, update, add, and remove dependencies
  • Viewing dependencies inside of a project
  • Updating or removing individual dependencies by reviewing them inside of the project
  • Running Composer commands from the user interface and reviewing console output

The project is in initial development, kicked off during the downtime the Dyn DDoS attack created.

The initial application is now a bit beyond a minimal viable product. It works. It can be used. But now it needs feedback from users who feel a barrier in having to use Composer, and code improvements as well.

Head over to GitHub and check out the project: https://github.com/mglaman/conductor. You can download an initial alpha release from https://github.com/mglaman/conductor/releases/latest, or clone the repo and run it in development mode.

Oct 10 2016
Oct 10

Drupal Commerce was started without writing any Drupal code. Our libraries set Drupal Commerce off the island before Drupal was able to support using third party libraries not provided by core.

Drupal now ships without third party libraries committed, fully using Composer for managing outside dependencies. However, that does not mean the community and core developers have everything figured out, quite yet.

YNACP: Yet Another Composer Post. Yes. Because as a co-maintainer of Drupal Commerce we're experiencing quite a lot of issue queue frustration. I also want to make the case of "let's make life easier" for working with Drupal. As you read, compare the manual sans-Composer process for local development and remote deployment versus the Composer flows.

Before we begin

We're going to be discussing Composer. There are specific terms I'll cover first.

  • composer.json: defines metadata about the project and dependencies for the project.
  • composer.lock: metadata file containing computed information about dependencies and the expected install state.
  • composer install: downloads and installs dependencies, and builds the class autoloader. If a .lock file is available it will install based off of that metadata. Otherwise it will calculate and resolve the download information for dependencies.
  • composer update: updates defined dependencies and rebuilds the lock file.
  • composer require: adds a new dependency, updates the JSON and .lock file.
  • composer remove: removes a dependency, updates the JSON and .lock file.

All Composer commands need to run in the same directory as your composer.json file.

Installing Drupal

There are multiple ways to install Drupal. This article focuses on working with Composer, for general installation help review the official documentation at https://www.drupal.org/docs/8/install

Install from packaged archive

Drupal.org has a packaging system which provides zip and tar archives. These archives come with all third party dependencies downloaded.

You download the archive, extract the contents and have an installable Drupal instance. The extracted contents will contain the vendor directory and a composer.lock file.

Install via Composer template

A community initiative was started to provide a Composer optimized project installation for Drupal. The Drupal Composer project provided a version of Drupal core which could be installed via Composer and a mirror of Drupal.org projects via a Composer endpoint (This has been deprecated in favor of the Drupal.org endpoint).

To get started you run the create-project command. 

composer create-project drupal-composer/drupal-project:8.x-dev some-dir --stability dev --no-interaction

This will create a some-dir folder which holds the vendor directory and a web root directory (Drupal). This allows you to install Drupal within a subdirectory of the project, which is a common application structure.

This also keeps your third party libraries out of access from your web server.

Review the repository for documentation on how to use the project, including adding and updating core/projects: https://github.com/drupal-composer/drupal-project.

Adding dependencies to Drupal

Without Composer

Modules, themes, and profiles are added to Drupal by placing them in a specific directory. This can be done by visiting Drupal.org, downloading the packaged archive, and extracting it to the proper location.

There's a problem with this process: it's manual and does not ensure any of the project's dependencies were downloaded. Luckily Composer is a package and dependency manager!

With Composer

To add a dependency we use the composer require command. This will record the dependency and download any dependencies of its own.

Note, if you did not use the project template: currently there is no out of the box way to add Drupal.org projects to a standard Drupal installation. You will need to run a command to register the Drupal.org Composer endpoint.

composer config repositories.drupal composer https://packages.drupal.org/8

Let's use the Panels module as an example. Running the following command would add it to your Drupal project.

composer require drupal/panels

This will install the latest stable version of the Panels module. If you inspect your composer.json file you should see something like the following

"require": {
  "drupal/panels": "3.0-beta4",
}

One of the key components is the version specification. This tells Composer what version it can install, and how it can update.

  • 3.0 will be considered a specific version and never update.
  • ~3.0 will consider any patch version as a possible installation option, such as new betas, RCs.
  • ~3 will allow any minor releases to be considered for install or update.
  • ^3.0 will match anything under the major release — allowing any minor or patch release.

You can specify version constraints when adding a dependency as well. This way you can define if you will allow minor or patch updates when updating.

composer require drupal/panels:~3.0

This will allow versions 3.0-beta5, 3.0-rc1, and 3.0 to be valid update versions.

Guess what! The same versioning patterns exist in npm and other package managers.

Updating dependencies

Without Composer

As stated with installing dependencies, it could be done manually. But this requires knowing if any additional dependencies need to be updated. In fact, this is becoming a common issue in the Drupal.org issue queues.

With Composer

Again, this is where Composer is utilized and simplifies package management.

Going from our previous example, let's say that Panels has a new patch release. We want to update it. We would run

composer update drupal/panels --with-dependencies

This will update the Panels module and any of its dependencies. Why is this important? What if Panels required the newest version of Entity Reference Revisions for a critical fix? Without a package manager, we would not have known or updated it.

Why we need --with-dependencies

When Composer updates a dependency, it does not automatically update its dependencies. Why? No idea, apparently the maintainers do not believe it should.

Updating Drupal core

Without the Composer template, documented method

If you installed Drupal through the normal process, via an extracted archive, you have to manually update in the same fashion. You will need to remove all files provided by Drupal core — *including your possibly modified composer.json file*.

Of course, you can move your modified .htaccess, composer.json, or robots.txt aside and move them back afterward. However, you’ll need to make sure your composer.json matches the current Drupal core’s requirements and run composer update.

That’s difficult.

The official documentation: https://www.drupal.org/docs/7/updating-your-drupal-site/update-procedur…
 

UPDATE: Without the Composer template, still using just Composer.

Bojan alerted me that there is a way to update Drupal core without the Composer template using just composer. It requires a minor modification of the composer.json file that is shipped with Drupal core. You just need to move the drupal/core definition out of replace and into require.

The end result should look like the following

    "require": {
        "composer/installers": "^1.0.21",
        "wikimedia/composer-merge-plugin": "~1.3",
        "drupal/core": "~8.2"
    },
    "replace": { },

You can then run composer update drupal/core --with-dependencies and have an up to date Drupal. However, none of the root files (index.php, etc) will be modified.

Updating Drupal core via the Composer template

If you have setup Drupal with the Composer template or any Composer based workflow, all you need to do is run the following command (assuming you’ve tagged the drupal/core dependency as ^8.x.x or ~8, ~8.1, ~8.2)

composer update drupal/core --with-dependencies

This will update Drupal core and its files alongside the drupal-composer/drupal-scaffold project.

Using patches with Composer

I have been a fan of using build tools with Drupal, specifically using Drush and makefiles to manage Drupal platforms. However, when I first used Composer I was concerned about how to use patches or pull requests not yet merged into the project — without maintaining some kind of fork.

Cameron Eagans created the composer-patches project. This will apply patches to your dependencies. The project’s README fully documents its use, so I’ll cover it quickly here.

Patches are listed under a patches key within the extra section of the composer.json file.

  "extra": {
    "patches": {
      "drupal/commerce”: {
        "#2805625: Add a new service to manage the product variation rendering": "https://www.drupal.org/files/issues/add_a_new_service_to-2805625-4.patch"
      }
    }
  }

This patches Drupal Commerce with a specific patch. 
   

Using GitHub PRs as a patch

Patches are great, as they let you use uncommitted functionality immediately. A problem can arise when you need code from a GitHub pull request (or so it seems.) For instance, Drupal Commerce is developed on GitHub since DrupalCI doesn’t support Composer and contributed projects yet.

Luckily we can take the PR for the issue used in the example https://github.com/drupalcommerce/commerce/pull/511 and add .patch to it to retrieve a patch file: https://github.com/drupalcommerce/commerce/pull/511.patch

We could then update our composer.json to use the pull request’s patch URL and always have an up to date version of the patch.

  "extra": {
    "patches": {
      "drupal/commerce”: {
        "#2805625: Add a new service to manage the product variation rendering": "https://github.com/drupalcommerce/commerce/pull/511.patch"
      }
    }
  }
Jul 12 2016
Jul 12

When we talk about Drupal Commerce 2, the biggest questions we get are not about features, but about when you can start building with it. Well, the answer is and has been: now! Drupal Commerce 2 has been in alpha and is nearing beta. What does this mean? In alpha we might have some schema changes, requiring a reinstall of your site. Luckily Drupal 8 has that fancy new configuration management system to export your site, right?

But what about products! That's data you lose on each reinstall. Luckily, we have Migrate in core and Commerce Migrate has been kicked off for Drupal 8. Generally speaking, most e-commerce sites have some sort of CSV or other file format containing product information. You can use Migrate to import that data and begin building your Drupal Commerce 2 site.

Inconceivable?! Nay! See the Commerce Demo sandbox I have created. This provides a T-Shirt product type with size and color attributes. It also imports sample products from a CSV. The CSV mimics the flat format you might receive from an ERP, so hopefully you can re-use it!

Here's how you can add it to your Drupal site using Composer.

    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/mglaman/commerce_demo"
        }
    ],
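The repository entry only tells Composer where to find the code; you still need to require the package. A minimal sketch, assuming the module's composer.json names the package drupal/commerce_demo (check the project's own composer.json for the actual name):

    composer require drupal/commerce_demo:dev-master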

Otherwise, download it from GitHub: https://github.com/mglaman/commerce_demo/archive/master.zip

What's next? This module will showcase the new flexibility and control you have over your Drupal Commerce site, with best practices in mind.

May 23 2016
May 23

As part of the push to deprecate SimpleTest and use PHPUnit as the test runner in Drupal 8, there is the \Drupal\Tests\BrowserTestBase class. The BrowserTestBase provides a Mink runner that tests web pages in Drupal. Unlike kernel tests, which require a database and can be run via PHPUnit as well, browser tests use your default database connection. I prefer to run my tests with SQLite as I do not need to have my Docker containers running.

The simplest solution I have found, thus far, is to check the HTTP_USER_AGENT in settings.php:

// In settings.php: when the request identifies itself with Drupal's
// command-line user agent, swap the default connection for an on-disk
// SQLite database so tests do not need the regular database containers.
if ($_SERVER['HTTP_USER_AGENT'] == 'Drupal command line') {
  $databases['default']['default'] = array(
    'driver' => 'sqlite',
    'database' => '/tmp/test.sqlite',
    'namespace' => 'Drupal\\Core\\Database\\Driver\\sqlite',
  );
}

With this, and running php -S localhost:8080 I'm able to write my Drupal Commerce browser tests without having my normal Docker containers running.
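For reference, here is a rough sketch of how such a run can look; the test path is only an example, and SIMPLETEST_BASE_URL tells BrowserTestBase where the site is being served:

    # Serve the Drupal docroot with PHP's built-in web server.
    php -S localhost:8080 &

    # Run a browser test against that server (example test path).
    SIMPLETEST_BASE_URL=http://localhost:8080 \
      ./vendor/bin/phpunit -c core/phpunit.xml.dist \
      modules/contrib/commerce/tests/src/Functional/CartTest.php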

May 19 2016
May 19

First Steps

Plans were made back in December 2015 to put effort into supporting Migrate with Drupal Commerce to speed up adoption of Drupal Commerce 2.0. Commerce Migrate for Drupal 8 will provide migrations from Drupal Commerce 1.x, Ubercart for D6, and Ubercart for D7. Ideally, this module will also support other platforms, such as Magento and WooCommerce.

Before official work began on the 8.x branch, we had a contributor start with an Ubercart 6 port! Contributor creativepragmatic created a sandbox fork and commenced the Drupal 8 work. That code has let creativepragmatic continue development on their Drupal 6 to Drupal 8 site migration.

Midwest Drupal Camp

MidCamp kicked off the official start of the Commerce Migrate 8.x branch, merging in creativepragmatic's work. I sprinted on creating a database test fixture for Commerce Migrate's tests. I chose the Commerce Kickstart 2 demonstration store as our test base! That means all tests prove we can migrate a Commerce Kickstart 2 demo site to Drupal Commerce 2.x. Work was somewhat slow and stopped short, as 8.1.x was pending release and brought a change to how Migrate worked: "Migrations are plugins instead of configuration entities." We left MidCamp, however, with the database test fixture and initial tests and migration components.

DrupalCon New Orleans

Work on Commerce Migrate remained on pause until DrupalCon New Orleans. By this time Drupal 8.1.1 was released and the Migrate module was slightly more mature. Our focus during the conference was to push forward the Commerce 1.x to Commerce 2.x migration path since there is a method to test it.

During DrupalCon, a few conference goers approached the booth with questions about migrating Ubercart D6 sites to Commerce 2.x. As mentioned previously, creativepragmatic wrote the initial code. Until there is a sanitized sample dataset, we cannot fully work on the Ubercart migrations or guarantee them. (If you have data you would like to contribute, please contact me!)

Headway was made, however, on the Commerce 1.x migration front. Tests have been updated to the kernel test format, following a change in Migrate. These tests are now passing for billing profile, line item, product (variations in 2.x), and product type entities. A process plugin to handle migrating Commerce Price fields from 1.x to 2.x was added and is running on product and line item values. Other fields do not yet have a supported migration.

What is next?

The next steps are to provide a process plugin to migrate Addressfield data to the field provided by Address. We also must create process plugins for each of the reference fields provided by Commerce 1.x: product, profile, and line item. With these items completed, orders can be fully migrated.

The largest task will be the migration of Commerce 1.x product displays to Commerce 2.x product entities. This requires finding nodes that have a product reference field.

While migrating from an existing site might not quite work yet, you can start using Migrate to import data. See the Commerce Demo module I am working on: https://github.com/mglaman/commerce_demo. It provides an example of importing a CSV that you might receive from an ERP/PIM and creates Drupal Commerce products, variations, and attributes.

Also, check out creativepragmatic's original work, which has some documentation on initial migration gotchas: https://github.com/creativepragmatic/commerce_migrate
