Jul 07 2019

ReactPHP is an event-driven, non-blocking PHP framework that allows you to work in a long-running script through an event loop. At its core, ReactPHP provides an event loop and utilities to trigger events at specific intervals and run your code. This is different from normal PHP script execution, which has a short lifecycle and runs per individual request.

ReactPHP has been used to build web server applications, web socket servers and more. But, what if we used ReactPHP to execute operations and tasks on a Drupal application?

Technically this could be feasible with a set of cron jobs scheduled at specific intervals which invoke Drush or Drupal Console commands. But there is a limitation there: the ability to manipulate cron job entries for the user whenever a deployment occurs. Hosting providers like Platform.sh support this, but only on the main application container. Worker containers do not support cron job definitions. Also, what about handling errors?

We can combine the ReactPHP event loop with the ReactPHP ChildProcess library to run our command line tools. The child process attaches to the event loop and lets us stream output from STDOUT and STDERR. This allows us to log output from the command or handle errors which occur during these background processes.

It is easiest to create a function that executes a new child process, provides an event loop for it, and binds events to the output streams.  

function run_command(string $command): void {
  $loop = React\EventLoop\Factory::create();
  $process = new React\ChildProcess\Process($command);
  $process->start($loop);
  $process->on('exit', function ($exitCode) use ($command) {
    // Trigger alerts that the command finished.
  });
  $process->stdout->on('data', function ($chunk) {
    // Optionally log the output.
  });
  $process->stdout->on('error', function (Exception $e) use ($command) {
    // Log an error.
  });
  $process->stderr->on('data', function ($chunk) use ($command) {
    if (!empty(trim($chunk))) {
      // Log output from stderr
    }
  });
  $process->stderr->on('error', function (Exception $e) use ($command) {
    // Log an error.
  });
  $loop->run();
}

I am a big fan of Rollbar and have logs sent there.
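
For instance, the stderr handler in run_command() could forward output there. A minimal sketch, assuming the official rollbar/rollbar package is installed and \Rollbar\Rollbar::init() has already been called during setup:

$process->stderr->on('data', function ($chunk) use ($command) {
  if (!empty(trim($chunk))) {
    // Report stderr output to Rollbar, including the command for context.
    \Rollbar\Rollbar::log(
      \Rollbar\Payload\Level::ERROR,
      sprintf('%s: %s', $command, trim($chunk))
    );
  }
});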

Now, let's create our main event loop which will run our commands at different intervals. We can run cron every twenty minutes:

$loop = React\EventLoop\Factory::create();
// Run cron every twenty minutes.
$loop->addPeriodicTimer(1200, function () {
  run_command('drush cron');
});
$loop->run();

If you're using a queueing system to process jobs asynchronously via the Advanced Queue module, you may want more continuous processing that acts as a daemon.

$loop = React\EventLoop\Factory::create();
// Every thirty seconds, process jobs from queue1
$loop->addPeriodicTimer(30, function () {
  run_command('drush advancedqueue:queue:process queue1');
});
// Every two minutes, process jobs from queue2
$loop->addPeriodicTimer(120, function () {
  run_command('drush advancedqueue:queue:process queue2');
});
$loop->run();

This lets us run Drush commands as child processes in a ReactPHP event loop to perform tasks. What if we actually bootstrapped Drupal and invoked our code directly instead of through child processes? We can!

To start, we need to bootstrap Drupal. This requires the creation of a request object and a mocked route. The DrupalKernel and other components are coupled to the request containing some meta information about a route. Luckily, Drupal supports a <none> route.

We require the autoloader and create our request object. I usually have my PHP scripts in a scripts directory at my project root, so my autoloader is in ../vendor/autoload.php.

$autoloader = require __DIR__ . '/../vendor/autoload.php';

$request = Symfony\Component\HttpFoundation\Request::createFromGlobals();
$request->attributes->set(
    Symfony\Cmf\Component\Routing\RouteObjectInterface::ROUTE_OBJECT,
    new Symfony\Component\Routing\Route('<none>')
);
$request->attributes->set(
    Symfony\Cmf\Component\Routing\RouteObjectInterface::ROUTE_NAME,
     '<none>'
);

Next, we bootstrap the DrupalKernel. First, we run the bootEnvironment method, which sets up some required information for Drupal. Next, we specify the site path which contains the settings.php we want to use; generally, it is sites/default. Then we just boot the kernel and run the pre-handle of the request to get everything up and running.

$kernel = new Drupal\Core\DrupalKernel('prod', $autoloader);
$kernel::bootEnvironment();
$kernel->setSitePath('sites/default');
Drupal\Core\Site\Settings::initialize($kernel->getAppRoot(), $kernel->getSitePath(), $autoloader);
$kernel->boot();
$kernel->preHandle($request);

Now we can have our event loop tick away and execute our code as needed.

$loop = React\EventLoop\Factory::create();
$loop->addPeriodicTimer(10, function () {
  $cron = \Drupal::service('cron');
  $cron->run();
});
$loop->run();

Here is a link to a GitHub gist of the complete files from this post: https://gist.github.com/mglaman/6f5b0b2194d2f5ec7f1a40d00dd5ca6c

Jul 02 2019

The end of May brought two exciting releases for PHPStan and the PHPStan Deprecation Rules extension. With the release of PHPStan v0.11.8, descriptions added to the @deprecated tag can be parsed and returned in rule checks. Version 0.11.2 of the PHPStan Deprecation Rules extension implements this feature and returns that description instead of generic "Call to deprecated X Y" messaging.

Profile module deprecated message output
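
As an illustration, consider a docblock like the following (the method and message are hypothetical, following Drupal's deprecation message style). The extension can now surface the explanatory sentence instead of a generic warning:

/**
 * Loads the thing.
 *
 * @deprecated in drupal:8.7.0 and is removed from drupal:9.0.0.
 *   Use \Drupal\mymodule\ThingRepository::load() instead.
 */
public function loadThing() {}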

It is exciting to see this functionality in the wild. The original feature request, from November 2018, required feature development in PHPStan's phpdoc-parser and PHPStan itself. The first bits of code landed in mid-March, and the complete functionality was released by the end of May. Gábor Hojtsy has already updated the Upgrade Status module to take advantage of these features.

Drupal Check was a little behind, but the ability to display messages now exists. The v1.0.11 release will display deprecated code usage descriptions. The tool currently requires a development tag of PHPStan until v0.11.9 is released; Drupal Check performs some unique loading techniques to support the various ways users install the tool, and those broke in PHPStan v0.11.8.

Here are those commits, for those interested:

  • Ignore files using a stream wrapper: loading configurations prefixed with phar:// was causing errors when using the Drupal Check binary.
  • Support PhpAdapter for configurations: to support project-level installations and global Composer installs, PHP configuration is used to discover the vendor directory dynamically. Support for the PhpAdapter had to be added back.

Contributing to another project has been a fun and challenging experience. PHPStan uses the Nette framework, and this has been my first experience working with its components. It also involved contributing to different projects to ensure a piece of functionality was available. My initial work was done in PHPStan itself, which was hacky. Ondřej identified that it should belong in the phpdoc-parser library. I have briefly worked with PHP in an abstract syntax tree format or as tokens, but I had never tried to extract description messages based on a tag in a document block. All I can say is thank you, Derick Rethans, for creating and maintaining Xdebug. Without the ability to use Xdebug for step debugging, I would have spent many more days working through this code.

Support Open Source

If you are using any of these open source tools, I highly recommend supporting their maintainers.

All of my work for Drupal Check and PHPStan has been supported by Centarro for our Centarro Toolbox + Support, as part of our Quality Monitor service.

Support PHPStan development through its Patreon page: https://www.patreon.com/join/phpstan

Support Xdebug through Derick Rethans' Patreon page: https://www.patreon.com/bePatron?u=7864328

May 12 2019

You may have heard of Drupal Check. You may wonder what in the world it is or how it even came to be. It went from an internal research and development task for a product, to an open source contribution, to an essential tool in the march toward Drupal 9. The timeline from January to DrupalCon in April has been pretty crazy, and I realized I have never done a proper blog post about Drupal Check.

In January I wrote a blog post about the PHPStan Drupal extension for writing better Drupal code with static code analysis. That quickly exploded and turned into a way to detect code deprecations on the journey to Drupal 9. Then that also took off. The problem was setting up PHPStan and making it more automated and repeatable.

The result of this dive into PHPStan and Drupal?

  • The blogs and buzz led to the creation of the command line tool Drupal Check. I released Drupal Check as a way to have a no-fuss (spoiler alert: there were still dragons to be found) bundled configuration for running PHPStan against a Drupal site to check deprecations.
  • At MidCamp the contributors scanned 1,692 modules. 1,692. Dwayne McDaniel from Pantheon has summarized it on their blog: https://pantheon.io/blog/your-module-ready-drupal-9-click-here-find-out.
  • There was the development of a new Drupal 8 version of the Upgrade Status module. The new Drupal 8 version uses PHPStan for in-site deprecation detection and reporting as a non-command line solution.
  • At DrupalCon Seattle, Dries even talked about Drupal Check in his State of Drupal presentation's How to Prepare for Drupal 9 section.

That is a lot that happened in just a few months.

How did this all get started?

Well, if you missed the news, Commerce Guys have rebranded as Centarro. At DrupalCon Seattle we unveiled Centarro Toolbox, a collection of SaaS products and support packages that help Drupal Commerce teams build with confidence. One of the products in our Centarro Toolbox is the Quality Monitor.

Quality Monitor performs proactive analysis on your codebase to detect problems in development before you deploy to production. It notifies you of necessary code updates and inappropriate API usage.

Drupal Check is, essentially, an open source version of Quality Monitor without our defined static analysis rules. I think this is awesome. We are an open source company. We are a team of seven. During the Driesnote, I realized four of us are in the Top 50 Contributors list. That's huge.

I love that our company's evolution is having an even more significant impact in open source than we expected.

Want to learn more about Centarro Toolbox? Check it out: https://www.centarro.io/products/centarro-toolbox

What does Drupal Check do?

Drupal Check is a customized runner for PHPStan. Instead of running PHPStan directly, you run Drupal Check. It includes PHPStan, PHPStan's Deprecation Rules, and PHPStan Drupal, along with configurations for them.

Here's an example of running it against the Address module and Drupal 8.8.x with just deprecation detection:
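
A representative invocation (assuming the -d flag, which limits Drupal Check to deprecation rules; the path is illustrative):

drupal-check -d web/modules/contrib/address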

Here we go, again, with the default PHPStan static analysis rules:
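
And a representative command for that mode (assuming the -a flag enables the default analysis rules):

drupal-check -a web/modules/contrib/address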

Why is this useful? Because documenting configuration is hard. It also needs to be taught, and that takes time at contribution events. The configuration files are available at https://github.com/mglaman/drupal-check/tree/master/phpstan

However, here's a quick overview of what is set up when running deprecation checks:

parameters:
        # Don't warn about rules not being executed.
        customRulesetUsed: true
        reportUnmatchedIgnoredErrors: false
        # Ignore things which cause problems.
        excludes_analyse:
                - */tests/Drupal/Tests/Listeners/Legacy/*
                - */tests/fixtures/*.php
                - */settings*.php
        # Ignore phpstan-drupal extension's rules.
        ignoreErrors:
                - '#\Drupal calls should be avoided in classes, use dependency injection instead#'
                - '#Plugin definitions cannot be altered.#'
                - '#Missing cache backend declaration for performance.#'
                - '#Plugin manager has cache backend specified but does not declare cache tags.#'
services:
        # mglaman/phpstan-junit formatter (pre-checkstyle output)
        errorFormatter.junit:
                class: PHPStan\Command\ErrorFormatter\JUnitErrorFormatter
includes:
        - ../vendor/phpstan/phpstan-deprecation-rules/rules.neon

What can't Drupal Check do?

Drupal Check, via PHPStan, can bootstrap Drupal's code and inspect it for errors or deprecations, but it cannot fix code errors. At the end of April, Dezső Biczó wrote an interesting blog: Drupal 9: Automated Deprecated Code Removal - A Proof of Concept. The library announced in the blog post integrates Rector with custom code fixing definitions. It'd be interesting to add this functionality to Drupal Check.

The current features of Drupal Check are dependent on those in PHPStan. PHPStan's deprecation rules do not output the deprecation messages; it only identifies something that is deprecated. Better display of deprecation messages is an open feature request which I have been actively working on. Descriptions for deprecations are now available but do not support multiple line descriptions. If you're interested in following along, development is actively happening in this phpstan/phpdoc-parser pull request: https://github.com/phpstan/phpdoc-parser/pull/26

Drupal Check does not support JavaScript and Twig deprecations, which came up in the Driesnote at DrupalCon Seattle. Drupal does not even have a deprecation policy for JavaScript. I know there were discussions at DrupalCon Seattle, but I am not sure of the status of that discussion. Twig has listed deprecations; however, I don't know if they are detected. There would likely need to be a Twig extension for PHPStan.

What's new since Seattle?

Here are some things which are new since DrupalCon Seattle.

PHPStan Drupal improvements:

Drupal Check improvements:

  • Test coverage! Added multiple CircleCI jobs to perform integration tests.
  • Fixed various installation problems (Phar, composer global require, cgr) which broke autoloading of Drupal (still to do: composer local install).

What's next?

The next significant feature is to finalize the deprecation description features. Being able to explain why something was deprecated and how to fix it (part of the required deprecation messaging for Drupal core) is vital. Things also get deprecated that cannot be immediately corrected. For instance, things deprecated in Drupal 8.7.x technically cannot be fixed in contributed modules until older Drupal versions are no longer supported.

I would also like to fix some PHPStan issues. There is an "Easy fixes" milestone on the PHPStan project: https://github.com/phpstan/phpstan/milestone/8

May 11 2019

It seems that I do a roughly annual update for ContribKanban and what I plan on doing with it. This year I evaluated its future and roadmap and how it can be more useful for the community at large.

Hosting!

ContribKanban has been hosted on one of my own DigitalOcean VMs for a while now. The project now has sponsored hosting from amazee.io on their open source Lagoon platform infrastructure. This means I no longer need to worry about a server's maintenance, and it gives me development environments for better testing and review.

I have also started to work on a command line tool for interacting with Lagoon: https://github.com/mglaman/lagoon-cli

Issue Collection Boards

Last year I built a way to create a board based on a collection of issues. It was inspired by Shawn McCabe 'smccabe' at AcroMedia, who had a running spreadsheet of contrib tasks employees could work on. Issue Collection boards allow you to take issue IDs and make a custom board. They can be private, shared, or public. Issue collection boards can only be shared by link and are not browseable except from your user account page.

  • A public board allows anyone to edit and add issues.
  • A shared board can be viewed by anyone with the link.
  • A private board is for just you!

Ofer Shaal 'shaal' of the Out of the Box Initiative has adopted ContribKanban as a sprint planning tool for each Drupal release.

See it over at https://contribkanban.com/node-board/91839946-8af5-45b0-822b-35b4375743f0

To create an issue collection board, you need an account on ContribKanban. My next goal is to improve the user experience for creating and managing these boards. It is very basic. 

Some of the things I would like to do:

Browser Extension

Stemming from the issue collection boards, I would like to make a browser extension that allows adding an issue to a board while browsing Drupal.org. I am imagining something like Pinterest or Noun Project: click "Add to board" and select the board it should be added to, or create a new one.

I would also like the widget to display if an issue has already been added to a board.

OpenCollective and CodeFund

ContribKanban has always been a side project. It originated as a way to learn AngularJS. Then Drupal with VanillaJS. Then React with a Drupal backend. I have added ethical advertising by CodeFund to ContribKanban. CodeFund is an open source and ethical ad platform that funds contributors to the open source ecosystem.

I have also created a ContribKanban collective on Open Collective. This makes it easier to accept contributions and comes from recommendations from several community members.

To be honest, I work on ContribKanban very sporadically - partly because it works, and partly because I work on things which make an impact on my revenue. For instance, I worked on ContribKanban in April 2019, but the last activity before that was January 2019. Before that? October 2018. I would like to stabilize this, and the funding options (funded or not) make me feel morally obligated to keep an active touch on the project.

Logo

One thing I would like to do is get a logo for the project: https://github.com/mglaman/contribkanban.com/issues/128. I have no idea what it should look like. But it would be useful now that there is an Open Collective collective.

Thanks!

Thanks for reading. Thank you to everyone who has used it and provided feedback. I'd love to hear how you're using ContribKanban.

Feb 03 2019

Last month I wrote about writing better Drupal code with static analysis using PHPStan. One of the more practical uses I saw for PHPStan and Drupal was the discovery of deprecated code usages through the phpstan/phpstan-deprecation-rules package. I had not fully tested it until this week.

I'm really excited for Drupal 9: to trim down old code by updating to Symfony 4 (or 5) and cutting all of our deprecated APIs. When I started writing the Drupal 8 Development Cookbook there was the entity_manager. The entity_manager was deprecated in 8.0.0 (yes, before Drupal 8 even came out!) and still lingers in a lot of core's code. Of course, this is how software "just goes." But Drupal 8 had a lot of churn as things from Drupal 7 were rewritten and hard lessons in OOP PHP were learned.

The need to track deprecations is about more than cleaning up our own code. It's about ensuring our compatibility with our dependencies, namely Symfony and PHPUnit. One of the main tasks for Drupal 9 readiness is resolving usages of deprecated code. In fact, Lee Rowlands (larowlan) and Gábor Hojtsy run office hours for this topic.

I have wanted to contribute, but 9 am UTC is 3 am CST. I am also pretty busy at Commerce Guys with our own codebase for Drupal Commerce and numerous contrib projects. But our platform is built on Drupal, so having Drupal chug forward is a big deal.

You can disable PHPStan's analysis rules and use it just for discovering deprecated code. And so I found a way I could help contribute to the effort by building some tooling!

Automating deprecation testing for Drupal core

The Drupal 9 group has needed a way to automate deprecated code tracking and improve their tooling. So I set up a sample Drupal 8 repository that runs PHPStan with deprecation rules! The repository can be found at: https://github.com/mglaman/drupal-deprecation-testing

Currently, it is set up with TravisCI and CircleCI (needs updating.) Without a committed composer.lock, and with Drupal set to a constraint of drupal/core:8.7.x-dev, the latest HEAD of Drupal core will always be tested.

Due to the sheer size of Drupal core's codebase and the memory requirements, I had to chunk up the jobs and batch run directories at a time.
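
To give a sense of the approach, a sketch of chunked runs (the directory list is illustrative, not the repository's exact job matrix):

./vendor/bin/phpstan analyse web/core/lib/Drupal/Component
./vendor/bin/phpstan analyse web/core/lib/Drupal/Core
./vendor/bin/phpstan analyse web/core/modules/node web/core/modules/system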

Update!

Since writing this post, using PHPStan to find deprecations in Drupal has taken off. You can go ahead and follow the rest of the blog post, but I would like to reference a tool called Drupal Check. This is a standalone binary (PHP Phar) that will run deprecation checks without needing to install anything with Composer. You can find it, and instructions, over here: https://github.com/mglaman/drupal-check

Testing your Drupal codebase for deprecations

You can test your code, too! Whether you want to test your client's codebase or your own contribs, it is easy to set up deprecation code testing.

To get started, you need to add mglaman/phpstan-drupal and phpstan/phpstan-deprecation-rules as developer dependencies.

composer require mglaman/phpstan-drupal phpstan/phpstan-deprecation-rules --dev

Then create a phpstan.neon file. To only test deprecations, use this setup:

parameters:
        customRulesetUsed: true
        reportUnmatchedIgnoredErrors: false
        # Ignore phpstan-drupal extension's rules.
        ignoreErrors:
                - '#\Drupal calls should be avoided in classes, use dependency injection instead#'
                - '#Plugin definitions cannot be altered.#'
                - '#Missing cache backend declaration for performance.#'
                - '#Plugin manager has cache backend specified but does not declare cache tags.#'
includes:
        - vendor/mglaman/phpstan-drupal/extension.neon
        - vendor/phpstan/phpstan-deprecation-rules/rules.neon

This disables the level argument for PHPStan's analysis rules and also ignores errors from the Drupal extension for PHPStan. I opened an issue to try and streamline this: https://github.com/mglaman/phpstan-drupal/issues/30

Running deprecation tests on TravisCI for contrib

I set up a sample repository that runs Drupal Commerce and our dependencies against PHPStan for deprecation checks. You can find it here: https://github.com/mglaman/commerce-deprecation-testing

It runs a job for each module: https://travis-ci.com/mglaman/commerce-deprecation-testing

Here is a sample of the .travis.yml. It will cache Composer and PHPStan's generated output (which makes later analysis faster.) There's also a weird warning about APCu constants if APCu is not enabled, so the extension is added.

language: php
dist: trusty
sudo: false
cache:
  directories:
    - $HOME/.composer/cache/files
    - $HOME/.composer/cache/repo
    - $TMPDIR/phpstan/cache
php:
  - 7.2

env:
  matrix:
    - ANALYZE=address
    - ANALYZE=commerce
    - ANALYZE=entity
    - ANALYZE=entity_reference_revisions
    - ANALYZE=inline_entity_form
    - ANALYZE=profile
    - ANALYZE=state_machine

before_install:
  - echo 'sendmail_path = /bin/true' >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini
  - echo 'memory_limit = -1' >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini
  - echo "extension = apcu.so" >> ~/.phpenv/versions/$(phpenv version-name)/etc/conf.d/travis.ini

  - phpenv config-rm xdebug.ini

install:
  - composer global require "hirak/prestissimo:^0.3"
  - composer install -n --prefer-dist --no-suggest

script:
  - ./vendor/bin/phpstan analyse web/modules/contrib/$ANALYZE

Jan 14 2019

MidCamp, the Midwest Drupal Camp, is coming around the corner! March 20th through the 23rd, hundreds of Drupalistas will converge on Chicago for training workshops, contribution sprints, and sessions! This is one of my favorite conferences. The organizers put so much thought and effort into each detail.

What else makes MidCamp special? It's a DrupalCamp right before DrupalCon (April 8th.) This makes MidCamp a prime contribution sprint camp. Need to plan for an initiative? Need to get together with other sprint leads? Want to ease into contribution so you can rock it at DrupalCon? Come to MidCamp!

Drupal 8.7.0 is planned to have its first alpha released on March 18th. The alpha release is a feature freeze of the 8.7.x branch. Let's use MidCamp to help squash bugs and triage the issue queues to help make all of the Drupal 8.7.0 goals in preparation for DrupalCon Seattle!

This year I am working to coordinate the contribution sprints. If you're interested, reach out on the Drupal Slack or join MidCamp's Slack as well. I am going to start reaching out to initiative leads to see how we can help during MidCamp.

Mentors will be available to help new contributors! Mentors help new contributors navigate Drupal.org, learn how the issue queues work, and more. Although they have not been announced, I am fairly certain there will be relevant training sessions available for those looking to get started with contributing.

Documentation initiative

The goal of the documentation initiative is to improve the Drupal evaluator, developer, and site builder experiences through improved documentation on Drupal.org.

This is a great initiative for non-developers.

Claro admin theme

The Claro admin theme has been added to Drupal.org. This will be a great way for designers and front end developers to contribute.

We’re proposing to update the administration look and feel through a new design system for the Drupal administration UI. A design system consists of visual and behavioural components and patterns that can be combined into user-friendly user interfaces.

Dries has recently written about this as well: https://dri.es/refreshing-the-drupal-administration-ui

Drupal 9 readiness

There are quite a few issues that are working to get Drupal 9 readiness underway. Most of this is handling Symfony 4/5 updates.

Looking to get involved? Check out the #d9readiness channel in Drupal Slack.

Out of the box is always growing and needs assistance. A strong out of the box demo makes Drupal easier to sell and showcase.

Jan 09 2019

PHP is a loosely typed, interpreted language. That means we cannot compile our scripts and find possible execution errors without doing explicit inspections of our code. It also means we need to rely on conditional type checking or use phpDoc comments to tell other developers or IDEs what kind of value to expect. Really, there is no way to assess the quality of the code or discover possible bugs without thorough test coverage and regular review.

If you use PhpStorm, you will notice all of its helpers which analyze your code and provide static analysis, such as "Watch out, there's a good chance you did not catch this exception!" PhpStorm reads phpDoc annotations to help provide type checking as well. This gets kicked up to level 100 using the Php Inspections (EA/SA) plugin.

That's awesome. It's pretty amazing that PhpStorm and a few plugins can give us some stability in our PHP code that allows acting like we might be working in a compiled language (okay, that's a stretch, but the point is there.)

There are a few problems with this approach, though:

  1. Writing custom inspections, like for Drupal, requires learning Java and building a PhpStorm plugin.
  2. Everyone on the development team needs to have PhpStorm.
  3. You can't execute this over a CI process and make it codified.

But, wait! What about PHP_CodeSniffer? PHPCS gives us some benefits, but it is not a full static analysis tool that would help us find bugs in our software. When running PHPCS, individual files are tokenized and then parsed. This allows for a line-by-line analysis of an individual file. It does not, however, let you check if the class you referenced actually exists, or catch that your method expects MySpecificObjectInterface while, in reality, several calls will pass SomeOtherObjectInterface.
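
A contrived example of the difference, using the interfaces named above:

<?php

interface MySpecificObjectInterface {}
interface SomeOtherObjectInterface {}

class SomeOtherObject implements SomeOtherObjectInterface {}

function formatThing(MySpecificObjectInterface $thing): void {}

// PHPCS tokenizes and sniffs this file line by line, so coding standards
// pass while the type error goes unnoticed. A static analysis tool
// resolves the types and reports the mismatched argument below.
formatThing(new SomeOtherObject());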

There are quite a few static analysis tools out there, but none will work with Drupal out of the box. Why? Because Drupal has a magical autoloading system that does not get dumped into Composer's autoloader. The tool needed to be extendable. And that's how I found PHPStan. What I really like about PHPStan is that it is able to inspect your entire codebase and find out if a class does not exist or is called incorrectly, as if you had actually compiled your PHP project without ever running it.

So, what happens when you try to run PHPStan and analyze a Drupal module (or core) out of the box?

PHPStan executed against the Address module, without the Drupal extension

Nothing. Because PHPStan has no idea how to load any of the files. PHPStan relies on being able to load classes or functions through Composer's generated autoload information. When you enable a module or a theme, none of the information on how to load its files is added to the Composer autoload information. All of that data is set up when Drupal's container is built. 

A Drupal extension for PHPStan

I would like to introduce phpstan-drupal. I spent two weeks in December working on this extension so that I could analyze Drupal Commerce, our dependencies, and Drupal core itself. It was definitely a fun challenge and even uncovered a bug in Drupal core due to a duplicate function name in a test module.

PHPStan run against the state_machine module, with the Drupal extension

To get started, you need to add mglaman/phpstan-drupal as a developer dependency.

composer require mglaman/phpstan-drupal --dev

Then create a phpstan.neon file. Here's an example that I am using:

parameters:
        # Ignore tests
        excludes_analyse:
                - *Test.php
                - *TestBase.php
        # PHPStan Level 1
        level: 1
includes:
        # Add the phpstan-drupal extension
        - vendor/mglaman/phpstan-drupal/extension.neon

Now, let's dive into some more details.

Bootstrapping Drupal's autoloading and namespaces without a database

Everything about Drupal's bootstrap and container requires the database. At first, I tried initializing DrupalKernel and only touching methods which did not reach into the database (or at least tried to mock it.) That was a big failure. I wish I had started writing this blog as I went down those rabbit holes.

The extension supports discovering Drupal in the following scenarios: a vanilla Drupal project setup, and Composer project template setups with either a web or docroot directory.

The first task was to make copies of the extension discovery classes. Since Drupal core has a dependency on Symfony 3, it could not be added as a developer dependency -- PHPStan uses Symfony 4's Console component.

$this->extensionDiscovery = new ExtensionDiscovery($this->drupalRoot);
$this->extensionDiscovery->setProfileDirectories([]);
$profiles = $this->extensionDiscovery->scan('profile');
$profile_directories = array_map(function ($profile) {
  return $profile->getPath();
}, $profiles);
$this->extensionDiscovery->setProfileDirectories($profile_directories);
$this->moduleData = $this->extensionDiscovery->scan('module');
$this->themeData = $this->extensionDiscovery->scan('theme');

There's a lot to walk through, but the source is here: https://github.com/mglaman/phpstan-drupal/blob/master/src/Drupal/Bootst…. This loads legacy include files, adds namespaces, ensures extension files can be loaded, and loads hook_hook_info files.

Return typing from the service container

When you fetch a service from the container, nothing defines what should be returned. In PhpStorm, you can use the Drupal Symfony Bridge to provide typing from the services container. For example, entity_type.manager would be known to be EntityTypeManagerInterface.

The extension loads all available services.yml files from core and modules. These are then parsed and put into a ServicesMap. I borrowed the concepts from the PHPStan Symfony extension. However, the Symfony container gets dumped and is easier to load and parse; it is not nearly as dynamic as Drupal's.

foreach ($extensionDiscovery->scan('module') as $extension) {
  $module_dir = $this->drupalRoot . '/' . $extension->getPath();
  $moduleName = $extension->getName();
  $servicesFileName = $module_dir . '/' . $moduleName . '.services.yml';
  if (file_exists($servicesFileName)) {
    $serviceYamls[$moduleName] = $servicesFileName;
  }
  $camelized = $this->camelize($extension->getName());
  $name = "{$camelized}ServiceProvider";
  $class = "Drupal\\{$moduleName}\\{$name}";

  if (class_exists($class)) {
    $serviceClassProviders[$moduleName] = $class;
  }
}

The PHPStan Drupal extension implements a DynamicMethodReturnTypeExtension rule that will return the proper class object type based on the requested service.

There are some gotchas. It does not work well for services which define calls, factory, and configurator.
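
To illustrate what the dynamic return type enables, a short sketch (the misspelled method is intentional):

// Without the extension, PHPStan only knows this returns some object.
$entityTypeManager = \Drupal::service('entity_type.manager');
// With the ServicesMap, PHPStan infers EntityTypeManagerInterface here,
// so the typo below is reported as a call to an unknown method.
$entityTypeManager->getStorge('node');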

Dynamic return typing for entity storage from the entity type manager

The extension also provides a DynamicMethodReturnTypeExtension for the entity type manager service. Since modules have custom entities and custom storages, this is something which can be configured in your phpstan.neon file. The extension provides some defaults:

drupal:
        entityTypeStorageMapping:
                node: Drupal\node\NodeStorage
                taxonomy_term: Drupal\taxonomy\TermStorage
                user: Drupal\user\UserStorage
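
With that mapping in place, calls through the entity type manager resolve to the concrete storage class. A small sketch of the effect (assume $node is a loaded node):

// PHPStan now treats this as \Drupal\node\NodeStorage, not a generic
// storage interface, so storage-specific methods type-check.
$node_storage = \Drupal::entityTypeManager()->getStorage('node');
$revision_ids = $node_storage->revisionIds($node);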

What else does it have?

  • GlobalDrupalDependencyInjectionRule: don't call \Drupal::service when dependency injection is possible (performance, code standards)
  • DiscouragedFunctionsRule: copied from phpcs
  • PluginManagerSetsCacheBackendRule: catch a plugin manager which does not set a cache backend for its definitions (performance.)
  • EnhancedRequireParentConstructCallRule: improves handling of empty parent constructor calls. YAML plugin managers do not call their parent constructor. Need to abstract out some other plugin manager assertions.

What's next?

I have no idea!

Jan 06 2019

2018 was a weird year. It felt like it just zoomed by and nothing eventful happened. In fact, I feel like most people I have talked to this year felt the same exact way. But, then I sat down to my end of year write up and realized that this year was way more packed than I thought. Work was fun and full of travel. Our sons turned 7 and 3. We spent many Tuesday summer nights at the Kenosha Velodrome and our weekends at the Kenosha HarborMarket (and the accompanying Winter HarborMarket!)

2018 kicked off with learning about modern JavaScript development: using ECMAScript 6 features, transpiling with Babel, and learning how to work with ReactJS and Redux. ContribKanban.com was converted into a progressively decoupled Drupal application. That was a fun adventure and gave the application some more configurability. Essentially, boards contain lists and enough metadata to be a query builder against Drupal.org's API. Each list is its own API call to contain a list of issues, and inherits "defaults" from the board.

My headfirst dive into ReactJS was inspired by the initial work of Matthew Grill and Daniel Wehner, which would be the beginnings of the Admin UI / JavaScript Modernization Initiative. In a coincidence of events, Sally Young and Daniel were at Florida Drupal Camp early in the year. That's when I learned all about the JavaScript Modernization Initiative in full and got motivated to see what we at Commerce Guys could do to bring some fancy features to Drupal Commerce.

I spent a lot of 2018 going over Drupal core's RESTful Web Services module and the fledgling JSON API module. The result was a Cart API to allow for pure API-driven interaction with Drupal Commerce and cart management. The next step was to build our Cart Flyout module, which adds a progressively decoupled replacement for the add to cart form and cart block. Originally, I had built the flyout experience using ReactJS. But in discussion with Bojan, we decided we needed to prevent our users from having to inject yet another dependency (and a frontend one, at that.) So, I dove backward into the JavaScript of yore - Backbone.js and Underscore.js, the frontend libraries shipped with Drupal core. It made me appreciate the JavaScript and frameworks we have now, but it was also an interesting experience in working with different design patterns.

I still love watching this recording.

Florida Drupal Camp also introduced me to DDEV. Back then, in February 2018, the project was still in its infancy, but I really liked where it was going. It matched parts of my existing local development stack, which I was failing to maintain. Now I could just contribute to someone else's project and not have to maintain my own! DDEV is written in Go. I had poked at Go before, but nothing serious. When I got home from Florida, I set up the GoLand IDE and got hacking away, pushing a few pull requests and giving input on issues.

DDEV has become my default local development environment stack, and for projects at Commerce Guys. I wrote some articles throughout the year for using DDEV and they can be found with the "ddev" tag: https://glamanate.com/tags/ddev

"For 2018 I resolve to explore more coffee roasts and more whisky - but over books and not code!"

One of my resolutions was to read more books. And enjoy more whiskey. I sure did well on the latter but not as much on the former. I think I read about 10 books; my goal was twelve. However, I did get a lot of Audible time in and made my way through quite a few audiobooks and daily New York Times briefings.

I also wanted to blog more. I think I pulled that one off. I also wanted to follow Dries in his POSSE movement. Posts will auto-queue to Buffer and automatically publish on Medium. I wrote more. And I tried to focus on specific technologies and write multiple blogs about them (beyond Drupal tidbits.) This was mostly around DDEV and PhpStorm.

What's in store for 2019?

Here are a few goals for 2019. 

Keep reading and writing more.

I want to keep reading and writing more. I started and ended strong in 2018, but fell apart in the middle. I track my reading with GoodReads and have managed to use its API to generate my Christmas book wishlist easily. I am thinking about integrating my GoodReads data onto my website, as I found out it's pretty hard to share books you want to read and do not own.

A cleaner computer.

I have no idea. But I do have some hopes and "resolutions". Like, I need to buy a stupid amount of electric wipes and keep them with me so my Macbook screen doesn't always look sloppy. I feel like a hot mess when I'm around others and I bring out my laptop. Electric wipes. Or microfibers. Something. Ditto for the keyboard.

Drupal.org contributions.

According to a blog post by Dries, I made it to #8 in the top individual contributors on Drupal.org (disclaimer: this is based on issue credits and generally code contributions; it does not account for the immense efforts of documentation contributors and event organizers.) What's interesting is the rank and issue counts.

  • 2018 #8, 414 issues
  • 2017 #6, 334 issues
  • 2016 #9, 292 issues

Each year the number of issues grew, but the rank jumped around. I'm hoping to see myself drop below #15 in 2019. Not that I am going to work on Drupal and Drupal Commerce less, but that the contribution space becomes more competitive. Drupal core initiatives and Drupal Camps have started using the issue queues to attribute users who attend meetings or make an impact. It's a way to "hijack" the issue credit system and give notice to non-code contributions.

Gettin' decoupled with it.

I would like to keep pushing forward with the decoupled items. Some of my experiments in 2018 included using GatsbyJS to consume a Drupal Commerce product catalog and add eCommerce components to allow add to cart and perform check out over an API. At the beginning of this year, I tried out Flutter for mobile apps. I would love to try and make a sample native app that allows shopping a Drupal Commerce store. I definitely prefer Flutter over React Native and I am excited to hack on it during my spare time.

Cut out the noise.

Pennoyer Park in Kenosha, looking out to Lake Michigan.

Sometime in October, I removed the Drupal Slack from my Slack application. A few times in the year I had added it on my phone, which was a mistake. It is useful when at a conference, but having it always open when trying to do work was a huge distraction. Especially with some of the comments.

I also took a month or so off of social media, which was awesome. I need to do that again. Too much consumption and overloading. Too much noise. Reading and the audiobooks helped cut this out, and morning runs outside versus "morning indoor media indulgence."

Integrate Webmention.io fully

I have the Webmention.io basics set up. I need to check out the IndieWeb module next. However, as of this writing, I somehow broke my IndieWeb authentication capabilities. Maybe I broke something when I changed my theme.

Oct 19 2018

This is the third, and final post in my series on running Drupal’s various test suites using the DDEV local development stack. Previously I covered running Drupal’s Unit, Kernel, and Functional tests and then running Chromedriver to execute the FunctionalJavascript test suite. In this post, I will talk about running the newly introduced Nightwatch.js test framework.

What is Nightwatch? Straight from the website: "Write End-to-End tests in Node.js quickly and effortlessly that run against a Selenium/WebDriver server.” Added in 8.6, Nightwatch makes it easier to test Drupal’s JavaScript and expands on testing limitations found in the PHPUnit FunctionalJavascript tests — it’s not exactly practical to test JavaScript with PHP! I won’t talk about writing tests in Nightwatch, just running them. You can head over to the documentation to learn more about writing JavaScript tests in Nightwatch.

In order to run Nightwatch, you need to have a few dependencies available: Node.js, Chrome (and Chromedriver), and Yarn. Luckily for us, the web container for DDEV comes with Node.js and Yarn preinstalled! My last post covered getting up and running with Chromedriver as an additional service in your DDEV project, but I’ll reiterate that quickly.

We are going to use a Chromedriver Docker image provided by DrupalCI — the one used when executing tests on Drupal.org. To do this, we need to define a custom service for DDEV. It is pretty simple: we just need to add a file called docker-compose.chromedriver.yml to our project's .ddev directory.

version: '3.6'
services:
  chromedriver:
    container_name: ddev-${DDEV_SITENAME}-chromedriver
    image: drupalci/chromedriver:production
    labels:
    # These labels ensure this service is discoverable by ddev
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
  # This links the Chromedriver service to the web service defined
  # in the main docker-compose.yml, allowing applications running
  # in the web service to access the driver at `chromedriver`.
  web:
    links:
      - chromedriver:$DDEV_HOSTNAME

This file goes right into the project’s .ddev directory. Now, restart your project and bring the new service online. The Docker image will be pulled down and the container started, managed by DDEV.

Now that you have Chromedriver available, let's get ready to run Nightwatch.

To get set up, we need to install the Node.js dependencies for Drupal core. Inside of the core directory, run yarn install to download the Node.js dependencies.

ddev ssh
cd core
yarn install

Next, we need to configure Nightwatch by copying .env.example to .env.

cp .env.example .env

We need to edit the DRUPAL_TEST_BASE_URL variable and set it to http://web, just like our phpunit.xml in the previous posts. By default, Nightwatch will try to use SQLite. DDEV did not provide SQLite until its latest version (LINK). So, just in case, we will configure it to use the database container. Change DRUPAL_TEST_DB_URL to mysql://db:[email protected]/db, just like our phpunit.xml configuration. Finally, we need to modify the hostname for Chromedriver. Change DRUPAL_TEST_WEBDRIVER_HOSTNAME from localhost to chromedriver.

We are almost ready. We have two more environment variables to change. Modify DRUPAL_TEST_CHROMEDRIVER_AUTOSTART from true to false. This tells Nightwatch to use the Chromedriver we have running as a service instead of trying to run it on the web container. We also need to pass some arguments to ensure Chrome runs headless. Uncomment DRUPAL_TEST_WEBDRIVER_CHROME_ARGS and set it to --disable-gpu --headless --no-sandbox.

Here’s an example of what a finished .env would look like (comments removed):

#############################
# General Test Environment #
#############################
DRUPAL_TEST_BASE_URL=http://web
DRUPAL_TEST_DB_URL=mysql://db:[email protected]/db

#############
# Webdriver #
#############
DRUPAL_TEST_WEBDRIVER_HOSTNAME=chromedriver
DRUPAL_TEST_WEBDRIVER_PORT=9515

################
# Chromedriver #
################
DRUPAL_TEST_CHROMEDRIVER_AUTOSTART=false
DRUPAL_TEST_WEBDRIVER_CHROME_ARGS="--disable-gpu --headless --no-sandbox"

##############
# Nightwatch #
##############
DRUPAL_NIGHTWATCH_OUTPUT=reports/nightwatch
DRUPAL_NIGHTWATCH_IGNORE_DIRECTORIES=node_modules,vendor,.*,sites/*/files,sites/*/private,sites/simpletest

We are ready to go! Still in the core folder, run yarn test:nightwatch --tag core to execute the tests.

Reports are generated in core/reports to see the outcomes of the tests executed. This directory contains JUnit XML output and console logs from the various tests — useful for debugging.

There you go, we have run the Nightwatch tests! It is not much yet, but the tests will be growing. I am hoping to write some of my first Nightwatch tests for Drupal soon, and write about the process.

Oct 16 2018

This is part two of a series on running Drupal's testing suites on DDEV. We left off last time with trying to execute a FunctionalJavascript test and having every test case skipped because no browser emulator was available. In this post, we will run through getting set up to execute Drupal's FunctionalJavascript tests inside of your DDEV web container.

Drupal 8.3 saw the ability to write tests that interact with JavaScript when PhantomJS was added to Drupal's testing capabilities. If you are not familiar with PhantomJS, it is a headless web browser built on QtWebKit. Then, in early 2018, a headless version of Chrome became available, essentially pushing PhantomJS into deprecated status. With Drupal 8.5 we now have the ability to execute WebDriver tests that run on headless Chrome. Honestly, this is a great improvement. The WebDriver API allows for using any client which respects its API (i.e., Firefox) and not one-off integrations as was required for PhantomJS.

With JavascriptTestBase deprecated in favor of WebDriverTestBase, you only need a WebDriver client to run Drupal core’s FunctionalJavascript tests. You may need PhantomJS to run some contributed project tests (like Drupal Commerce.) With that said, next we will get Chromedriver up and running so we can run some tests!

If you’re writing a JavaScript enabled test as of Drupal 8.5 and extending JavascriptTestBase you will be using the new WebDriver base class. Some core or contrib tests might extend LegacyJavascriptTestBase and still use PhantomJS. One reason: PhantomJS returned response status codes and WebDriver does not — so tests may have not been ported yet to support the new testing API.

To keep things simple, we are going to use a Chromedriver Docker image provided by DrupalCI — the one used when executing tests on Drupal.org. To do this, we need to define a custom service for DDEV. It is pretty simple: we just need to add a file called docker-compose.chromedriver.yml to our project's .ddev directory.

version: '3.6'
services:
  chromedriver:
    container_name: ddev-${DDEV_SITENAME}-chromedriver
    image: drupalci/chromedriver:production
    labels:
    # These labels ensure this service is discoverable by ddev
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
  # This links the Chromedriver service to the web service defined
  # in the main docker-compose.yml, allowing applications running
  # in the web service to access the driver at `chromedriver`.
  web:
    links:
      - chromedriver:$DDEV_HOSTNAME

This file goes right into the project's .ddev directory.

Now, restart your project and bring the new service online. The Docker image will be pulled down and the container started, managed by DDEV.

Now that we have Chromedriver running, we need to configure phpunit.xml once more. We need to change the value of SIMPLETEST_BASE_URL and add an environment variable called MINK_DRIVER_ARGS_WEBDRIVER. Previously we defined SIMPLETEST_BASE_URL as http://localhost. There is just one problem: Chrome is going to try and visit localhost and not receive a website! Luckily, via Docker's network, it can reach the web container through the hostname web, so we will need to change that value. The second environment variable passes WebDriver parameters to our service to let it know how it should be executed.

<php>
  <ini name="error_reporting" value="32767"/>
  <ini name="memory_limit" value="-1"/>
  <!-- Changed to http://web for Chromedriver access -->
  <env name="SIMPLETEST_BASE_URL" value="http://web"/>
  <env name="SIMPLETEST_DB" value="mysql://db:[email protected]/db"/>
  <env name="BROWSERTEST_OUTPUT_DIRECTORY" value=""/>
  <!-- Parameters pass to Chromedriver. -->
  <env name="MINK_DRIVER_ARGS_WEBDRIVER" value='["chrome", {"browserName":"chrome","chromeOptions":{"args":["--disable-gpu","--headless", "--no-sandbox"]}}, "http://chromedriver:9515"]'/>
</php>

You can keep SIMPLETEST_BASE_URL as http://web for running your Functional tests, as well. Inside of the web container, http://web resolves as localhost.

For me, Chromedriver kept crashing until I added the --no-sandbox flag. DrupalCI does not pass this, so I am not sure of the difference. But I kept it for those who may have the same problem.

Time to give it a run! Open a terminal and let’s run the Big Pipe module's FunctionalJavascript tests.

ddev ssh
../vendor/bin/phpunit -c core core/modules/big_pipe/tests/src/FunctionalJavascript

And you should see the tests pass!

If the tests come back as skipped, communication with Chromedriver has failed. If this happens, run this command and run the tests again. This will display log output from Chrome.

docker exec -it ddev-{YOUR_PROJECT_NAME}-chromedriver tail -f /tmp/chromedriver.log

Nightwatch.js

All right! We have now run all of Drupal's PHPUnit test suites. In my next post we will explore Drupal's newest testing framework: Nightwatch.js.

Sep 24 2018

I moved over to DDEV for my local development stack back in February. One of my favorite things is the ease of using Xdebug. You can configure Xdebug to always be enabled, or turn it on and off as needed (my preferred method.) When you have Xdebug enabled, it is also enabled for any PHP scripts executed over the command line. That means you can debug your Drush or Drupal Console scripts like a breeze!

This article is based on using Xdebug within PhpStorm, as it is my primary IDE.

When using PhpStorm with any non-local stack, you will have to set up a mapping. This allows PhpStorm to understand which files on your local stack match the contents within the PhpStorm project. For example, DDEV serves files from /var/www/html within its containers, but the project files are actually at /Users/myuser/Sites/awesomeproject on your machine.

If you haven't yet, set up the configuration for this mapping. Generally, I never do this upfront and wait until the first time I end up using Xdebug over a web request and PhpStorm prompts me to. In this configuration, you will provide a server name. I set this as the DDEV domain for my project.

Now, to get Xdebug over the CLI to work we need to ensure a specific environment variable is present: PHP_IDE_CONFIG. This contains the server name to be used, which PhpStorm maps to the configured servers for your project.

An example value would be:

PHP_IDE_CONFIG=serverName=myproject.ddev.local

Now, you could export this every time you want to debug a PHP script over the command line -- but that is tedious. To ensure this variable is always present, I use an environments file for DDEV. To do this, create a docker-compose.env.yaml file in your .ddev directory. DDEV will load this alongside its main Docker Compose file and merge it in.

version: '3.6'

services:
  web:
    environment:
    - PHP_IDE_CONFIG=serverName=myproject.ddev.local

And now you can have PHP scripts over the command line trigger step debugging via Xdebug!
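
For example, with the variable in place and Xdebug enabled, setting a breakpoint in PhpStorm and running a Drush command inside the container should trigger the debugger (assuming Drush is available in the project):

ddev ssh
drush cron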

Sep 22 2018

If you want to have multiple Solr indexes using Search API, you need to have a core for each index instance. For my local development stack, I use DDEV. The documentation has a basic example for setting up a Solr service, but I had quite the fun time figuring out how to ensure multiple cores.

After reading up on the Solr image for Docker, I discovered you can override the container's command and pass additional commands as well. I found the issue "Create multiple collections at startup" from Sep 2017, which summed up my desired outcome. You just need to add this as the command for your container to generate additional cores:

bash -e -c "precreate-core collection1; precreate-core collection2; solr-foreground"

This creates collection1 and collection2, and could create however many cores are needed.

To add Solr to your DDEV project with multiple cores, here is the docker-compose.solr.yaml you can add to your project:

version: '3.6'

services:
  solr:
    container_name: ddev-${DDEV_SITENAME}-solr
    image: solr:6.6
    command: 'bash -e -c "precreate-core collection1; precreate-core collection2; solr-foreground"'
    restart: "no"
    ports:
      - 8983
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8983
    volumes:
      - "./solr:/solr-conf"
  web:
    links:
      - solr:$DDEV_HOSTNAME
      # Platform.sh hostname alias
      - solr:solr.internal

Sep 17 2018

At Drupal Europe, Dries announced the release cycle and end of life for Drupal's current and next versions. Spoiler alert: I am beyond excited, but I wish the timeline could be expedited. More on that to follow.

Here is a quick breakdown:

  • Drupal 8 uses Symfony 3. Symfony 3 will be at its end-of-life at the end of 2021, forcing Drupal 8 into end-of-life.
  • Drupal 9 will use Symfony 4 or 5 and be released in 2020 (a year before end-of-life.)
  • Drupal 7 will also end-of-life at the end of 2021, matching Drupal 8 and simplifying security and release management.
Drupal 7, 8, 9 End of Life timeline

Timeline copied and cropped from https://dri.es/drupal-7-8-and-9

I feel some will use this as an excuse to bash and argue against our adoption of Symfony within Drupal core. Instead, I see it as a blessing. It is forcing Drupal to adopt new technology and force our hosting platforms and providers to do the same.

At a minimum, here are the aging technologies we are forced to support:

MySQL 5.5.3/MariaDB 5.5.20/Percona Server 5.5.8

MySQL 5.6 provides many improvements over 5.5, alone. Yes, you can use Drupal with MySQL 5.6, but the core product itself cannot harness any of its benefits. Or, what about MySQL 5.7 which includes support for JSON fields? In the 8.6 release, support for MySQL 8 was added -- but the only benefit is performance gains, not new technology.

PHP 5.5.9+ and 5.6

This is one of my biggest pain points. If you look at Drupal's PHP requirements page, we support 5.5.9+, 5.6, 7.0, 7.1, and 7.2. However, Drupal 8 will drop support for PHP 5.5 and 5.6 on March 6, 2019. PHP 5.5 was end-of-life 2 years ago and 5.6 will be at the end of 2018. Not to mention, 7.0 reached end-of-life before 5.6!

We need modern technology in our stacks.

I am aware that hosts were slow to adopt PHP 7. Know why? Because our software was slow to adopt it. We never forced them to update their technology stacks. Symfony 4 requires PHP 7.1.3 or higher. Meanwhile, a stable Symfony 5 is planned to be released in Nov 2019 (one year before the Drupal 9 release.) I wasn't able to find information on the minimum PHP requirement, but I would assume it is at least 7.2, which means 7.3 will have full support.

I am excited for 7.3, like beyond excited. Why? Because garbage collection is severely improved. And Drupal loves to create a lot of objects and trigger garbage collection during a request. For example, in Drupal 7 we had to disable it during requests to improve performance (it was worth the tradeoffs.) Here's a link to the PR which landed in 7.3.0beta3 and looks to improve the process to 5x its current speed.

Having MySQL 5.7 would allow us to fully use JSON fields. This would be immensely useful in Drupal Commerce.

And now we wait (well, don't wait, Contribute!)

I wish we could make this process happen earlier. But that would be pretty unreasonable. There is a lot to accomplish in Drupal core to make this happen. The release managers and core maintainers have done an amazing job at managing Drupal 8 and shipping stable releases. They now have had a whole additional release branch and process thrown at them.

I encourage you to embrace this change and find ways to contribute to the 9.0.x branch, which might be easier than you think. Part of the 9.0.x effort is removing deprecated code (which is pretty easy to find.) I'm not sure if a plan has been cooked up yet, but there are issues on the 9.x branch https://www.drupal.org/project/issues/drupal?version=9.x

Jul 06 2018

Today I was working on a custom Drupal 8 form where I needed an option to purge existing entities and their references on a parent entity before running an import. It seemed pretty straightforward until I saw "ghost" values persisting on the parent entity's inline entity form. Here's my journey down the rabbit hole to fix broken entity reference values.

The first thing I did was pretty straightforward. It's in a batch, so I grabbed a few entities and just deleted them, hoping magic would happen.

// Grab a batch of price list item IDs, load the entities, and delete them.
$limit = 25;
$price_list_item_ids = array_slice($price_list->getItemsIds(), 0, $limit);
$price_list_items = $price_list_item_storage->loadMultiple($price_list_item_ids);
$price_list_item_storage->delete($price_list_items);

That didn't work. When I would view the price list the references still existed, albeit broken.

So then I tried to run the filter empty items method, thinking "well, the value is empty." Fingers crossed, I did the following:

$price_list->get('items')->filterEmptyItems();

No dice. So I dug in to see why this wasn't working. The \Drupal\Core\Field\FieldItemList::filterEmptyItems method invokes \Drupal\Core\TypedData\Plugin\DataType\ItemList::filter and checks if the field item reports that it has an empty value.

  /**
   * {@inheritdoc}
   */
  public function filterEmptyItems() {
    $this->filter(function ($item) {
      return !$item->isEmpty();
    });
    return $this;
  }

Well, it should be empty, right? WRONG AGAIN. The EntityReferenceItem class doesn't care if the reference is valid or not, only if it has values in its properties. In my case, there was always a target_id set and sometimes $this->entity was populated from a previous access.

  /**
   * {@inheritdoc}
   */
  public function isEmpty() {
    // Avoid loading the entity by first checking the 'target_id'.
    if ($this->target_id !== NULL) {
      return FALSE;
    }
    if ($this->entity && $this->entity instanceof EntityInterface) {
      return FALSE;
    }
    return TRUE;
  }

So I took another approach. I checked whether validating the field would return constraint violations and let me remove field values that way. Running validate gave me a list of all the broken references... except it was nearly impossible to traverse them and discover which field value delta was incorrect.

$price_list->get('items')->validate();

So then I backtracked a bit. I noticed that the filter method for the item list was public. The filterEmptyItems method wasn't working for me, so why not roll my own! And I did. And it failed.

$items->filter(function (EntityReferenceItem $item) {
  return !$item->validate()->count();
});

Why? Because the ValidReference constraint is on the entity reference list class and not on the entity reference item class itself. There were never any violations at the field value level, only for the entire list of values.

Since I couldn't trust validations, I could trust that the computed entity field would be null when it was passed through the entity adapter. And that's how I got to my final version which removes all invalid entity reference values:

// The normal filter on empty items does not work, because entity
// reference only cares if the target_id is set, not that it is a viable
// reference. This is only checked on the constraints. But constraints
// do not provide enough data. So we use a custom filter.
$price_list->get('items')->filter(function (EntityReferenceItem $item) {
  return $item->entity !== NULL;
});
$price_list->save();
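
For completeness: the closure type-hints EntityReferenceItem, so the file doing the filtering needs the use statement for the core field type class:

use Drupal\Core\Field\Plugin\Field\FieldType\EntityReferenceItem;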

Hopefully, this aids someone else going down a unique rabbit hole of programmatically removing entities and mending references.

May 13 2018

I have been spending the month of May cleaning out my basement office. I have found a treasure trove of old papers and random things bringing up memories and other thoughts, just like my last blog post on an organization's values and the culture it creates. This time I found two legal pads which had my notes and planning for my first Drupal Meetup talk and first camp session.

My first talk about Drupal was at the Milwaukee Meetup. I gave a talk explaining the differences between the Omega theme's versions 3 and 4, which I decided would be like comparing apples to oranges.

I had just started my own Meetup in Kenosha and had not yet made it up to Milwaukee. I had been meaning to but failed to commit. Then one day David Snopek sent out an email saying the speaker for that month couldn't make it and asked if anyone wanted to talk about building themes with Omega. All of our sites were built using Omega 3 or Omega 4, so I decided to jump in and offer to talk. Before that talk, I had only presented briefly on Drupal at my own meetup, mostly amongst friends.

The slides are still available on SlideShare: https://www.slideshare.net/MattGlaman/gettin-responsive-using-omega-3-a…

A few months later I was encouraged by Mike Herchel to propose a session for Florida DrupalCamp. At work, I had begun using Panels more and more on each site build, over things like Context and Delta. It took a while, but I decided to talk about ways we were using Panels and custom layouts to build responsive and adaptive content.

Florida DrupalCamp 2014 was my second camp and first time speaking at a Drupal conference. I also had some Mediacurrent employees in my session who showed their work on Classy Panel Styles, a project to try and simplify customization of a pane's display.

My slides for that are still available on SlideShare as well: https://www.slideshare.net/MattGlaman/rockin-responsive-content-with-pa…

May 01 2018

This is a follow-up to my earlier blog post about enabling RESTful web services in Drupal 8. Jesus Olivas, one of the creators of Drupal Console, brought up that you can actually manage your RESTful endpoint plugin configurations directly from the command line!

If you are not familiar, Drupal Console is a command line utility built for managing Drupal 8. I use it mostly for code generation and some other basic management features, such as debugging the routing system. I talk about Drupal Console in the "The Drupal CLI" chapter of the Drupal 8 Development Cookbook. However, the tool itself could have a book all on its own for all of its features.

Get Drupal Console

Drupal Console needs to be added to each Drupal project using Composer. The command is below, and more details are in the project documentation: https://docs.drupalconsole.com/en/getting/composer.html

composer require drupal/console:~1.0 \
--prefer-dist \
--optimize-autoloader

Here's the sample output from running the command on the Drupal 8 instance used in my last article.

Review the current state of RESTful endpoints

To list the currently available REST endpoint plugins, we just need to run the following command:

$ ./vendor/bin/drupal debug:rest

This will list all of the REST module plugins, their URL endpoints, and status.

Using this command we can also inspect individual plugins. Here's the output from inspecting the Node (content) resource plugin.

./vendor/bin/drupal debug:rest entity:node

Full documentation can be found at https://docs.drupalconsole.com/en/commands/debug-rest.html

We can then use two other commands to enable or disable endpoints.

Enabling a RESTful endpoint using Drupal Console

Now, let's use the command line to enable the Node (content) endpoint.

./vendor/bin/drupal rest:enable entity:node

It will prompt you to select methods that you would like to enable. In this example we'll focus on just enabling the GET resource, for consuming content elsewhere.

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Then you decide which formats you would like to support. You can choose between JSON or XML out of the box with Drupal core. Other modules can provide more formats (YAML, CSV, etc.)

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Selected Method GET

 commands.rest.enable.arguments.formats:
  [0] json
  [1] xml
 > json

And, then we choose an authentication provider. We will choose the cookie authentication provider.

$ ./vendor/bin/drupal rest:enable entity:node

 commands.rest.enable.arguments.methods:
  [0] GET
  [1] POST
  [2] DELETE
  [3] PATCH
 > 0

Selected Method GET

 commands.rest.enable.arguments.formats:
  [0] json
  [1] xml
 > json

commands.rest.enable.messages.selected-format json

 Available Authentication Providers:
  [0] basic_auth
  [1] cookie
 > cookie

And then, success!

Selected Authentication Providers cookie
 Rest ID "entity:node" enabled

Here's a screenshot of the entire command process.

Modifying a RESTful endpoint using Drupal Console

Now, there is not a command to quickly edit the configuration for a RESTful endpoint plugin. To do so, we will have to use the provided configuration management commands to discover the configuration name and edit it. We will use the debug:config command to list our options. It's a large output, so I recommend using grep to limit the results.

$ ./vendor/bin/drupal debug:config | grep "rest.resource"
 rest.resource.entity.node 

Now that we have the configuration name, rest.resource.entity.node, we can use the configuration edit command to modify it. Running the command will open a terminal based text editor to modify the configuration YAML.

./vendor/bin/drupal config:edit rest.resource.entity.node

When you save changes, they will be imported. Be warned, however: you could ruin your configuration if you are not sure about what you're doing.

It seems you can rerun the rest:enable command to add new HTTP methods to an endpoint. However, you will need to edit the configuration to remove one. For example, let's say you enabled the DELETE method but want to remove it.
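
The YAML you are editing looks roughly like the following sketch (the exact shape can vary by Drupal core version); deleting DELETE from the methods list disables that method:

# Approximate shape of rest.resource.entity.node as managed by REST UI.
granularity: resource
configuration:
  methods:
    - GET
    - DELETE
  formats:
    - json
  authentication:
    - cookie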

Apr 27 2018

Recently I ran a good ole composer update on a client project. This updated Drupal and PHPUnit. I ran our PHPUnit tests via PhpStorm and ran into an odd error:

Fatal error: Class 'PHPUnit_TextUI_ResultPrinter' not found in /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php on line 253

Call Stack:
    0.0019     446752   1. {main}() /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php:0

PHP Fatal error:  Class 'PHPUnit_TextUI_ResultPrinter' not found in /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php on line 253
PHP Stack trace:
PHP   1. {main}() /private/var/folders/dk/1zgcm66d4vqchm8x5w4q57z40000gn/T/ide-phpunit.php:0

If you didn't know, PhpStorm creates an ide-phpunit.php script that it invokes. Depending on the PHPUnit version, it will generate the required code. In my case, it generated code for PHPUnit 4.8.36:

//load custom implementation of the PHPUnit_TextUI_ResultPrinter
class IDE_Base_PHPUnit_TextUI_ResultPrinter extends PHPUnit_TextUI_ResultPrinter
{
    /**
     * @param PHPUnit_Util_Printer $printer
     */
    function __construct($printer, $out)
    {
        parent::__construct($out);
        if (!is_null($printer) && $printer instanceof PHPUnit_TextUI_ResultPrinter) {
            $this->out = $printer->out;
            $this->outTarget = $printer->outTarget;
        }
    }

    protected function writeProgress($progress)
    {
        //ignore
    }
}

The quick fix: Let PhpStorm know that the PHPUnit version was updated. The error comes from the scripts PhpStorm generates for running PHPUnit. Once I clicked the refresh button, PHPUnit was recognized as 6.5.8, and the error was resolved.

Apr 26 2018

Drupal 8 ships with the RESTful Web Services module, which allows you to expose various API endpoints for interacting with your Drupal site. While the community is making a push for the JSON API module, I have found the core RESTful module to be pretty useful when I have custom endpoints or need to implement Remote Procedure Call (RPC) endpoints. However, using the module and enabling endpoints is a bit rough. So, let's cover that! Also note: this blog covers content from the introduction of the Web Services chapter of the Drupal 8 Development Cookbook.

For the curious: REST stands for Representational State Transfer. It's about building APIs limited to the constraints of the HTTP definition itself. It's stateless and does not require you to build a super custom client when working with the API. If you want to nerd out, read Roy Thomas Fielding's dissertation, which describes the REST architecture.

Let's get started!

So, there is one major problem with the core RESTful Web Services module. It has no user interface. Yep, that's right: a Drupal component without a user interface out of the box. I couldn't believe it when I wrote my book the first time, nor the second. Luckily, there is a module for that (the Drupal way.) You can either manually edit the config through the means of your choice, or grab the REST UI module, which provides a user interface to enable and modify the API endpoints provided by the RESTful Web Services module.

First things first, we need to add the REST UI module to our Drupal site.

cd /path/to/drupal8
composer require drupal/restui

If you are new to Composer, here's an example screenshot of the output from when I ran the command.

Now that the module has been added to Drupal, let's go install the modules. Go on and log into your Drupal site. Then head over to Extend via the administrative toolbar and install the following Web services modules: Serialization, RESTful Web Services, and REST UI. Although not needed, I'm also going to install HTTP Basic Authentication to show off what it looks like to configure authentication providers (covered properly in the book.)

Click on Install to get the modules all settled into Drupal and available. Once you see the green message "4 modules have been enabled: HTTP Basic Authentication, RESTful Web Services, Serialization, REST UI." we are good to move on to the next step!

Configuration: enabling and modifying endpoints

Yay! With modules installed, we can begin to get all decoupled, or progressively decoupled. Go to Configuration and click on REST under Web Services to configure the available endpoints. To expose content (nodes) over the API, click on Enable for the Content row.

With the endpoint enabled, it must be configured. Check the GET method checkbox to allow GET requests. Then, check the json checkbox so that data can be returned as JSON. All endpoints require a selected authentication provider. Check the cookie checkbox, and then save it. This gives us an endpoint to read content that returns a JSON payload.

Go ahead and click Save configuration to finish enabling the endpoint. Also note, any enabled RESTful resource endpoint will use the same create, update, delete, and view permissions that have already been configured for the entity type. In order to allow anonymous access over GET for content, the anonymous user role must have permission to view published content.

Testing it out!

Give it a test run! Using cURL on the command line, a piece of content can now be retrieved using the RESTful endpoint. You must add ?_format=json to the node's path to ensure that the proper format is returned.

Here's an example command

curl http://drupal8.ddev.local/node/1?_format=json

And its response (super truncated for space.)

{
    "nid": [
        {
            "value": 1
        }
    ],
    "uuid": [
        {
            "value": "1b92ceda-135e-4d7e-9861-bbe9a3ae1042"
        }
    ],
    "title": [
        {
            "value": "This is so stateless"
        }
    ],
    "uid": [
        {
            "target_id": 1,
            "target_type": "user",
            "target_uuid": "421f721e-d342-4505-b1b4-57849d480ace",
            "url": "\/user\/1"
        }
    ],
    "created": [
        {
            "value": "2018-04-26T02:44:54+00:00",
            "format": "Y-m-d\\TH:i:sP"
        }
    ],
    "changed": [
        {
            "value": "2018-04-26T02:45:23+00:00",
            "format": "Y-m-d\\TH:i:sP"
        }
    ],
    "body": [
        {
            "value": "\u003Cp\u003EIn my experience, there is no such thing as luck. What?! Hey, Luke! May the Force be with you. What good is a reward if you ain\u0027t around to use it? Besides, attacking that battle station ain\u0027t my idea of courage. It\u0027s more like\u2026suicide.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERemember, a Jedi can feel the Force flowing through him. The Force is strong with this one. I have you now.\u0026nbsp;\u003Cstrong\u003EHey, Luke!\u003C\/strong\u003E\u003Cem\u003EMay the Force be with you.\u003C\/em\u003E\u0026nbsp;He is here.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force. He is here. Ye-ha! Ye-ha! I have traced the Rebel spies to her. Now she is my only link to finding their secret base.\u003C\/p\u003E\r\n",
            "format": "full_html",
            "processed": "\u003Cp\u003EIn my experience, there is no such thing as luck. What?! Hey, Luke! May the Force be with you. What good is a reward if you ain\u0027t around to use it? Besides, attacking that battle station ain\u0027t my idea of courage. It\u0027s more like\u2026suicide.\u003C\/p\u003E\n\n\u003Cp\u003ERemember, a Jedi can feel the Force flowing through him. The Force is strong with this one. I have you now.\u00a0\u003Cstrong\u003EHey, Luke!\u003C\/strong\u003E\u003Cem\u003EMay the Force be with you.\u003C\/em\u003E\u00a0He is here.\u003C\/p\u003E\n\n\u003Ch2\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force.\u003C\/h2\u003E\n\n\u003Cp\u003EDon\u0027t be too proud of this technological terror you\u0027ve constructed. The ability to destroy a planet is insignificant next to the power of the Force. He is here. Ye-ha! Ye-ha! I have traced the Rebel spies to her. Now she is my only link to finding their secret base.\u003C\/p\u003E\n",
            "summary": ""
        }
    ]
}

Woo! We did it! You can now consume content from your Drupal site from anything your dreams desire.

Want to know more?

There is a whole chapter in the Drupal 8 Development Cookbook on Web Services which covers the core RESTful Web Services module: GET, POST, PATCH, using Views, and Authentication. It also gives a brief introduction to the JSON API module.

Apr 18 2018

DrupalCon is always something I look forward to, ever since attending my first one at DrupalCon Los Angeles 2015. As I wrote over a week ago, I drove down from Wisconsin with my wife and two boys to Nashville. We came down for the weekend before and stayed for the weekend after to do some touristing and vacationing. I tried to write one blog about DrupalCon but realized I couldn't really condense everything I had to say. So I plan on pushing out a few post-Nashville blogs.

My wife is a country music fan, so we for sure had to head on over to the Country Music Hall of Fame museum. Engraved on the corner of the building outside are quotes from famous country musicians. The Hank Williams quote caught my eye:

You ask what makes our kind of music successful. I'll tell you. It can be explained in just one word. Sincerity.

The quote stuck with me. It helped set the tone of DrupalCon for me. While Hank may have been talking about music, I think it's a recipe that transcends just about anything. When we talk about Drupal, we always talk about the community. Right? Come for the code, stay for the community. That is our mantra. But, what makes us so different?

Maybe it's our community's sincerity. Overall I find the Drupal community to be pretty genuine, trustworthy and have a high level of integrity. In fact, these traits of the community are what ended up bringing me into the community and going Full Drupal™.

While working the Commerce Guys booth, we had a wide array of people come up and talk. I would say we had a split audience. About half wanted to learn about Drupal Commerce (new users), get an update (previous users), or validate their architecture and other ideas (current users). The other half, however, was our community members coming up and saying "thanks." 

It was a morale boost to have people come up and say "thank you" for support in the Slack channel, Drupal.org issue queues, or just our general work on Drupal Commerce and the other contrib. Being in a distributed workplace and contributing to a distributed open source community can be wearing on the soul. Problems are harder to solve and communication can be misinterpreted.  Regional DrupalCamps are great, especially for local networking. But DrupalCon is the one thing which can bring people all over together for more than a few days. 

I have come back motivated. I love working in open source. I love contributing to Drupal and Drupal Commerce. I love the fact my job is helping businesses solve the hardest parts of selling online via our platform. But, after a while, it seems intangible. Then I get to hear from our end users that are thankful for our help. And that is awesome. It elevates the work I do because it's not just to earn a paycheck and deliver some software artifact. It is making an impact bigger than myself. And I love that.

That is one reason I brought my family down to Nashville. Often times I'm sitting at home geeked about some contribution or some other Act of Awesome by our community, and I wanted my wife to experience it, too. "Look! I'm not completely crazy for getting on the computer at 5 am or staying up until midnight to work on this thing called a patch!"

Our community is so badass I just want to show it off to those who are missing out on the experience and the great connections within it.

You can read about how the Drupal community made an impact in my life and career in these two blogs

Apr 06 2018

It is that time of year, again! It is DrupalCon time! Woooooo. Last year DrupalCon Baltimore saw 3,271 attendees, and I'm betting Nashville will bring in more (because, Nashville.) When this publishes and hits various feeds, I will be on the road and (hopefully) an hour into the eight-hour drive to Nashville with my family.

My wife and two kids joined me at DrupalCon New Orleans - and it was awesome. We enjoyed Frenchman Street, while avoiding Bourbon Street. Our oldest son, then four, loved having beignets and walking through the French Quarter. Cafe Beignet was a nightly affair. I doubt our youngest, only nine months at the time, remembers the trip much.

But! Now, that they are six and two, I bet they will have a blast in Nashville. My wife is a big country fan, too, so I know she will. There is a listening room near our lodging that is all ages friendly, and apparently has the best acoustics in town.

They won't be coming into the conference. But, I'm thankful that I get to bring them along and experience the food, music, and other things Nashville will bring. I spend a lot of my mornings and nights doing extra work, most often my open source "commitments" that fall outside of client time. So, getting to do this is pretty awesome.

A lot has changed in two years, since New Orleans. I hope that I'll have the chance to introduce my family to members of my Drupal family,  all of whom make such a big impact in my day-to-day work.

I cannot wait to see everyone in Nashville!

Apr 02 2018

In prep for DrupalCon Nashville, I was working on our Drupal Commerce demo sites that we'll be showing off. They have been running in silent mode for some time and recently received an overhaul so they use our demo and out-of-the-box theme for Drupal Commerce, Belgrade. For one of the sites I received the following error: To start over, you must empty your existing database and copy default.settings.php over settings.php. Wait, what? When I ran drush si it should have dropped all the tables. Apparently not.

Whenever I would use Drush to reinstall the website, I would get the following error message:


  • To start over, you must empty your existing database and copy default.settings.php over settings.php.
  • To upgrade an existing installation, proceed to the update script (http://glamanate.com/update.php).
  • View your existing site (http://example.com).

Now, I was getting ready to pull my hair out. I set up our demos on a Drupal multisite via DrupalVM. All the other sites were installing fine, except this one. So I started debugging.

When I ran SHOW TABLES; in the MySQL command line... there were no tables. When I ran DROP DATABASE db_name... instead of everything being happy, I got an error like the following:

Error Dropping Database (Can't rmdir '.test\', errno: 17)

I have never seen that. So I did what all proper software engineers do: go to Google, search the error, and see what the hell other people have done when they run into this issue. Luckily I came across a Stack Overflow question and answer https://stackoverflow.com/questions/4584458/error-dropping-database-can…. Basically, this error happens when there is crud data in the database directory. So I checked what was in the database directory (since there were no tables.)


$ sudo ls -la /var/lib/mysql/db_name/
total 40
drwxr-x---  2 mysql mysql 20480 Mar 30 18:35 .
drwxr-xr-x 10 mysql mysql  4096 Mar 30 18:20 ..
-rw-r-----  1 mysql mysql 65536 Nov 23 03:54 drupal_install_test.ibd

There was an InnoDB table file called drupal_install_test.ibd. From the MySQL reference:

InnoDB's file-per-table tablespace feature provides a more flexible alternative, where each InnoDB table and its indexes are stored in a separate .ibd data file. Each such .ibd data file represents an individual tablespace. This feature is controlled by the innodb_file_per_table configuration option, which is enabled by default in MySQL 5.6.6 and higher.

So running a manual removal fixed everything.

sudo rm /var/lib/mysql/db_name/drupal_install_test.ibd
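
With the stray .ibd file removed, the database can finally be dropped and recreated so the installer starts from a clean slate (db_name standing in for the actual database name):

-- This now succeeds instead of throwing errno 17.
DROP DATABASE db_name;
CREATE DATABASE db_name;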

Turns out this table is part of the Drupal installation process, used to check that Drupal has all the required access to the database. I found the following in lib/Drupal/Core/Database/Install/Tasks.php, and it's also in Drupal 7, it seems. The problem, though, is that a failed install followed by a re-attempted install breaks, badly.


  /**
   * Structure that describes each task to run.
   *
   * @var array
   *
   * Each value of the tasks array is an associative array defining the function
   * to call (optional) and any arguments to be passed to the function.
   */
  protected $tasks = [
    [
      'function'    => 'checkEngineVersion',
      'arguments'   => [],
    ],
    [
      'arguments'   => [
        'CREATE TABLE {drupal_install_test} (id int NULL)',
        'Drupal can use CREATE TABLE database commands.',
        'Failed to <strong>CREATE</strong> a test table on your database server with the command %query. The server reports the following message: %error.<p>Are you sure the configured username has the necessary permissions to create tables in the database?</p>',
        TRUE,
      ],
    ],
    [
      'arguments'   => [
        'INSERT INTO {drupal_install_test} (id) VALUES (1)',
        'Drupal can use INSERT database commands.',
        'Failed to <strong>INSERT</strong> a value into a test table on your database server. We tried inserting a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'UPDATE {drupal_install_test} SET id = 2',
        'Drupal can use UPDATE database commands.',
        'Failed to <strong>UPDATE</strong> a value in a test table on your database server. We tried updating a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'DELETE FROM {drupal_install_test}',
        'Drupal can use DELETE database commands.',
        'Failed to <strong>DELETE</strong> a value from a test table on your database server. We tried deleting a value with the command %query and the server reported the following error: %error.',
      ],
    ],
    [
      'arguments'   => [
        'DROP TABLE {drupal_install_test}',
        'Drupal can use DROP TABLE database commands.',
        'Failed to <strong>DROP</strong> a test table from your database server. We tried dropping a table with the command %query and the server reported the following error %error.',
      ],
    ],
  ];

I have no idea how the site got into such a corrupt state - and installed. The only relevant issue I could find was https://www.drupal.org/project/drupal/issues/1017050.

Mar 30 2018

Back in February, I automated some of my content workflows. I use the Scheduler module to publish posts and have them automatically pushed into Buffer to be shared across my social networks. I'm attempting a new experiment once this node publishes. This should show up at my Medium account, https://medium.com/@mglaman.

I don't agree with using Medium as the canonical blogging platform for companies. But I know there are plenty of users on Medium, and it provides its own virality. It also pushes content to platform readers who may find it relevant. So, this is my attempt to bridge to new readers.

The Medium API docs allow you to specify a canonical URL that is "The original home of this content, if it was originally published elsewhere." So, here goes nothing on this experiment!

Mar 17 2018

At the end of October 2017, I wrote about the new and upcoming changes to ContribKanban.com. I decided to migrate off of a MEAN stack and go to a more familiar (aka manageable) stack. I decided upon Drupal 8 to manage my backend. Drupal is what I do, and Drupal is amazing at modeling data. People can moan and whine - it handles data models like a boss. I decided to treat it as a "progressively" decoupled web application.

WTF is "Progressively Decoupled"

I caught myself at Florida DrupalCamp, MidCamp and other conversations calling the new ContribKanban.com a progressively decoupled application. Which made me wonder, what in the hell does that really even mean? Well, according to Dries back in 2015, this description seems to best fit how it all works:

where the skeleton of the page loads first, then expensive components such as "songs I listened to most in the last week" or "currently playing" are sent to the browser later and fill out placeholders.

Drupal 8 handles routing, data, and initial rendering. Then the client picks it up in the browser. And that is how ContribKanban.com works. Drupal is data storage for Drupal.org API query configurations. When you visit a board, Drupal gets that information and bubbles it up to the client via the JavaScript API's drupalSettings. Then the client-side scripts take over and render fancy kanban boards off API results from Drupal.org.
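
Conceptually, handing the board configuration to the client looks something like this render array sketch. The library name, settings keys, and variables here are illustrative, not the actual ContribKanban code:

// Attach the client-side app and expose the board's API query
// configuration through drupalSettings.
$build['#attached']['library'][] = 'contribkanban_boards/board-app';
$build['#attached']['drupalSettings']['contribKanban'] = [
  'apiBaseUrl' => 'https://www.drupal.org/api-d7',
  'lists' => $board_config,
];
return $build;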

So, I guess it's a decoupled progressively decoupled application. Data comes from Drupal.org over RESTful interfaces and board configuration comes from the Drupal instance via its render and template system.

Makes sense, right?

Why I chose ReactJS

October 17th through about the 26th, I had a sprint of activity rebuilding ContribKanban on Drupal 8. I had built the entities for board and list storage, developed design patterns for representing the board configuration in drupalSettings, and had my JavaScript built out - the whole nine yards. In fact, I had even managed to kill the requirement of jQuery and have it nearly fully functional. While that made me feel like a badass for not needing to depend on jQuery and using VanillaJS, I hit a brick wall. I needed to support a few very specific requirements:

  • I needed to be able to allow users to filter by issue priority and issue type when viewing a board, causing the lists to re-render and filter by this constraint
  • I needed to be able to take a project node ID and translate it to the project title within a card
  • I needed to be able to take a user ID and translate it to their username when assigned to a card.

In AngularJS I knew exactly what to do. Use a for loop and then two directives. Boom. Done. In plain old JavaScript, I was not sure of the approach to take. I had read a lot of hubbub about Polymer and web components, especially thanks to content by Bryan Ollendyke. I had played around with Angular but didn't really love the next generation of its development (maybe blame TypeScript?) I knew the popular kids on the scene were React and Vue.js. One is developed by Facebook and has a huge community following; the other seems to be React for those who want to flip the bird to corporate-sponsored open source projects*. Plus, it appeared (and is true) that the Drupal core JavaScript Modernization Initiative would pick React and get started at improving Drupal's ability to decouple.

So I picked it back up and decided to move forward with React so I could solve my user interface woes. I blogged about some of my first experiences using ReactJS, which was pretty enjoyable. I'm happy with my choice.

* I know I probably offended a handful of readers, there. But when I've asked people the difference between Vue.js and React there have been no defining differences. You have JSX and Nuxt. I understand where Vue.js is different than Angular. But the micro differences between React and Vue.js seem very minuscule.

What's under the new hood?!

So! What's under the new hood? Now that I don't need to manage running a MEAN stack and can just do what I do best, I can focus on features and general improvements.

Manageable data model for boards and lists.

In the original version of ContribKanban, boards and lists were one giant JSON blob in MongoDB with some overrides in a JSON file for how the board should appear. There was no administrative user interface. When Parse.com went away, it made my life a living hell. The project became stagnant and undeveloped. So this brings new life and manageability. Yay!

Super duper custom boards and configurations

One of the best things about this switch is the fact it is a user interface for Drupal.org's API. An example of such a board can be found here: https://contribkanban.com/board/1. That board is for Lullabot's AMP projects on Drupal.org. It combines the module and theme issue queue into a single board.

Another example is the Drupal Core Novice board, https://contribkanban.com/board/DrupalCoreNovice. It contains issues for the currently active Drupal 8 branches that are tagged as Novice, but only of "normal" and "minor" priority.

User accounts for more awesome things

I also wanted to allow people to register user accounts so that there can be user-based data. One such thing is to display issues of projects that a user maintains, or at least the recently updated issues that they follow. Creating accounts also allows me to figure out privileges for creating custom boards and owning project boards. As a maintainer of Commerce Reports, I may want all issues to filter on a parent issue which I use to track goals for the next release.

This is an example of the "Your Issues" tab on your Drupal.org profile parsed into a kanban board when you have an account and provide your Drupal.org username.

A board of custom issues

One of my first AngularJS projects was a Chrome app to track all the issues we needed resolved for our SaaS product built on top of Drupal. Shawn McCabe also pinged me about a similar requirement: a way to track their "Looking for Work" board to give guided down-time contribution work at AcroMedia, Inc.

You add issue node IDs and create custom boards. Pretty easy way to build a sprint board without needing issue tags, especially if it's a private project.

Next steps

Next... I need to fix some lost functionality, such as browsing by branch. Browsing boards is lacking finesse, but analytics show most people enter the application via direct links rather than by discovering boards.

I would also like to add OAuth2 authentication. The idea would be to provide a browser extension that lets you authenticate to ContribKanban and add issues from Drupal.org to a node board inline. Similar to "following" an issue, it would let you add an issue to an existing board or create new boards.

I would also like to experiment with GitHub integration, as some projects use GitHub as well.

Mar 16 2018

Every software release needs to have release notes. End users need to be able to understand what it is that they are upgrading to and any associated risks, or risks mitigated by upgrading and receiving bug fixes. Across the board, proprietary and open source software projects are hit or miss on delivering decent release notes. During MidCamp I decided to help fix that problem for Drupal.org projects.

What makes good release notes?

Probably the first question is: what makes good release notes? It depends on the audience and scale of the product. As an Android user, I'm used to my perpetual weekly "Bug fixes and improvements" message. As a developer, that kills me because I want to know. My wife, though, probably doesn't care to read why there is an update; she just lets the auto-updater run.

Take a look at Spotify versus Buffer, for instance.

Spotify, being a music app, has a more general audience and a larger user base. Buffer is for social-media-savvy and marketing-oriented users; its base is also more niche. Release notes should educate the end user about what they are getting into and what problems will be fixed. Maybe your end audience does not care that much (Shopify), so you just maintain your internal release logs and keep an eternal generic release notes message up. Or you make a blend, like Buffer. Or you go all out with technical details and documentation like Drupal, WordPress, and others.

Making better Drupal module release notes

When a module rolls out a new release and the contents are just "some bugs fixed" or no content, I get sad inside. I know most of us are maintaining Drupal modules on the side without true sponsorship or business-oriented motivation. But, as maintainers, our job is to ship dependable software. Dependable software doesn't just mean committing bug fixes and cutting releases. It means communicating those changes to our end users.

If you look at the Drupal 8.5.0 release notes, you see a lot of detail. There has to be! It's Drupal core! Drupal core has release managers and product managers that can work to generate these notes that communicate important updates, major bugs fixed or any features added, and any known issues. This allows for proper evaluation of an incoming release and the impact it will make on operations. It's a large product with a technical and management level audience.

Now, let's look at another module that has nearly 150,000 sites using it - Superfish. The 8.x-1.2 and 8.x-1.1 release notes look like a quick manual copy-and-paste grab from the commit log. Since most module maintainers follow the standard commit message practices, I could have just reviewed the commit log for myself. I have no context on what those issues were for, or whether they are bugs or features. Please note: this is no knock on the maintainer, mehrpadin. I am just making an observation and stating my motivation. Release notes take time to curate, and most open source maintainers use their spare time to manage issue queues. (It's why we have a documentation problem as well, but that's a whole other topic.)

Next let's check out the Drupal Commerce release notes, which has had two format changes since 2.2. If you look at the 8.x-2.2 release notes, we did the same as Superfish - a blob of text which was our commit log and requires manual work to learn more about the included changes. In 8.x-2.4 there was a change. Changes link to their proper issue queue and we link to the users who helped. There's a tool for this, and I'll cover that next. In 8.x-2.5 we have an even better format, in my opinion. Not only are issues linked, but they are grouped by the type of fix: bug, feature, task. An end user of Drupal Commerce can quickly evaluate if the release is mostly features or bug fixes and determine possible impact. There's a tool for this, too. The one I worked on at MidCamp.

There are tools for that

Through means now lost to memory, I came across the Git Release Notes for Drush project. It takes the commit log of a Drupal.org module and turns it into release notes which automatically link to the issues and users. The drush rn command was used to generate the Drupal Commerce 8.x-2.4 release notes. I also use it heavily on all of my other modules. However, after a few years of use, I wanted something a little bit more. I didn't want to keep using a Drush command (especially since it is a pre-Drush 9 command, dating back to the Drupal 6 days.)
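
If you have the plugin installed, generating notes is just a matter of pointing it at two tags from a module checkout. A sketch of the invocation (check drush help release-notes for the exact arguments in your version):

# Build release notes from the commits between two tags.
drush rn 8.x-2.2 8.x-2.4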

I have a Drupal.org CLI tool that I use for my day to day maintainership and contribution items. While at MidCamp I hacked on a drupalorg maintainer:release-notes command. It's heavily based off of the GRN Drush command. It compares tags, takes the commits, does some processing, makes pretty release notes.

The command provides some additional features. I want to thank Mike Lutz who gave some ideas! Having issues broken apart by their issue type makes it more readable and understandable. I also wanted to call out our contributors, because they are essential to our project.

The end result provides a nice release note summary. I would like to find even more improvements that give Drupal.org project maintainers the ability to create insightful release notes without the management overhead.

You can find the project at https://github.com/mglaman/drupalorg-cli. Currently, it's only available via a global Composer install, which can cause conflicts with Drush. I'm working on providing a Phar for download.

Mar 11 2018

At DrupalCon Dublin I caught Fabianx's presentation on streaming and other awesome performance techniques. His presentation finally explained to me how BigPipe works. It also made me aware of the fact that, in Drupal, we have mechanisms to do expensive procedures after output has been flushed to the browser. That means the end user sees all their markup, but PHP can chug along doing some work without slowing the page down.

What is the Kernel::TERMINATE event

The Kernel::TERMINATE event is provided by the Symfony HttpKernel. As documented in Symfony\Component\HttpKernel\KernelEvents:

The TERMINATE event occurs once a response was sent. This event allows you to run expensive post-response jobs.

Drupal core uses this event, usually to write caches.

  • path_subscriber writes the alias cache.
  • request_close_subscriber writes the module handler's cache.
  • automated_cron.subscriber runs cron.
  • user_last_access_subscriber updates the user's last access timestamp.

If you worked with Drupal 7, consider this to be like drupal_page_footer().

A practical real-life (client) use case

I use this in a client project to work with Salesforce. The Salesforce API is pretty slow (no, it’s extremely slow.) Especially when you need to do a lot of operations:

  • create an account
  • create contact references
  • generate an opportunity
  • provide the line items for that opportunity

You don't want the customer to sit on a forever-loading page while they wait for the checkout complete page to load. People expect their eCommerce experience to be fast. The Kernel::TERMINATE event is a very special trick. It can allow you to do remote API calls or other lengthy procedures without slowing down the user.

The checkout process we have ends up working something like this, thanks to the Kernel::TERMINATE event.

  1. The customer enters payment information.
  2. Upon validation, the user is brought to a provisioning step.
  3. API requests are made to provision their product account and subscription. The order is queued for Salesforce.
  4. Upon success, the user is brought to the checkout complete page.
  5. The customer sees their confirmation page, account information, and a link to log into the product.
  6. The server begins running Salesforce API integration calls on the same PHP-FPM worker that flushed the content for the checkout complete page.

Watching the entire flow stream through the logs is pretty amazing. The end user experience is only blocked on business-critical validations and not internal business operations (CRM and reporting.)

Here’s how we make the magic happen

We have a service called provision_handler. This is a class which helps us run various provisioning calls for each scenario. The service allows us to set the order which should be queued and provisioned after the checkout complete page has been flushed to the end user.

  /**
   * Sets the queued order to run on Kernel::TERMINATE.
   */
  public function setQueuedOrder(OrderInterface $order) {
    $this->queuedOrder = $order;
  }

  /**
   * Get the queued order to finish provisioning.
   */
  public function getQueuedOrder() {
    return $this->queuedOrder;
  }

The queued order is just a property on our provision handler. At no time during any single request will we be handling multiple orders, a known operational constraint. And thanks to the way that PHP works, we know this data remains stable during our request. After we have executed our business-critical validations and provisioning, we set the queued order from our checkout flow code.

$provision_handler->setQueuedOrder($this->order);

Then our event subscriber picks up the order and ensures all that order information is synchronized into Salesforce.

  /**
   * Finalizes provisioning for the queued order once the response is sent.
   */
  public function onKernelTerminate(PostResponseEvent $event) {
    $order = $this->handler->getQueuedOrder();
    if ($order) {
      $this->handler->finalizeProvisioning($order);
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return [
      KernelEvents::TERMINATE => ['onKernelTerminate'],
    ];
  }
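
For the subscriber to fire, it has to be registered as a tagged service in the module's services.yml file. A sketch of the wiring, with made-up module, service, and class names:

# mymodule.services.yml (names are illustrative).
services:
  mymodule.provision_terminate_subscriber:
    class: Drupal\mymodule\EventSubscriber\ProvisionTerminateSubscriber
    arguments: ['@mymodule.provision_handler']
    tags:
      - { name: event_subscriber }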

Other use cases

Drupal Commerce has many uses for this event. Many times, when an order has completed checkout, many different things need to happen: send emails, send the order to an ERP, update stock, change promotion usage stats, etc. Or, what if you are generating order reports?

In Commerce Reports 8.x–1.x I decided to use the Kernel::TERMINATE event to generate order reports. This offloads an expensive procedure to, essentially, process itself in the background after the user has seen that their order was paid and completed.

The module listens for two events: the order going through the place transition, and then Kernel::TERMINATE.

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events = [
      'commerce_order.place.pre_transition' => 'flagOrder',
      KernelEvents::TERMINATE => 'generateReports',
    ];
    return $events;
  }

When the place transition runs, the flagOrder method is called to identify the order for later processing. Note: the current method isn’t great, hence the @todo.

  /**
   * Flags the order to have a report generated.
   *
   * @todo come up with better flagging.
   *
   * @param \Drupal\state_machine\Event\WorkflowTransitionEvent $event
   *   The workflow transition event.
   */
  public function flagOrder(WorkflowTransitionEvent $event) {
    $order = $event->getEntity();
    $existing = $this->state->get('commerce_order_reports', []);
    $existing[] = $order->id();
    $this->state->set('commerce_order_reports', $existing);
  }

This adds the order to be processed in Drupal’s state along with any other queued orders. When the time comes, we load all the queued orders and run the reporting plugins against them.

  /**
   * Generates order reports once output flushed.
   *
   * This creates the base order report populated with the bundle plugin ID,
   * order ID, and created timestamp from when the order was placed. Each
   * plugin then sets its values.
   *
   * @param \Symfony\Component\HttpKernel\Event\PostResponseEvent $event
   *   The post response event.
   *
   * @throws \Drupal\Core\Entity\EntityStorageException
   */
  public function generateReports(PostResponseEvent $event) {
    $order_ids = $this->state->get('commerce_order_reports', []);
    $orders = $this->orderStorage->loadMultiple($order_ids);
    $plugin_types = $this->reportTypeManager->getDefinitions();
    /** @var \Drupal\commerce_order\Entity\OrderInterface $order */
    foreach ($orders as $order) {
      foreach ($plugin_types as $plugin_type) {
        /** @var \Drupal\commerce_reports\Plugin\Commerce\ReportType\ReportTypeInterface $instance */
        $instance = $this->reportTypeManager->createInstance($plugin_type['id'], []);
        $order_report = $this->orderReportStorage->create([
          'type' => $plugin_type['id'],
          'order_id' => $order->id(),
          'created' => $order->getPlacedTime(),
        ]);
        $instance->generateReport($order_report, $order);
        // @todo Fire an event allowing modification of report entity.
        // @todo Above may not be needed with storage events.
        $order_report->save();
      }
    }

    // @todo this could lose data, possibly as its global state.
    $this->state->set('commerce_order_reports', []);
  }

Making this easier

I have been wanting to expose this pattern within Drupal Commerce. My vision is that Drupal Commerce will flag orders on the place transition and provide a subscriber on Kernel::TERMINATE. Then all custom and contributed code can subscribe to our own event, which fires on Kernel::TERMINATE and provides the order to be processed.

This would make Drupal Commerce more performant out of the box and remove operations performed before the output is flushed to the browser. This would mitigate performance bottlenecks for operations non-essential to the customer’s client-side experience. For instance:

  • Generating a log entry for order completion
  • Sending the order receipt email
  • Registering promotion and coupon usage

When it comes down to code execution, the process is merely shifted to run milliseconds later. But the end user is guaranteed to see their page's content and not be blocked by these operations.

It is an easy win to boost perceived performance within Drupal.

Feb 20 2018

DrupalCamp London is coming around the corner! If you have the chance to go, I highly recommend it. The organizers put on a top-notch event. Last year I had the privilege of giving my first keynote at the conference. I firmly believe that open source is a creator of opportunity. There is no such thing as free software. In open source, we donate our time to provide software that has no monetary cost. This lowers the barrier to entry by removing a layer of economic limitations. This is why I love working in open source, and Drupal.

I realized I never wrote up my keynote or shared it beyond the recording. A year later it seems like a great reflection point.

I had been playing with code since middle school. I learned PHP because our Battle.net and Counter-Strike teams wanted websites, somewhere we could post news and stuff. Thanks to XTemplate and (I think?) PHPNuke. I then met some friends who played Ultima Online and began writing C# so we could hack on RunUO and have our own shard. For the record, C# is probably my favorite to work with and is one reason I'm glad to see OOP in Drupal 8.

What is open source?

The term "open source" refers to something people can modify and share because its design is publicly accessible.

This definition is from opensource.com. I like it because it is not limited to code. Having ideas open, shareable, and hackable leads to opportunities that would not be found in a locked down environment.

This is what allowed me to grow my hobby. It wasn't a career possibility back then; it was just something fun I could do. I got to learn how to code by reading open source software and sandboxing around. I wrote a C# desktop client that communicated with our PHP website - security holes and glaringly bad code included.

Fast forward some years and I still wrote code as a hobby. I built a CMS for a World of Warcraft guild so we could manage raid times and membership. But I had met my wife and needed to get a real job that didn't involve making sandwiches and that provided insurance. So I got a job at the local beer distributor and eventually became a driver, throwing kegs of beer around for thirsty college students.

Sometimes I think things happen for a reason. It was November of 2012 and I slipped going down some stairs with my dolly and two barrels. I hurt my back and had a doctor's note that said I could not work for a few days. At the same time, a local marketing agency had opened a job posting for a web developer. So I took the chance. I knew I had some skill, but I didn't think enough. But I went for it.

The job interview was to take a Photoshop mockup and convert it into a WordPress theme. I had worked with WordPress once or twice before, but not much. I had always opted to hack things on my own so I could fiddle around.

Thanks to open source...

I could review and learn.

I was able to review other existing themes and WordPress documentation to accomplish my task. I firmly believe you gauge someone's skill by the ability to solve problems. Not by how well they have memorized a process.

No training or formal education

I have a bachelor's in Software Engineering now. At that time, I had an associate degree in IT Networking. All of my code writing had been self-taught, learning by example.

I was as knowledgeable as I chose to be.

Working with open source, my only limitation was my willingness to debug and reach the Google nethers for an answer.

Photo by Heymo_V: https://flic.kr/p/G5RzEC

Roughly five years ago now, I started my first Drupal site, with Commerce Kickstart 2. I installed the demo store. I had to rebuild the site because you cannot uninstall the demo. It was a fun first ride. While we built all of our sites on WordPress, we had a large eCommerce project. WooCommerce was in its infancy and there were many half-baked WordPress solutions. Through research, I found Drupal via Commerce Kickstart.

Why did I get hooked?

I got hooked, initially, out of the fact it was free and was stable compared to my other options. Magento felt too closed and heavy, requiring too much of an ask on hosting requirements. 

Drupal had centralized issue queues, an active IRC channel and a free spirit of help. With basic research, I felt I could take on this task by just experiencing the static parts of the Drupal community.

The Community.

There is something special about the people we work and connect with through Drupal. It feels as special today as it did those first few stressful weeks of figuring out what the hell I was doing while using Drupal. The code did not give me all the answers. Reading hook documentation only takes you so far when you're new.

  • StackExchange questions and answers came from people.
  • Documentation on Drupal.org and blogs came from people.
  • IRC (now Slack, too) is full of people willing to donate some time and answer questions.

This community had sold me on Drupal, and it was from just the tip of the iceberg experience I had. 

Thumbnail

What makes the community?

At its base, I think a community is made of people who share ideas. Is that not how open source projects start? Someone shares an idea or code. Someone else thinks it is a great idea or has a similar idea. These individuals work together to create more ideas and code. Now there is a project attracting more individuals, and it becomes a community, regardless of whether anyone has ever met.

However, a community does not get far that way. What makes Drupal special is the friendship that seems to span all these individuals in different timezones, nations, and cultures. We make these friendships and bonds at user group meetings and conferences (Drupal Camps for the win!).

Drupal & the community brought new opportunities

So, I got hooked on Drupal because of Drupal Commerce. We shifted away from WordPress to Drupal. Drupal's features allowed us to handle the custom content models customers wanted, with less code to maintain. Community support made it easier to solve hard problems. Working with Drupal, you feel like you have a support team behind you. It may not be immediate, but there are community members who will go above and beyond to help you solve problems (if you're polite!).

Drupal helped me land my first freelance gig!

In the past, I had done some freelance work here and there. Nothing big, unless you count the two lost Bitcoin from when they were worth $5. Oops.

I had started contributing patches to some of the modules we were using. Specifically the USPS and UPS modules, just to help improve them. We were using them at work and I wanted to give something back in exchange. My activity in the queue turned into freelance work from the module maintainer, helping me pay down some bills.

DrupalCamp Atlanta 2013!

Thumbnail

Andy Giles, said maintainer of the USPS and UPS modules, offered to fly me down to attend DrupalCamp Atlanta. I accepted the offer, my mind yet again blown that this opportunity was happening. It was the career-defining moment of my life. I was shell-shocked by the openness of the community and the discussions being had. This was before Weather.com had officially launched on Drupal, and people were discussing those challenges. I couldn't believe it.

Thanks to Mike Anello, Ryan Szrama, and many others I spoke to at the conference:

  • I offered to co-maintain Commerce Reports and opened the 7.x-4.x branch to overhaul it
  • I created a local Drupal user group
  • I chipped away at my imposter syndrome
  • I realized I could make a career out of Drupal
Thumbnail

Networking and meeting similar individuals

Drupal Camps and local user group meetings introduced me to people from all over the world. This networking allows us to continue to share ideas, work on our projects, and introduce and nurture new members. I love the fact that the handful of times I needed to go to New York for a client, I could attend the NYC Drupal meetup.

Provides a way to show skillset and interact with others

I had horrible imposter syndrome. It is why I never pursued a career until I realized I was going to break my body down if I continued to throw barrels of beer around. The community allowed me to beat back that imposter syndrome and realize we're all normal folk.

Creates demand for skills and people with those skills

If Dries had never released Drupal, hundreds of thousands of us would not have jobs. If those other individuals had not gathered around Drupal, those same hundreds of thousands of us would not have jobs. Even consider Drupal Commerce. If Ryan had not made Ubercart, ventured to Commerce Guys, and built Drupal Commerce... I would not be where I am today. Everything I have today rests on a foundation laid by the Drupal community and the open source software we build.

Thumbnail

As a community, we contribute Support, Inspiration, and Jobs. These are my motivators. For example:

  • If we build a better Drupal Commerce, we build a product people can sell
  • If we build a better Drupal Commerce, we contribute more code samples and bug fixes to Drupal
Thumbnail

The Driesnote at DrupalCon Dublin really resonated with me. I highly recommend reading his Drupal's collective purpose blog post. As open source users and contributors, we are making an impact on people's lives. We are creating opportunities for people and lowering barriers. We may not be solving world problems, but we are doing something a little bit bigger than ourselves.

Some days I miss my CDL and driving a truck. Shooting the breeze with bar owners. But I also don't miss getting my truck ready at 5:30 AM and stacking barrels in a basement. So, thanks, Drupal and everyone who makes it!

Thumbnail
Feb 17 2018
Feb 17

Five years ago I went to my second Drupal Camp and the first Drupal Camp that I presented at. It has been four years, but I am finally going back to Florida Drupal Camp. I am pretty excited, as a lot has changed since then.

My first Drupal event was DrupalCamp Atlanta in 2013. Thanks for making it a possibility, Andy Giles! I met some amazing people there and fell in love with the Drupal community. In fact, it's where, after talking to Ryan Szrama for a brief moment, I decided to opt in and offer to co-maintain Commerce Reports. I met some amazing people from the Southeast community. One of them was Mike Herchel.

I cannot remember exactly how it came about, but he said I should propose a session for Florida DrupalCamp. I had no idea what I could talk about, or why it would even make sense for someone from Wisconsin to propose a session for a Drupal conference in Florida. But I did, and I am so happy I took that chance.

I decided to propose a session on how I was using Panels at my workplace. I worked at a marketing agency that focused on great content that would be well received on any device. I found Panels to be our solution - Rockin` Responsive Content with Panels. In fact, the sample code I wrote is still available on my GitHub.

My session got accepted. Thanks, Mike. I'm sure you played a big part in making that happen. But, I had to figure out how to get there. The idea still seemed ludicrous to me. I was a novice at my trade, and going to conferences was a new concept. The fact I had just been to a conference in Atlanta blew my mind. But, then things lined up.

Short interjection: I want to give a huge shout out to my wife who supported me on these adventures and continues to.

I believe you need to think big and strive to achieve above and beyond what is currently available to you. Otherwise, you miss chances. And that is exactly what allowed me to make my first visit to Florida Drupal Camp.

My employer did a lot of work for several local chapters of a national non-profit. We just so happened to be invited to a conference the group was hosting -- in Florida. The conference was before Florida Drupal Camp, but they saw value in attending, especially since we would already be down there. If I had not taken the risk and decided "why not, what can I lose?", I doubt the dots would have connected for us to attend Florida Drupal Camp, leaving me at home in the Wisconsin post-winter slush.

Just as Drupal Camp Atlanta was an eye-opening experience, so was Florida Drupal Camp. After my session, I got to learn that Derek "Hawkeye Tenderwolf" DeRaps and Kendall Totten were working on Classy Panel Styles to give more power to the content editor who was working in panels. I got to experience my first true community collaboration.

Back in Atlanta, my mind was merely blown by the community's willingness to share and help. Now I got to witness first-hand collaboration and idea sharing. I was being shown that I could make an impact beyond my own efforts.

People also liked my session! This blew away the layers of imposter syndrome I had sitting on my shoulders. By this time I had been improving Commerce Reports - but that was still in my bubble. I had gained the ability to open a new Git branch and make fixes which made our client happy. I still had a lingering feeling that I wasn't successful.

I also wanted to get to Florida so I could tell Mike Anello I started a Drupal user group. Back in Atlanta, at dinner after the conference, he told me I should "plant a flag" for Drupal and just start a Drupal user group. Which I did, and ran for almost a year.

So I'm pretty happy and nostalgic as I am on this flight to Orlando (well, now in a hotel room editing and posting). Florida Drupal Camp 2014 made a major impact on my career. It opened my eyes to new opportunities and career choices available to me. It broke down my imposter syndrome to show we are all awesome at some things and equally horrible at others. But that's okay; this is how people work.

I want to give credit to those who made an impact that jumpstarted my career and my deep dive into the Drupal community:

  • Andy Giles
  • Mike Herchel
  • Mike Anello
  • Every single attendee that I have talked to in Atlanta and Florida.

Making an impact in someone's journey is not always done in big statements. It is the little things.

Feb 07 2018
Feb 07
Thumbnail

Drupal 8 has a robust routing system built on top of Symfony components. Robust can be taken in many ways. It's powerful and yet magical, making it confusing at times. There is a lot of great documentation on Drupal.org which covers the routing system, route parameters, and highlights of the underlying functionality. But the magic isn't fully covered.

If you are not familiar with the Drupal 8 routing system, head over to the docs and dive in. I'm also assuming you have written routes using parameters and have experienced the magic of parameter conversions. If you haven't, again head over to the docs and read up on using parameters in routes.

I don't want to dive completely into how routes work with ParamConverter services to do parameter upcasting. I want to cover more of how Drupal knows to pass the proper arguments to your controller method. The parameter upcasting topic is covered fairly well in the documentation on parameter upcasting in routes.

I want to discuss how the controller's callback arguments are resolved and put into proper order in our method. For example, how I can take this custom 404-page route I created for ContribKanban.com:

contribkanban_pages.not_found:
  path: '/not-found'
  defaults:
    _controller: '\Drupal\contribkanban_pages\Controller\PagesController::on404'
    _title: 'Resource not found'
  requirements:
    _access: 'TRUE'

And have a method which knows the exception that was thrown, the request, and the route match:

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Routing\RouteMatchInterface;
use Symfony\Component\HttpFoundation\Request;

class PagesController extends ControllerBase {
  public function on404(\Exception $exception, Request $request, RouteMatchInterface $match) {
     // How did I get the exception injected?
     // What magic knows Request and RouteMatchInterface are in that order?!
  }
}

Understanding ControllerResolver::doGetArguments

In Drupal 8 there is the controller_resolver service, which maps to the \Drupal\Core\Controller\ControllerResolver class. This extends the controller resolver class provided by Symfony. From what I understand, this is done to support the Drupalisms added to the routing system, along with some other goodies. Our main concern, however, is the Drupal override of the doGetArguments method. This method is where the magic happens.

Determining the parameters for a controller callback

The doGetArguments method is a protected helper method that is invoked by ControllerResolver::getArguments. Before calling doGetArguments the controller definition is passed through PHP's Reflection API to detect its required arguments. In our example above, \ReflectionMethod will be used to find out the parameters for our on404 method. The code would look something like this.

$r = new \ReflectionMethod(
  '\Drupal\contribkanban_pages\Controller\PagesController',
  'on404'
);

This allows us to access the parameters of the method and pass them to doGetArguments.

return $this->doGetArguments($request, $controller, $r->getParameters());

This will leave us with an array loosely representing the following as ReflectionParameter objects.

  • A parameter called exception that expects \Exception
  • A parameter called request that expects Request
  • A parameter called route_match that expects RouteMatchInterface

Now, let's dig into how they get resolved into proper values and in their proper parameter order.

Putting the right things in the right place

Now that the system knows the expected parameters, it can loop through them and attempt to provide values. This is the purpose of the overridden doGetArguments method in Drupal. I am going to walk through the lines of the function, but the method in its entirety can be reviewed on the Drupal API documentation page.

Before looping through the array of ReflectionParameter objects, attributes are extracted from the request object.

$attributes = $request->attributes->all();
$raw_parameters = $request->attributes->has('_raw_variables') ? $request->attributes->get('_raw_variables') : [];

Here's an example output, taken from a 404 page. It is worth noting that the content of the request attributes is not consistent. In this case, the exception value is only available on 4xx and 5xx requests.

Thumbnail

With these values primed, each parameter is run against a series of checks.

First, the method checks if the parameter name matches an array key in the request attributes or raw variables. If there is a match, the value of the request is taken as the parameter variable.

if (array_key_exists($param->name, $attributes)) {
  $arguments[] = $attributes[$param->name];
}
elseif (array_key_exists($param->name, $raw_parameters)) {
  $arguments[] = $attributes[$param->name];
}

It is worth noting here that route parameters have been upcast by ParamConverter services at this point. That is why the value is first pulled from the $attributes array; in the $raw_parameters array, it has not been upcast. If we were on /node/{node}, the node parameter would be a loaded node in $attributes and still the node ID (1234) in $raw_parameters.
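
To make that concrete, here is a minimal sketch, with a hypothetical controller and method, for a route at /node/{node}:

use Drupal\node\NodeInterface;
use Symfony\Component\HttpFoundation\Request;

class ExampleController {

  // The parameter name matches the upcast 'node' attribute, so the loaded
  // entity is injected rather than the raw ID from the URL.
  public function view(NodeInterface $node, Request $request) {
    // The raw, non-upcast value (e.g. 1234) is still available.
    $raw_nid = $request->attributes->get('_raw_variables')->get('node');
  }

}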

The next check is to see if the parameter is the request object itself.

elseif ($param->getClass() && $param->getClass()->isInstance($request)) {
  $arguments[] = $request;
}

The next parameter check allows you to receive the server-side representation of the HTTP request as a PSR-7 ServerRequestInterface object. The object returned is a Zend\Diactoros\ServerRequest instance. This gives you an immutable representation of the request for inspection, unmodified by any other code.

elseif ($param->getClass() && $param->getClass()->name === ServerRequestInterface::class) {
  $arguments[] = $this->httpMessageFactory->createRequest($request);
}
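
For example, a controller method (a hypothetical one is sketched below) can simply type-hint the interface to receive the converted request:

use Psr\Http\Message\ServerRequestInterface;

class ExampleController {

  // The type hint matches the check above, so the PSR-7 representation of
  // the current request is injected.
  public function handle(ServerRequestInterface $server_request) {
    $query_params = $server_request->getQueryParams();
  }

}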

Next is the check to see if a RouteMatch was requested. The route match makes it easier to work with the current route and parameters passed to it.

elseif ($param->getClass() && ($param->getClass()->name == RouteMatchInterface::class || is_subclass_of($param->getClass()->name, RouteMatchInterface::class))) {
  $arguments[] = RouteMatch::createFromRequest($request);
}

The final check applies the parameter's default value if provided.

elseif ($param->isDefaultValueAvailable()) {
  $arguments[] = $param->getDefaultValue();
}

If the parameter meets none of the conditions, an exception is raised, because the system cannot reliably provide a value for the route callback.

else {
  if (is_array($controller)) {
    $repr = sprintf('%s::%s()', get_class($controller[0]), $controller[1]);
  }
  elseif (is_object($controller)) {
    $repr = get_class($controller);
  }
  else {
    $repr = $controller;
  }

  throw new \RuntimeException(sprintf('Controller "%s" requires that you provide a value for the "$%s" argument (because there is no default value or because there is a non optional argument after this one).', $repr, $param->name));
}

If you reach this point, you will see the cursed white screen of death and a message like the following:

The website encountered an unexpected error. Please try again later. RuntimeException: Controller "Drupal\contribkanban_pages\Controller\PagesController::on404()" requires that you provide a value for the "$server" argument (because there is no default value or because there is a non optional argument after this one).

Debugging \RuntimeExceptions from parameters

Now, hopefully, parameter resolving for controller callbacks has been cleared up. Whenever you hit bugs you can now know to go and debug that exception in \Drupal\Core\Controller\ControllerResolver::doGetArguments.

  • Is there a typo in your parameter in the controller method?
  • If a route parameter, is there a typo in your routing.yml?
  • Did you request a request attribute that is not available (like exception on a valid route)?
  • Were you expecting an upcasted value that was not upcasted?

I was inspired to write this post by a debugging session in Drupal Slack. The following was their routing.yml definition:

commerce_coupon_batch.data_export:
  path: '/promotion/{commerce_promotion}/coupons/export1'
  defaults:
    _controller: '\Drupal\commerce_coupon_batch\Controller\ExportController::exportRedirect'
    _title: 'Export Coupons'
  options:
    _admin_route: TRUE
    parameters:
      commerce_promotion:
        type: 'entity:commerce_promotion'
  requirements:
    _permission: 'administer commerce_promotion'

The route was throwing an exception, even though the route parameter had the proper entity type provided. In Drupal, if a route parameter matches an entity type, the routing system will try to load an entity with that value (identifier or machine name). Here's an example of the controller callback:

use Drupal\commerce_promotion\Entity\Promotion;
use Drupal\Core\Controller\ControllerBase;

class ExportController extends ControllerBase {
  public function exportRedirect(Promotion $promotion) {
    // Logic.
  }
}

Did you notice the bug after going through the parameter resolving process? The method expects a parameter named promotion, yet our route parameter is called commerce_promotion. There is no matching request attribute, raw variable, or default value, so the system crashes.
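
The fix is as simple as renaming the method's parameter to match the route parameter. A minimal sketch of the corrected callback:

use Drupal\commerce_promotion\Entity\Promotion;
use Drupal\Core\Controller\ControllerBase;

class ExportController extends ControllerBase {

  // The parameter is now named after {commerce_promotion} in the path, so
  // doGetArguments finds the upcast entity in the request attributes.
  public function exportRedirect(Promotion $commerce_promotion) {
    // Logic.
  }

}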

Resources

Here's a summary of the documentation links mentioned in the article:

  • The routing system overview on Drupal.org
  • Using parameters in routes
  • Parameter upcasting in routes
  • The API documentation for \Drupal\Core\Controller\ControllerResolver::doGetArguments

Jan 31 2018
Jan 31

In client projects, I push for as much testing as possible. In Drupal 8 we finally have PHPUnit, which makes life much simpler. We use Kernel tests to run through some basic integration tests with a minimally bootstrapped database. However, we need Behat to cover behavior and functional testing on an existing site, for speed and sanity reasons.

For our client project, we host our repository on GitHub and our environments on Platform.sh. Work for sprints and hotfixes is done on its own branch and handled via pull request. With Platform.sh, that gives us a unique environment for each pull request, above and beyond our development, stage, and production environments.

Thumbnail

There is one slight problem, however. Platform.sh environment URLs are a bit random, especially for pull request environments. There is a pattern: branch-randomness-project. It used to be easier to guess, but they made a change in Q3 2016 to better support GitFlow (yay!). It changes a bit when you're using pull requests: the URL does not contain the branch name, but the PR number.

Luckily, there is the Platform.sh CLI tool, which provides a way to look up URLs and other goodies. After reading the Platform.sh blog post Backup and Forget, I realized the same idea could be used to run Behat against our dynamic environments effectively.

Thumbnail

Configuring CircleCI

It is not hard to get set up; there are just a few steps required.

Add API token to environment variables

In order to use the Platform.sh CLI tool, you will need an API token added to your build environments. Follow the instructions on the Platform.sh documentation page https://docs.platform.sh/gettingstarted/cli/api-tokens.html#obtaining-a-token to configure a token.

Configure your project's environment variables. You will want to add a variable named PLATFORMSH_CLI_TOKEN that contains your token. The CLI will automatically detect this environment variable and use it when communicating with the Platform.sh API.

Thumbnail

Install Platform CLI and get the URL

We are using CircleCI 2.0 and their circleci/php:7.1-browsers image. For our setup, I created a script called prepare-behat.sh in our .circleci directory. I'll break the script down into different chunks and then provide it in its entirety.

The first step is to download and install the Platform.sh CLI tool, which is simple enough.

curl -sS https://platform.sh/cli/installer | php

Next, we determine what environment to get a URL for. If the build is for a regular branch, then we can use the branch name. However, if it is a pull request we have to extract the pull request number from the CIRCLE_PULL_REQUEST environment variable. CircleCI provides a CIRCLE_PR_NUMBER environment variable, but only when the pull request is from a forked repository. We do not fork the repository and make pull requests off of branches from the project repository.

if [ -z "${CIRCLE_PULL_REQUEST}" ]; then
  PLATFORMSH_ENVIRONMENT=${CIRCLE_BRANCH}
else
  PR_NUMBER="$(echo ${CIRCLE_PULL_REQUEST} | grep / | cut -d/ -f7-)"
  PLATFORMSH_ENVIRONMENT="pr-${PR_NUMBER}"
fi

Now that we have our environment, we can get the URL from Platform.sh! I used the pipe option so it displays the URLs instead of attempting to launch them, and modified the output so we use the first URL returned. We export it to a variable.

BEHAT_URL=$(~/.platformsh/bin/platform url --pipe -p projectabc123 -e ${PLATFORMSH_ENVIRONMENT} | head -n 1 | tail -n 1)

Behat will read the BEHAT_PARAMS environment variable to override configuration. We'll pipe our environment URL into the Behat configuration.

export BEHAT_PARAMS="{\"extensions\" : {\"Behat\\\MinkExtension\" : {\"base_url\" : \"${BEHAT_URL}\"}}}"

All together, it looks something like this:

#!/usr/bin/env bash
curl -sS https://platform.sh/cli/installer | php

if [ -z "${CIRCLE_PULL_REQUEST}" ]; then
  PLATFORMSH_ENVIRONMENT=${CIRCLE_BRANCH}
else
  PR_NUMBER="$(echo ${CIRCLE_PULL_REQUEST} | grep / | cut -d/ -f7-)"
  PLATFORMSH_ENVIRONMENT="pr-${PR_NUMBER}"
fi

BEHAT_URL=$(~/.platformsh/bin/platform url --pipe -p projectabc123 -e ${PLATFORMSH_ENVIRONMENT} | head -n 1 | tail -n 1)
export BEHAT_PARAMS="{\"extensions\" : {\"Behat\\\MinkExtension\" : {\"base_url\" : \"${BEHAT_URL}\"}}}"

Make sure you commit the script as executable.

Start PhantomJS and run tests!

All that is left now is to add steps to boot up PhantomJS (or other headless browsers), prime the environment variables, and run Behat!

      - run:
          name: Start PhantomJS
          command: phantomjs --webdriver=4444
          background: true
      - run:
          name: Get environment URL and run tests
          command: |
            . .circleci/prepare-behat.sh
            ./bin/behat -c behat.defaults.yml --debug
            ./bin/behat -c behat.defaults.yml

I keep a debug call in so that I can see the output, such as the URL chosen. Just in case things get squirrely.

Testing Goodness

We now have tests running. With some caveats.

Thumbnail

The environment will be deployed regardless of whether testing fails, which kind of defeats the concept of CI/CD.

The workaround would be to kill the GitHub integration and use the environment:push command in a workflow. In this example we'd set up the following flow:

  • Run Unit and Kernel tests via PHPUnit
  • Push to Platform.sh
  • Run Behat tests

There's more to it, like ensuring the environment is active. But the command would look something like this:

platform environment:push -p projectabc123 -e ${PLATFORMSH_ENVIRONMENT}

Sometimes the Behat tests can run before the environment is deployed.

The first option is to implement the full-on CI/CD flow described above. Or, in our case, define a flow which runs the PHPUnit tests first. Those happen to take longer than a deploy, so we're pretty much set. Unless the next issue happens.

Sometimes GitHub sends events before the merge commit is updated on a PR, causing the environment to miss a build. But this only seems to happen after a PR receives a lot of commits.

Then we're kind of stuck. We have to make a comment on the pull request to trigger a new event and then a build. This causes Platform.sh to kick off again, but not the tests.

Final notes

There's more that can be done. I run the preparation script in the same step as our tests to ensure the variables are available. I really could export them to $BASH_ENV so they are sourced in later steps. I'm sure there are workarounds to my caveats. But we were able to set up a continuous integration flow within a few hours and start shipping more reliable code.

Jan 20 2018
Jan 20

This year I joined a coffee exchange for some members of the Drupal community. I had known there was one floating around, but finally got signed up. Over the past two years, I have gotten more and more into coffee - being a coffee snob about roasts and learning brewing techniques. Last week we were paired up and sent out some roasts.

I sent my match some coffee from one of my favorite Wisconsin roasters, Just Coffee Cooperative. I shipped out the WTF Roast, a single origin Honduras, which is named after the podcast by Marc Maron. And it turns out the recipient is a huge fan of the podcast! Coincidence score!

I received the Sumatra roast from Three Fins out of Cape Cod. Not only did I receive some rad beans, there was a note from the owner.

Grew up in Kenosha, went to Tremper, lived near there. Grab a Spot burger for me.

Talk about a small world. Turns out the owner of the roaster is actually from my town, albeit the Southside.

Thumbnail

In Kenosha, we have two classic drive-in diners. The Spot on the Southside and Big Star on the Northside. For the record, I grew up on the Northside, which is the better half of the city. I'm definitely partial to Big Star, but a good burger is a good burger. I also realized that I haven't been to The Spot in a few years, maybe only once since their second location on my side of town closed years ago.

Thanks for some delicious coffee and a reminder to get a good burger, Ron! I had a burger for you, and it is still damn good.

Thumbnail
Jan 17 2018
Jan 17

It seems like RSS does not have quite the buzz it once had, years ago. There are reasons for that, but I partly believe it is because more services mask direct RSS feed subscriptions behind larger aggregation tools. This change also makes it trickier to get analytics about where traffic is coming from, and from which feed. When I migrated my site to Drupal 8, I decided to take an adventure in adding UTM parameters to my RSS feeds.

This was not nearly as easy as I had thought it would be. My RSS feeds are generated using Views. My Views configuration is straightforward, pretty much the out-of-the-box setup. It uses the Feed display plugin and the Content row display. The setup I assume just about everyone has.

Blog RSS feed configuration


In order to start attributing my RSS links and understanding my referral traffic, I needed to adjust the links in my RSS feed for the content. So my first step was to review the node_rss plugin. The link is generated from the entity's canonical URL with an absolute path, which is to be expected, but there is no way to alter this output (unless you want to alter every URL as it is generated).

// \Drupal\node\Plugin\views\row\Rss::render()

    $node->link = $node->url('canonical', ['absolute' => TRUE]);

// ...

    $item->title = $node->label();
    $item->link = $node->link;
    // Provide a reference so that the render call in
    // template_preprocess_views_view_row_rss() can still access it.
    $item->elements = &$node->rss_elements;
    $item->nid = $node->id();

// If only I could alter $item!

    $build = [
      '#theme' => $this->themeFunctions(),
      '#view' => $this->view,
      '#options' => $this->options,
      '#row' => $item,
    ];

    return $build;


My next thought was to just extend the node_rss plugin and add my link generation logic. But that's messy, too. I would have to override the entire method, not some "get the link" helper, meaning I'd have to maintain more code on my personal site.

So I kept digging. My next step was to visit template_preprocess_views_view_row_rss and see if I can do a good ole preprocess, hack it in, and call it a dirty day. And, guess what? I could. I did. And I feel a little dirty about it, but it gets the job done.

function bootstrap_glamanate_preprocess_views_view_row_rss(&$variables) {
  /** @var \Drupal\views\ViewExecutable $view */
  $view = $variables['view'];
  if ($view->id() == 'taxonomy_term') {
    $term = \Drupal\taxonomy\Entity\Term::load($view->args[0]);
    $label = $term->label();
    if ($label == 'drupal') {
      $source = 'Drupal Planet';
    }
    else {
      $source = 'Term Feed';
    }
    // Encode the values so spaces do not produce an invalid URL.
    $variables['link'] .= '?utm_source=' . urlencode($source) . '&utm_medium=feed&utm_campaign=' . urlencode($label);
  }
}

My taxonomy term feed is what funnels into Drupal Planet, so only posts tagged "Drupal" show up. So in my preprocess I check the view and decide whether to run. I then attribute the source, medium, and campaign.

Ideally, I should make a UTM-friendly row display which extends node_rss and allows Views substitutions to populate the UTM parameters when generating the URL object. I might do that later. But hopefully this helps anyone looking to do something similar.
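
As a rough illustration of that idea, a row plugin could extend the core node_rss plugin and append the parameters after rendering. This is only a sketch; the module name, plugin ID, and hard-coded UTM values are placeholders:

<?php

namespace Drupal\mymodule\Plugin\views\row;

use Drupal\node\Plugin\views\row\Rss;

/**
 * Row plugin that appends UTM parameters to each feed item link.
 *
 * @ViewsRow(
 *   id = "node_rss_utm",
 *   title = @Translation("Content (with UTM parameters)"),
 *   help = @Translation("Display the content with standard node view, appending UTM parameters."),
 *   theme = "views_view_row_rss",
 *   register_theme = FALSE,
 *   base = {"node"},
 *   display_types = {"feed"}
 * )
 */
class RssUtm extends Rss {

  /**
   * {@inheritdoc}
   */
  public function render($row) {
    $build = parent::render($row);
    // The parent render sets #row to the feed item, link included.
    $build['#row']->link .= '?' . http_build_query([
      'utm_source' => 'rss',
      'utm_medium' => 'feed',
      'utm_campaign' => $this->view->id(),
    ]);
    return $build;
  }

}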

Jan 09 2018
Jan 09

On New Year's Day I sat down with ReactJS and decided to see what all the commotion was about. I primarily work on the backend and have dabbled lightly with AngularJS. Generally, my JavaScript work just falls to basic DOM manipulation using vanilla JS or jQuery. Nothing fancy, no components. ReactJS is fun.

Why is ReactJS so much fun, and popular? It is hackable, easy to manipulate, and it is solving hard problems in a new way. New solutions to hard problems bring a surge of euphoria. In the Drupal world, this is allowing us to harness Drupal as a data and content management system and not the presentation system. Not everyone uses Drupal in this way, but I have always envisioned Drupal as a great backend for any kind of application beyond normal content. The main roadblock I ended up encountering is fighting the render and form API in complex scenarios.

I have some experience in AngularJS and have fiddled with Angular (2 and then 4). I felt more headache and disappointment when working with Angular (and AngularJS). When learning Angular, part of the friction may have been TypeScript instead of just Babel transpiling ES6. One thing I appreciated when working with ReactJS is just handling the rendering of components, without also needing to use the services, factories, and other patterns forced by Angular.

There is a thing for everything

Since ReactJS is fun and popular, there are also tons of libraries. I was able to find solutions to many problems by just searching the NPM package repository. I do not always believe in dependency stuffing and do not encourage adding a dependency just for a small feature. But, libraries provide a great reference point for learning or quick prototyping.

I learned some tips and tricks by reviewing existing libraries. I would pick something that kind of did what I needed it to, then reviewed how to implement it on my own without the bells and whistles I did not need, or fix what I did need.

The component state

When working with your component there is the state property, which is just a JavaScript object of properties. You can use this for logic checks and provide values in your rendered component. The following example would render the text in a heading element.

import React, { Component } from 'react'

class App extends Component {
  state = {
    header: 'Check out this super duper rad text!',
  }
  render () {
    return (
      <h1>{this.state.header}</h1>
    )
  }
}

The state property can be modified directly, but it should be treated as an immutable object. Whenever you need to change a state value, use the this.setState({}) function. I did not know this at first and had some weird behaviors. If you set the state directly, it actually does get saved. But it is not reflected until some other action forces the component to re-render itself.

To change the heading text, I would run something like the following in a function:

this.setState({
  header: 'Mah new awesome header!',
});

State management is not recursive

Once I started to understand state and properly used this.setState, I hit a whole new roadblock. The method does a merge when setting values, but not a recursive merge. When setting a nested value, the sibling values under that key are removed. Take the following state definition.

  state = {
    data: [],
    filterParams: {
      _format: 'json',
      name: ''
    },
  }

We have a data property and then filter parameters we would pass to an API endpoint. For example's sake, somewhere on the form an input updates the name to filter by. If we ran the following setState operation whenever the input changed, we'd end up losing some values.

this.setState({
  filterParams: {
    name: 'Widget'
  }
});

After running this, our state object would look like the following.

  state = {
    data: [],
    filterParams: {
      name: ''
    },
  }

We lost our _format key! Luckily, JavaScript has the spread operator. The spread operator is just three dots (...) that allow you to expand an expression. In the following example, we use ...this.state.filterParams to spread the existing values and then apply any specific overrides. This allows us to maintain previous values while providing new ones.


this.setState({
  filterParams: {
    ...this.state.filterParams,
    name: 'Widget'
  }
})

Updating the component state is asynchronous.

Like most things JavaScript, setting the component state is not a synchronous operation. I discovered this after playing around with the paging in the React DB Log prototype. The component state kept track of the current page. When clicking a paging button the page was incremented or decremented and then results fetched from the API endpoint.

The page state was updated, but the fetch did not use the proper page. That is because I was not using the setState callback.

// Before
this.setState({
  page: this.state.page - 1
});
this.fetchLogEntries(this.state.page);

// After
this.setState({
  page: this.state.page - 1
}, () => {
  this.fetchLogEntries(this.state.page);
});

Chrome debugger tool is super sauce

The React team has created a Chrome browser extension that adds a React tab to the Chrome Developer Tools. This allows you to inspect the components and their state, which makes debugging a lot easier. Using that tool is how I learned the caveats in the above scenarios.

Get it here: https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi

There is a Firefox extension and standalone app that can be found at https://reactjs.org/community/debugging-tools.html

Jan 05 2018
Jan 05

On a client project, we wanted to prevent search engines from accessing pages on the site. The pages needed to be directly linkable from various sources - public, but not searchable. So we did the reasonable thing: we modified our robots.txt and added our meta tags. Unfortunately, this didn't seem to work as we expected, and pages still showed up in Google searches.

We received reports that links were showing up in Google search results. And they were. With the page title and the description of "No information is available for this page."

Google Search WTF

But... why?

After digging through Google's support documentation, I came across the topic Block search indexing with 'noindex' at https://support.google.com/webmasters/answer/93710?hl=en

Important! For the noindex meta tag to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex tag, and the page can still appear in search results, for example if other pages link to it.

Well, that is conflicting. If your robots.txt blocks a page, the bot does not crawl it and cannot read its meta tags. That makes sense. However, if someone links to your page without adding nofollow to their anchor tag, it will still get indexed. So the meta tags cannot be respected when your content is directly linked. That's cool.

X-Robots-Tag to the rescue?

Then, at the bottom of the article, it states that Google supports an X-Robots-Tag header. There is no disclaimer here about meta information, so I am assuming this might be our fix-all. To add the header, I created a response event subscriber. The event subscriber adds our header to the response if it is an HtmlResponse.

Services file

services:
  mymodule.response_event_subscriber:
    class: \Drupal\mymodule\EventSubscriber\ResponseEventSubscriber
    tags:
      - { name: 'event_subscriber' }

EventSubscriber class

<?php

namespace Drupal\mymodule\EventSubscriber;

use Drupal\Core\Render\HtmlResponse;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Class ResponseEventSubscriber.
 */
class ResponseEventSubscriber implements EventSubscriberInterface {

  /**
   * Explicitly tell Google to bugger off.
   *
   * @param \Symfony\Component\HttpKernel\Event\FilterResponseEvent $event
   */
  public function xRobotsTag(FilterResponseEvent $event) {
    $response = $event->getResponse();
    if ($response instanceof HtmlResponse) {
      $response->headers->set('X-Robots-Tag', 'noindex');
    }
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events[KernelEvents::RESPONSE][] = ['xRobotsTag', 0];
    return $events;
  }

}

Results?

It has only been active for a few days, so I do not know the exact results. I am guessing the client will need to perform specific actions within Google and Bing webmaster consoles. 

Jan 02 2018
Jan 02

I spent the last 8 days of 2017 not touching my computer. Except for one night when, a few old fashioneds in, I decided to upgrade my MacBook to High Sierra "for the hell of it." Then New Year's came, and we are riding into 2018. I'm also going to try to focus more on blogging. This was my goal for the end of 2017, but I did not stick to it. However, a tweet sent out by Dries reinforced that goal and is something I plan to work more on.

One of the things currently on my mind as we enter 2018: I feel like using social media less, and blogging more. Be part of the change that you wish to see in the world.

— Dries Buytaert (@Dries) January 1, 2018

Learning something new: ReactJS

On New Year's Day, I followed my annual tradition: learn and try something new. I don't remember what I did in 2016, but in 2015 I wanted to play with modern JavaScript and picked up AngularJS. The outcome was the creation of ContribKanban. This time around I spent time learning ReactJS. I had been loosely following the JavaScript Framework Initiative for Drupal core and knew the JavaScript maintainers had decided to move forward with ReactJS for initial prototyping. The initial spec: using a ReactJS UI for the database log.

ReactJS

The proof of concept can be found on GitHub at https://github.com/jsdrupal/drupal-react-db-log. I used this as my starting point to learn ReactJS. On a local development site, I created my own Views REST export and built a UI to consume its output. I wanted to take sample code and write my own version so I understood it. I learned a lot, which I'll cover in a later blog post. I was also able to provide two pull requests.

I have been working with Angular here and there since I was familiar with AngularJS. After one day I am much happier with ReactJS. But that's because ReactJS is more hack and play, where Angular has a forced set of opinions. Also, I was introduced to Angular through TypeScript, which adds a barrier of its own. ReactJS with ES6 and Babel is still easier to grok, in my opinion.

A year of travel

2017 was heavy on the travel side, and probably the most I have ever traveled. In March I had the honor to keynote my first conference and go to London for the first time. Then MidCamp in Chicago and DrupalCon Baltimore. Then it was back over to Europe for DrupalCon Vienna. I was more excited at the fact that our seven-person team at Commerce Guys was able to be together in the same room for the first time. Working in a remote team across five time zones is challenging, and the in-person time is invaluable. 2017 closed off with a trip to New York with Ryan and Bojan to speak at the Drupal NYC Meetup.

Bike in Belgrade

In between all of that was a non-Drupal conference that we attended: IRCE, the Internet Retailer Conference and Expo. It was a great experience to see what services Drupal Commerce is competing against and how it stacks up. It was a bit difficult, though, explaining to people the lack of licensing fees they are used to.

Focus on the offline

With the push to get Drupal Commerce from alpha at the turn of 2017 to a stable release by the end of September, there was not much offline time. I did not read as many books as I normally would. I listened to plenty of audiobooks and podcasts, but that was due to travel. I do fairly well at keeping work and open source from creeping in on my family time, but at the expense of legitimate personal time. A goal of mine for 2018 is to read more books, not just listen to them, and find a hobby that isn't writing code... or drinking coffee or drinking whisky.

Whisky and coffee
Dec 02 2017
Dec 02

I recently hit a curveball with a custom migration to import location data into Drupal. I am helping a family friend merge their bespoke application into Drupal. Part of the application involves managing Locations. Locations have a parent location, so I used taxonomy terms to harness their relationship capabilities. Importing levels one, two, three, and four worked great! Then, all of a sudden, when I began to import levels five and six I ran into an error: Specified key was too long; max key length is 3072 bytes

Migration error output

False positives

My first reaction was a subtle table flip and bang on the table. I had just spent the past thirty minutes writing a Kernel test which executed my migrations and verified the imported data. Everything was green. Term hierarchy was preserved. Run it live: table flip breakage. But, why?

When I run tests I do so with PHPUnit directly, to just execute my one test class or a single module. Here is the snippet from my project's phpunit.xml:

    <!-- Example SIMPLETEST_DB value: mysql://username:[email protected]/databasename#table_prefix -->
    <env name="SIMPLETEST_DB" value="sqlite://localhost/sites/default/files/.ht.sqlite"/>

Turns out when running SQLite there are no constraints on the index key length, or there's some other deeper explanation I didn't research. The actual site is running on MySQL/MariaDB. As soon as I ran my Console command to run the migrations the environment change revealed the failure.

Finding a possible fix

Following the wrong path to fixing a bug, I went on a frantic Google search trying to find an answer which asserted my expected fix: that there is some bogus limitation on the byte size that I can somehow bypass. So I reviewed the fact that InnoDB has the limit of 3072 bytes, and that MyISAM has a different one (later, I found it is worse at 1000 bytes, one reason why we're all using InnoDB).

I found an issue on Drupal.org which was reported about the index key being too long. The bug occurs when too many keys are listed in the migration's source plugin. Keys identify unique rows in the migration source. They are also used as the index in the created migrate_map_MIGRATION_ID table. So if you require too many keys to identify unique rows, you will experience this error. In most cases, you can probably break up your CSV and normalize it to make it easier to parse.

Indexes were added in Drupal 8.3 to improve performance, so I had to find a fix. First I tried swapping back to MyISAM, realizing that was a fool's errand, but I was desperate.

I thought about trying to normalize the CSV data and make different, smaller, files. But there was a problem: some locations share the same name but have different parent hierarchies. A location in Illinois could share the same name as a location in a Canadian territory. I needed to preserve the Country -> Administrative Area -> Locality values in a single CSV.

A custom idMap plugin to the rescue

If you have worked with the Migrate module in Drupal you are familiar with process plugins and possibly familiar with source plugins. The former help you transform data and the latter bring data into the migration. Migrate also has an id_map plugin type. There is one single plugin provided by Drupal core: sql. I never knew or thought about this plugin because it is never defined in a migration. In fact, we never have to define it.

  /**
   * {@inheritdoc}
   */
  public function getIdMap() {
    if (!isset($this->idMapPlugin)) {
      $configuration = $this->idMap;
      $plugin = isset($configuration['plugin']) ? $configuration['plugin'] : 'sql';
      $this->idMapPlugin = $this->idMapPluginManager->createInstance($plugin, $configuration, $this);
    }
    return $this->idMapPlugin;
  }

If a migration does not provide an idMap definition it defaults to the core default sql mapping.

Hint: if you want to migrate into a non-SQL database you're going to need a custom id_map plugin!

Once I found this plugin I was able to find out where it created its table and the index information. Bingo! \Drupal\migrate\Plugin\migrate\id_map\Sql::ensureTables

  /**
   * Create the map and message tables if they don't already exist.
   */
  protected function ensureTables() {

In this method, it creates the schema that the table will use. There is some magic in here. By default, all keys are treated as a varchar field with a length of 64. But, then, it matches up those keys with known destination field values. So if you have a source value going to a plain text field it will change to a length of 255.

      // Generate appropriate schema info for the map and message tables,
      // and map from the source field names to the map/msg field names.
      $count = 1;
      $source_id_schema = [];
      $indexes = [];
      foreach ($this->migration->getSourcePlugin()->getIds() as $id_definition) {
        $mapkey = 'sourceid' . $count++;
        $indexes['source'][] = $mapkey;
        $source_id_schema[$mapkey] = $this->getFieldSchema($id_definition);
        $source_id_schema[$mapkey]['not null'] = TRUE;
      }

      $source_ids_hash[static::SOURCE_IDS_HASH] = [
        'type' => 'varchar',
        'length' => '64',
        'not null' => TRUE,
        'description' => 'Hash of source ids. Used as primary key',
      ];
      $fields = $source_ids_hash + $source_id_schema;

      // Add destination identifiers to map table.
      // @todo How do we discover the destination schema?
      $count = 1;
      foreach ($this->migration->getDestinationPlugin()->getIds() as $id_definition) {
        // Allow dest identifier fields to be NULL (for IGNORED/FAILED cases).
        $mapkey = 'destid' . $count++;
        $fields[$mapkey] = $this->getFieldSchema($id_definition);
        $fields[$mapkey]['not null'] = FALSE;
      }

This was my issue. The destination for each of my keys is the term name field, which has a length of 255. Great. But there is no way to interject here and change that value. All of the field schemas come from field and typed data information.

The solution? Make my own plugin. The following is my sql_large_key ID mapping class.

<?php

namespace Drupal\mahmodule\Plugin\migrate\id_map;

use Drupal\migrate\Plugin\migrate\id_map\Sql;

/**
 * Defines the sql based ID map implementation.
 *
 * It creates one map and one message table per migration entity to store the
 * relevant information.
 *
 * @PluginID("sql_large_key")
 */
class LargeKeySql extends Sql {

  /**
   * {@inheritdoc}
   */
  protected function getFieldSchema(array $id_definition) {
    $schema = parent::getFieldSchema($id_definition);
    // Shrink varchar keys so that six of them fit under MySQL's 3072 byte
    // index limit.
    if ($schema['type'] == 'varchar') {
      $schema['length'] = 100;
    }
    return $schema;
  }

}

The following is an example migration using my custom idMap definition.

id: company_location6
status: true
migration_tags:
  - company
idMap:
  plugin: sql_large_key
source:
  plugin: csv_by_key
  path: data/company.csv
  header_row_count: 1
  # Need each unique key to build proper hierarchy
  keys:
    - Location1
    - Location2
    - Location3
    - Location4
    - Location5
    - Location6
process:
  name:
    plugin: skip_on_empty
    method: row
    source: Location6
  vid:
    plugin: default_value
    default_value: locations
  # Find parent using key from previous level migration.
  parent_id:
    -
      plugin: migration
      migration:
        - company_location5
      source_ids:
        company_location5:
          - Location1
          - Location2
          - Location3
          - Location4
          - Location5
  parent:
    plugin: default_value
    default_value: 0
    source: '@parent_id'
destination:
  plugin: 'entity:taxonomy_term'
migration_dependencies: {  }

And, voila! 6,000 locations later there is a preserved hierarchy!

Oct 29 2017
Oct 29

The JSON API module is becoming wildly popular in Drupal 8 as an out-of-the-box way to provide an API server. Why? Because it implements the {json:api} specification. It's still a RESTful interface, but the specification helps bring an open standard for how data should be represented and requests should be constructed. The JSON API module exposes collection routes, which allow retrieving multiple resources in a single request. You can also pass filters to restrict the resources returned.

An example would be “give me all blog posts that are published.”

GET /jsonapi/node/article?filter[status][value]=1

But, what about complex queries? What if we want to query for content based on keywords? Collections are regular entity queries, which means they are as strong as any SQL query. If you wanted to search for blogs by title or body text, you could do the following. Let's say we searched for "Drupal" blogs:

GET /jsonapi/node/article?
filter[title-filter][condition][path]=title&
filter[title-filter][condition][operator]=CONTAINS&
filter[title-filter][condition][value]=Drupal&
filter[body-filter][condition][path]=body.value&
filter[body-filter][condition][operator]=CONTAINS&
filter[body-filter][condition][value]=Drupal
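
Under the hood those filters are applied to a regular entity query, so the request above is roughly equivalent to this sketch (multiple filters are combined with AND by default):

// A rough sketch of the entity query the collection builds.
$nids = \Drupal::entityQuery('node')
  ->condition('type', 'article')
  ->condition('title', 'Drupal', 'CONTAINS')
  ->condition('body.value', 'Drupal', 'CONTAINS')
  ->execute();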

This could work. However, it starts to break down on complex queries and use cases - outside of searching for a blog.

Providing a collection powered by Search API

I have a personal project that I've been slowly working on which requires searching for food. It combines text, geolocation, and ratings to return the best results. The go-to solution for returning this kind of result data is an Apache Solr search index, and the Search API module makes it a breeze to index Drupal entities with this information. My issue was retrieving the data over a RESTful interface. I wanted a decoupled Drupal instance so the app could eventually support native applications on iOS or Android. Also, because it sounded fun.

I was able to create a new resource collection which used my Solr index as its data source. There are some gotchas, however. At the time I first wrote this code (a few months ago), I had to support query filtering manually, since the properties did not exist as real fields inside of Drupal. In my example I don't fully adhere to the JSON API specification, but it works, and it's not finished.

Here is the routing.yml within my module:

forkdin_api.search:
  path: '/api/search'
  defaults:
    _controller: '\Drupal\forkdin_api\Controller\Search::search'
  requirements:
    _entity_type: 'restaurant_menu_item'
    _bundle: 'restaurant_menu_item'
    _access: 'TRUE'
    _method: GET
  options:
    _auth:
      - cookie
      - always
    _is_jsonapi: 1

My Search API index has one data source: the restaurant menu item. JSON API collection routes expect there to be an entity type and bundle type in order to re-use the existing functionality. All JSON API routes are also flagged as _is_jsonapi to get this fanciness.

The following is the route controller. All that is required is that we return a ResourceResponse which contains an EntityCollection object, made from all of the entity resources to be returned. This controller takes the filters passed and runs an index query through Search API. I load the entities from the result and pass them into a collection to be returned. Since I’m not re-using a JSON API controller I made sure to add url.query_args:filter as a cacheable dependency for my response.

<?php

namespace Drupal\forkdin_api\Controller;

use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\jsonapi\Resource\EntityCollection;
use Drupal\jsonapi\Resource\JsonApiDocumentTopLevel;
use Drupal\jsonapi\ResourceResponse;
use Drupal\jsonapi\Routing\Param\Filter;
use Drupal\search_api\ParseMode\ParseModePluginManager;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\Request;

class Search extends ControllerBase {

  /**
   * @var \Drupal\search_api\IndexInterface
   */
  protected $index;

  /**
   * The parse mode manager.
   *
   * @var \Drupal\search_api\ParseMode\ParseModePluginManager|null
   */
  protected $parseModeManager;

  public function __construct(EntityTypeManagerInterface $entity_type_manager, ParseModePluginManager $parse_mode_manager) {
    $this->entityTypeManager = $entity_type_manager;
    $this->index = $entity_type_manager->getStorage('search_api_index')->load('food');
    $this->parseModeManager = $parse_mode_manager;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('entity_type.manager'),
      $container->get('plugin.manager.search_api.parse_mode')
    );
  }

  public function search(Request $request) {
    $latitude = NULL;
    $longitude = NULL;
    $fulltext = NULL;

    // @todo move to a Filter object less strict than `jsonapi` one
    //       this breaks due to entity field definitions, etc.
    $filters = $request->query->get('filter');
    if ($filters) {
      $location = $filters['location'];
      if (empty($location)) {
        // @todo look up proper error.
        return new ResourceResponse(['message' => 'No data provided']);
      }

      // @todo: breaking API, because two conditions under same type?
      $latitude = $location['condition']['lat'];
      $longitude = $location['condition']['lon'];

      if (isset($filters['fulltext'])) {
        $fulltext = $filters['fulltext']['condition']['fulltext'];
      }
    }
    $page = 1;

    $query = $this->index->query();
    $parse_mode = $this->parseModeManager->createInstance('terms');
    $query->setParseMode($parse_mode);

    if (!empty($fulltext)) {
      $query->keys([$fulltext]);
    }

    $conditions = $query->createConditionGroup();
    if (!empty($conditions->getConditions())) {
      $query->addConditionGroup($conditions);
    }
    $location_options = (array) $query->getOption('search_api_location', []);
    $location_options[] = [
      'field' => 'latlon',
      'lat' => $latitude,
      'lon' => $longitude,
      'radius' => '8.04672',
    ];
    $query->setOption('search_api_location', $location_options);
    $query->range(($page * 20), 20);
    /** @var \Drupal\search_api\Query\ResultSetInterface $result_set */
    $result_set = $query->execute();

    $entities = [];
    // @todo are the already loaded, or can be moved to get IDs then ::loadMultiple
    foreach ($result_set->getResultItems() as $item) {
      $entities[] = $item->getOriginalObject()->getValue();
    }

    $entity_collection = new EntityCollection($entities);
    $response = new ResourceResponse(new JsonApiDocumentTopLevel($entity_collection), 200, []);
    $cacheable_metadata = new CacheableMetadata();
    $cacheable_metadata->setCacheContexts([
      'url.query_args',
      'url.query_args:filter',
    ]);
    $response->addCacheableDependency($cacheable_metadata);
    return $response;
  }

}

The request looks like:

GET /api/search?
filter[fulltext][condition][fulltext]=&
filter[location][condition][lat]=42.5847425&
filter[location][condition][lon]=-87.8211854&
include=restaurant_id

With an example result:

Results
Oct 23 2017

Back in 2015, I created ContribKanban.com as a way to experiment with AngularJS and to give myself a better tool for identifying issues to sprint on at my previous employer, before Commerce Guys. At the Global Sprint Weekend in Chicago, I received a lot of feedback from YesCT and ended up open sourcing the project. That first weekend made me see it was more useful than just a side project.

I was pretty excited to hear it was used by the Drupal 8 Media Initiative (not sure if they still do) and other projects. But then it started to become a burden. It is a MEAN stack application, and I'm not an expert with MongoDB or with using pm2 to manage a Node server. Basically, it is at the point where I don't want to touch it, lest it break and I burn an afternoon on it.

Next Moves

Instead of letting the project go into retirement, I have decided to revamp it. I'm rebuilding it on top of Drupal 8 and a SQLite database with a light front end (jQuery and Drupal.* functions).

Preview of what is next

Moving to Drupal 8 allows me to work with what I know best, and it also makes data management much easier. Currently, projects are stored in a MongoDB document, and their boards can be overridden via some JSON in the repository. This allowed projects to have boards limited to a specific tag or a project release parent issue, but it involved some level of effort on my end. It also wasn't super easy to add core-specific boards.

Here's an admin screen for creating and editing a board:

Creating a board

The end result is something like this: https://nextgen.contribkanban.com/board/Migrationsystem

As you can see from that URL, a preview is available at https://nextgen.contribkanban.com/. Boards and sprints can be added as they used to be, by entering a project's machine name or an issue tag. There are some features missing and small bugs to work out, but this should make it much easier to create custom boards with any kind of list needed.

The idea is that there are board-level configurations and then list-level overrides.

I'm excited to move it to Drupal 8 because I hope it provides example code for others. The code can be found at https://github.com/mglaman/contribkanban.com/tree/nextgen. It is hosted on GitHub and uses CircleCI to deploy to my DigitalOcean droplet.

Want to support the project?

This is a side project and comes second to my work and duties as a maintainer of Drupal Commerce. There is a Gratipay project page and I have a Patreon account.

I'd like to give a shout-out to Joris Vercammen, who found my Patreon page, pledged enough to fund my monthly hosting cost, and kind of helped rekindle the project.

Oct 16 2017

Secure sites. HTTPS and SSL. A topic more and more site owners and maintainers are having to work with. For some, this is a great thing; for others, it is either nerve-wracking or confusing. Luckily for us all, getting an SSL certificate and implementing full-site HTTPS is becoming easier.

Why does an SSL matter?

First, let's talk about why having an SSL certificate and wrapping your site in HTTPS matters. For instance, some people think it is fine to leave their e-commerce site on plain HTTP because their payment gateway is PayPal.

Beyond security, consider the fact that Google announced in 2014 that HTTPS would be used as a ranking signal. Three years ago, Google made the push for a more secure web by making this choice. According to the Internet, Bing has stayed away from this sort of decision. But if you (or your customer) care about SEO, I hope this helps make a case.

Google's search rankings are not your only worry. Chrome and Firefox have started alerting users that a site is not secure when they fill in sensitive form data, such as passwords and credit card fields. The Google Security Blog announced the move last year, and Firefox did the same in early 2017.

Isn't HTTPS slow?

Years ago, it was thought that SSL was slow due to the handshakes involved. In fact, with HTTP/2, which browsers only support over TLS, an HTTPS site can actually be faster. If you do not believe me, go to http://www.httpvshttps.com/.

Getting an SSL is easier now.

I remember when having to purchase and then install an SSL certificate was a drag. It cost extra money, even if a paltry amount when broken down into monthly costs (~$5 a month), and required time to install. Thanks to Let's Encrypt, it has become much easier to get an SSL certificate. Let's Encrypt is a free and open certificate authority (CA), which means it can provide signed and authorized SSL certificates. You won't get Organization Validation (OV) or Extended Validation (EV) certificates, but generally you do not need those.

Let's Encrypt logo

Let's Encrypt is great, but it requires you to run some tools to automate certificate renewal and installation. Luckily, more and more hosting platforms and CDNs are providing free or bundled SSL certificates.

Let's roll through some options. Please comment and share corrections or services that I have missed! The following items are services I use or found to be great on cost and ease of use.

Content Delivery Networks (CDN)

Putting a CDN in front of your website is one of the simplest ways to get your site wrapped in HTTPS without having to change your server or hosting setup. This is your best option if you run your own servers and don't want to mess with certificates directly, or if your current host does not provide free or cheap SSL certificate support. It also improves your site's performance for visitors.

CloudFlare is the CDN solution I use for this site to provide fully wrapped HTTPS support. CloudFlare has a great free plan that provides DDoS mitigation, a CDN, an SSL certificate, and some other goodies. This is my go-to solution.

Hosting providers

More and more hosting providers are offering free SSL certificates. I've done some basic research, but these picks are based on services I have used or am familiar with.

Pantheon is a managed hosting service for Drupal and WordPress. Starting at $25 a month, you get a managed hosting service, three environments (development, test, production), and a CDN with free SSL. If you want to install a custom SSL certificate, though, you will need to jump up to the Professional offering at $100 a month. Before Pantheon announced their global CDN and free SSL, I had never considered them due to the price of the monthly service once an SSL was factored in. Next to using CloudFlare, it's your best bet for a hands-off, peace-of-mind approach.

Platform.sh is my favorite and general go-to for price and value. You can host your PHP, Node.js, Python, and Ruby projects. Plans start at $50 a month for a production site, which seems a bit expensive, but that gets you an automatic free SSL certificate, and you can still install custom SSL certificates at no additional charge. You also get other goodies, such as the ability to use Solr, Redis caching, and more.

Gandi.net is a hosting provider that was brought to my attention while finding homes for ContribKanban. For $7.50 a month you can get their Simple Hosting with free SSL. You can run PHP, Node.js, Ruby, or Python apps on their hosting, managed through a web administration interface.

Using Let's Encrypt itself

You can, of course, use Let's Encrypt yourself on your own hosting: run certbot on your DigitalOcean droplet, or just generate certificates and add them to your existing host.
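
As a rough sketch of that flow on an Ubuntu droplet running Nginx (package names vary by distribution and release, and example.com is a placeholder):

sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
# Confirm that automated renewal will work.
sudo certbot renew --dry-run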

Oct 08 2017

During DrupalCon Vienna, the second edition of the Drupal 8 Development Cookbook was published! The Drupal 8 Development Cookbook was first published just over a year ago, right after Drupal 8.1 was released. I had written the book for 8.0 with "just in case" notes for what might change in Drupal 8.1. What I was not prepared for was how well the minor release system worked and how rapidly it delivered feature changes. As I saw Drupal 8.4 approach, I felt it was time to create a second edition to capture those changes.

What's changed?

Some chapters did not change much beyond updated screenshots for user interface changes. There were, however, some larger changes.

For starters, any references to using Drush for downloading modules have been removed in favor of using Composer. In fact, manual downloads should go the same way. Why? In Drush 9, the ability to download modules and themes has been removed.
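
For example, instead of drush dl, a module is now added with Composer. A sketch, assuming the project is configured with the Drupal.org Composer package repository and using an arbitrary module as the example:

composer require drupal/admin_toolbar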

A new recipe was added to the Extending Drupal chapter to cover event subscribers, something I missed the first time and noticed many Drupal Commerce implementers asking questions about.

The Plug and Play with Plugins chapter was revised for the Creating a custom plugin type recipe. The Physical module, featured in the first edition, was finally ported and does not use plugins. The second edition covers the GeoIP module, which utilizes plugins.

Two chapters received major overhauls and rewrites thanks to efforts accomplished in the 8.1, 8.2, and 8.3 releases. The Entity API chapter has been updated to remove boilerplate items now provided by Drupal core, making the creation of custom entities much simpler. The Web Services chapter was completely rewritten. Thanks to the API-first initiative, the RESTful Web Services module was heavily improved. I also added a recipe on how to work with Contenta CMS, the headless distribution for Drupal 8.

Oh, and a new cover!

Drupal 8 Development Cookbook - second edition

Where can you get it?

You can, of course, get the book from Packt Publishing. It is available in print, as an ebook, and through their Mapt service: http://packtpub.com/web-development/drupal-8-development-cookbook-second-edition

The book is also available on Amazon https://www.amazon.com/dp/1788290402

Sep 11 2017

My personal site is now officially migrated onto Drupal 8! I had first attempted a migration of my site back when Drupal 8.0 was released, but had a few issues. With Drupal 8.3 it was nearly flawless (maybe even 8.2, but I had put the idea on the back burner). I did have some interesting issues to work around.

Missing filter plugins

My migration process was halted and littered with errors due to missing plugins, specifically around my text formats. The culprits were:

  • Media 2.x and media filter
  • GitHub Gist filter
  • OEmbed filter
  • Twitter links and usernames

These plugins were missing from my code base, so I had to replicate them. The Twitter module has been ported, but I didn't have a need for it anymore, so I wrote skeleton Filter plugins that had no functionality, along the lines of the sketch below. However, I relied heavily on the others.
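
A minimal no-op filter plugin looks roughly like this. This is a sketch; the module name, plugin ID, and class name are hypothetical:

<?php

namespace Drupal\mymodule\Plugin\Filter;

use Drupal\filter\FilterProcessResult;
use Drupal\filter\Plugin\FilterBase;

/**
 * No-op stand-in for the Drupal 7 Twitter username filter.
 *
 * @Filter(
 *   id = "filter_twitter_username",
 *   title = @Translation("Twitter username filter (no-op)"),
 *   type = Drupal\filter\Plugin\FilterInterface::TYPE_TRANSFORM_IRREVERSIBLE
 * )
 */
class TwitterUsername extends FilterBase {

  /**
   * {@inheritdoc}
   */
  public function process($text, $langcode) {
    // Pass the text through unchanged; the plugin only needs to exist so
    // that migrated text formats referencing it remain valid.
    return new FilterProcessResult($text);
  }

}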

For the Media filter, I had tried to follow the solutions in Kalamuna's blog post. However, it did not quite work, since mine was not a custom migration. So I opted for a custom Filter plugin which provides the same functionality as the Drupal 7 filter: converting a shortcode to an image embed.
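
The heart of that plugin is its process() method, which drops into a FilterBase subclass like the skeleton above. A rough sketch, assuming the Drupal 7 Media module's JSON macro format and omitting error handling:

use Drupal\file\Entity\File;
use Drupal\filter\FilterProcessResult;

public function process($text, $langcode) {
  // Replace Drupal 7 media macros, e.g. [[{"fid":"123", ...}]], with images.
  $text = preg_replace_callback('/\[\[(\{.+?\})\]\]/s', function (array $matches) {
    $macro = json_decode($matches[1], TRUE);
    if (!empty($macro['fid']) && ($file = File::load($macro['fid']))) {
      return '<img src="' . file_create_url($file->getFileUri()) . '" alt="" />';
    }
    return '';
  }, $text);
  return new FilterProcessResult($text);
}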

The Gist Filter module has yet to be ported, so I quickly wrote my own version.

The same was true for the OEmbed module. A Drupal 8 port had been started but was not stable. Luckily, I was able to use the alb/oembed library and the OEmbed module's Filter plugin; all I needed to do was change library references. I made sure to document this: https://www.drupal.org/node/2884829#comment-12203367.

A full example of my workarounds can be found at https://github.com/mglaman/glamanate.com/tree/master/web/modules/custom/

Executing the migration

Other than that, it was as simple as running the following command provided by Migrate Tools:

drush migrate-upgrade \
  --legacy-db-url=mysql://root:root@mysql.dev/glamanate_d7 \
  --legacy-root=http://glamanate.com

I also found it useful to use a custom installation profile during the trial-and-error process. I patched Drupal 8 to allow a profile to be installed from existing config. When you run a migration, all configuration is migrated. This allowed me to run my migrations, tweak config, export it, and then re-import it when the process was done.
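
That trial-and-error loop looked roughly like the following (a sketch using Drush 8 command names):

# After a migration run, tweak configuration in the UI, then export it.
drush config-export -y
# On the next rebuild, pull the exported configuration back in.
drush config-import -y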

Summary and what is next

Overall, it was fairly smooth. I spent the most time porting over the Media embed filter. I didn't bother to fix the actual post content, instead porting the functionality verbatim. I'd say I spent roughly a day overall on this migration. More time was spent wiring up a new theme, which is not finished.

My main motivation was to have an improved authoring experience, thanks to Drupal 8 and CKEditor. I'd like to expand my publishing efforts to provide more tutorials and deep dives into topics.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
