Mar 16 2019

Drush 9 has removed dynamic site aliases. Site aliases are hardcoded in YAML files rather than declared in PHP. Sadly, that means that many tricks you could do with the declaration of the site aliases are no longer available.

The only grouping possible is based on the YAML filename. So for example, with the Acquia Cloud Site Factory site aliases generated by the 'blt recipes:aliases:init:acquia' command, you can run a command on the same site across different environments.

But what you can't do is run a command on all the sites in one environment.

One use case for this is checking whether a module is enabled on any sites, so you know that it's safe to remove it from the codebase.

Currently, this is quite a laborious process, as 'drush pm-list' needs to be run for each site.

With environment aliases, this would be a one liner:

drush @hypothetical-env-alias pm-list | ag some_module

('ag' is the very useful silver searcher unix command, which is almost the same as the also excellent 'ack' but faster, and both are much better than grep.)

While site aliases are fixed, they can be altered with Drush hooks, so I considered whether these might allow aliases to be declared dynamically, perhaps controlled by a command option. There's an example of altering aliases with a hook in the Drush code.

In the meantime, a much simpler solution is to use xargs, which I have recently found is extremely useful in all sorts of situations. Because this allows you to run one command multiple times with a set of parameters, all you need to do is pass it a list of site aliases. Fortunately, the 'drush sa' command has lots of formatting options, and one of them gives us just what we need, a list of aliases with one on each line:

drush sa --format=list 

That gives us all the aliases, and we probably don't want that. So here's where ag first comes into play, as we can filter the list, for example, to only run on live sites (I'm using my ACSF aliases here as an example):

drush sa --format=list | ag 01live

Now we have a filtered list of aliases, and we can feed that into xargs:

drush sa --format=list | ag 01live | xargs -I % drush % pm-list

Normally, xargs puts the input parameter at the end of its command, but here we want it inserted just after the 'drush' command. The -I parameter allows us to specify a placeholder where the input parameter goes, so:

xargs -I % drush % pm-list

says that we want the site name to go where the '%' is, and means that xargs will run:

drush SITE-ALIAS pm-list

with each value it receives, in this case, each site alias.

Another thing we will do with xargs is set the -t parameter, which outputs each actual command it executes on STDERR. That acts as a heading in the output, so we can clearly see which site is outputting what.

Finally, we can use ag a second time to filter the module list down to just the module we want to find out about:

drush sa --format=list | ag live | xargs -t -I % drush % pml | ag some_module 

The nice thing about the -t parameter is that as it's STDERR, it's not affected by the final pipe to ag for filtering output. So the output will consist of the drush command for the site, followed by the filtered output.

And hey presto.

In conclusion: dynamic site aliases in Drush were nice, but the maintainers removed them (as far as I can gather) because they were a mess to implement, and removing them vastly simplified things. Doing the equivalent with xargs took a bit of figuring out, but once you know how to do it, it's actually a much more powerful way to work with multiple sites at once.

Jan 27 2019

When you check out a branch or commit with git, two things happen: git changes the files in the repository folder, and changes the file that tells it what is currently checked out. In git terms, it changes what HEAD points to. In practical terms, it updates the .git/HEAD text file to contain a different reference.

But these two operations can be done separately from one another, so that while the files correspond to one commit, git thinks that the HEAD commit is another one.

This of course puts your git repository in an unstable, even unnatural state. But there are useful reasons for doing this, and one of these comes up in the operation of dorgflow, my tool for working with git and drupal.org patches.

Dorgflow creates a local branch for a drupal.org issue, and creates a commit for each patch, so you end up with a nice sequential representation of the work that's been done so far.

But every patch that's uploaded to an issue is a patch against the master branch, so we can't just apply the patches one by one and make commits: the first patch will apply, and all the subsequent ones will fail.

What we need is for the files to be in the same state as the master branch, so that the patch applies, but when we make a commit, it needs to be a new commit on the feature branch:

 * [Feature branch] Patch 1 <- Make patch 2 commit on top of this commit.
/
* [master branch] Latest master commit. <- Apply patch 2 to these files.

In git terms, we want the current files (the working tree, as git calls it) to be on the master branch, while git's HEAD is on the feature branch.

For my first attempt at getting this to work, I did the following:

  1. Put git on the feature branch.
  2. Check out the master branch's files, with git checkout master -- .

When you use the git checkout command with a file or files specified, it doesn't move HEAD, but instead changes just those files to match the given commit or branch. If you give '.' as the file to check out, then it does that for all of the repository's files. In effect, you get the files to look like the given commit, in our case the master branch.

This is what we want, but it has one crucial flaw. Suppose patch 1 added a new file, foo.php. When we check out master's files, foo.php is not changed, because it's not on master. There's nothing on master to overwrite it. The 'git checkout master -- .' doesn't actually say 'make all the files look like master', it says 'check out all the files from master'. foo.php isn't on master, and so it's simply left alone.

Suppose now that patch 2 also adds the foo.php file, which it most likely will, since in most cases, a newer patch incorporates all the work of the previous one.

Applying patch 2 to the files in their current state will fail, because patch 2 tries to create the file foo.php, but it's already there.

So this approach is no good. I was stuck with this problem with dorgflow for months, until I had a brainwave: instead of staying on the feature branch and changing the files to look like master, why not check out the master branch, but tell git that the HEAD is the feature branch?

That solves the problem of the foo.php file: when you check out master, the foo.php that's at the patch 1 commit vanishes, because it doesn't exist on master. So applying patch 2 will be fine.

The only remaining question was how do you tell git it's on a different branch without doing a checkout of files? It turns out this is a simple plumbing command: git symbolic-ref HEAD refs/heads/BRANCH. This just tells git to change the reference that HEAD points to, and that's how git stores the fact that a particular branch is the current one.
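
Putting the two steps together, the sequence dorgflow effectively performs looks like this (the branch and patch names here are invented):

$ git checkout master
$ git symbolic-ref HEAD refs/heads/123456-some-issue
$ git apply 123456-2.patch
$ git add .
$ git commit -m "Patch 2 from drupal.org."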

This was a simple change to make in dorgflow (it was actually more work to update the tests so the mocks had the correct method call expectations!).

This means that the latest release of dorgflow now correctly handles applying a sequence of issue patches that add files.

Jan 09 2019

It's no secret that I find Composer a very troublesome piece of software to work with.

I have issues with Composer on two fronts. First, its output is extremely user-unfriendly, such as the long lists of impenetrable statements about dependencies that it produces when it tells you why it can't make a change you request. Second, many Composer commands have unwanted side-effects, and these work against the practice that changes to your codebase should be as simple as possible for the sake of developer sanity, testing, and user acceptance.

I recently discovered that removing packages is one such task where Composer has ideas of its own. A command such as composer remove drupal/foo will take it upon itself to also update some apparently unrelated packages, meaning that you either have to manage the deployment of these updates as part of your uninstallation of a module, or roll up your sleeves and hack into the mess Composer has made of your codebase.

Guess which option I went for.

Step 1: Remove the module you actually want to remove

Let's suppose we want to remove the Drupal module 'foo' from the codebase because we're no longer using it:

$ composer remove drupal/foo

This will have two side effects, one of which you might want, and one of which you definitely don't.

Side effect 1: dependent packages are removed

This is fine, in theory. You probably don't need the modules that are dependencies of foo. Except... Composer knows about dependencies declared in composer.json, which for Drupal modules might be different from the dependencies declared in module info.yml files (if maintainers haven't been careful to ensure they match). UPDATE: I've been informed in comments that drupal.org's packaging process ensures these are kept in sync. So that's one less thing to worry about!

Furthermore, Composer doesn't know about Drupal configuration dependencies. You could have the situation where you installed module Foo, which had a dependency on Bar, so you installed that too. But then you found Bar was quite useful in itself, and you've created content and configuration on your site that depends on Bar. Ideally, at that point, you should have declared Bar explicitly in your project's root composer.json, but most likely, you haven't.

So at this point, you should go through Composer's output of what it's removed, and check that your site doesn't still have any of those Drupal modules enabled.

I recommend taking the list of Drupal modules that Composer has just told you it's removed in addition to the requested one, and checking their status on your live site:

$ drush pml | ag MODULE

If you find that any modules are still enabled, then revert the changes you've just made with the remove command, and declare the modules in your root composer.json, copying the declaration from the composer.json file of the module you are removing. Then start step 1 again.

Side effect 2: unrelated packages are updated

This is undesirable basically because any package update is something that has to be evaluated and tested before it's deployed. Having that happen as part of a package removal turns what should be a straightforward task into something complex and unpredictable. It forces the developer to handle two operations that should be separate as if they were one.

(It turns out that the maintainers of Composer don't even consider this to be a problem, and as I have unfortunately come to expect, the issue on github is a fine example of bad maintainership (for the nadir, see the issue on the use of JSON as a format for the main composer file) -- dismissing the problems that users explain they have, claiming the problems are by design, and so on.)

So to revert this, you need to pick apart the changes Composer has made, and reverse some of them.

Before you go any further, commit everything that Composer changed with the remove command. In my preferred method of operation, that means all the files, including the modules folder and the vendor folder. I know that Composer recommends you don't do that, but frankly I think trusting Composer not to damage your codebase on a whim is folly: you need to be able to back out of any mess it may make.

Step 2: Repair composer.lock

The composer.lock file is the record of the current state of your packages, so to undo some of the changes Composer made, we undo some of the changes made to this file, then get Composer to update based on the lock.

First, restore composer.lock to how it was before you started:

$ git checkout HEAD^ composer.lock

Unstage it. I prefer a GUI for git staging and unstaging operations, but on the command line it's:

$ git reset composer.lock

Your composer lock file now looks as it did before you started.

Use either git add -p or your favourite git GUI to pick out the right bits. Understanding which bits are the 'right bits' takes a bit of mental gymnastics: overall, we want to keep the changes in the last commit that removed packages completely, but we want to discard the changes that upgrade packages.

But here we've got a reverted diff. So in terms of what we have here, we want to discard changes that re-add a package, and stage and commit the changes that downgrade packages.

When you're done staging, you should have:

  • the change to the content hash unstaged
  • chunks that are a whole package (a package being re-added) unstaged
  • chunks that change a package's version staged (be sure to get all the bits that relate to a package)

Then commit what is staged, and discard the rest.

Then do a git diff of composer.lock against your starting point: you should see only complete package removals.

Step 3: Restore packages with unrelated changes

Finally, do:

$ composer update --lock

This will restore the packages that Composer updated against your will in step 1 to their original state.

If you are committing Composer-managed packages to your repository, commit them now.

As a final sanity check, do a git diff against your starting point, like this:

$ git diff --name-status master

You should see mostly deleted files. To verify there's nothing that shouldn't be there in the changed files, do:

$ git diff --name-status master | ag '^[^D]'

You should see only composer.json, composer.lock, and the autoloader's files.

PS. If I am wrong and there IS a way to get Composer to remove a package without side-effects, please tell me.

I feel I have exhausted all the options of the remove command:

  • --no-update only changes composer.json, and makes no changes to package files at all. I'm not sure what the point of this is.
  • --no-update-with-dependencies only removes the one package, and doesn't remove any dependencies that are not required anywhere else. This leaves you having to pick through composer.json files yourself and remove dependencies individually, which defeats the purpose of a package manager!

Why is something as simple as a package removal turned into a complex operation by Composer? Honestly, I'm baffled. I've tried reasoning with the maintainers, and it's a brick wall.

PPS. Since writing this post, I've made Composer Manifest, a small Composer plugin which makes it easier to see what Composer has decided to change behind your back. Every time you do a Composer update, install, or remove, it writes a YAML file that lists all the installed packages with their versions. Committing that to your repository means you have an easy way to see exactly what's been changed and when.

Sep 09 2018

It's now just over 10 years since webchick handed maintainership of the Module Builder project over to me. I can still remember nervously approaching her at the end of a day at DrupalCon 2008, outside the building where people were chatting, and asking her if she'd had time to look at the patches I'd posted for it. I was at my first DrupalCon; she'd just been named as the core co-maintainer for the work about to start on Drupal 7. No, she hadn't, she said, and she instantly came to the conclusion that she'd never have the time to look at them, and so got her laptop out of her rucksack and there and then edited the project node to make me the maintainer.

Given the longevity of the project, I am often surprised when I talk to people that they don't know what its crucial advantage is over the other Drupal code generating tools that now also exist. Drupal Code Builder, as it's now called, is fundamentally different from Drupal Console's Generator command, and the Drupal Code Generator library that’s included in Drush 9.

I feel that perhaps this requires a buzzword, to make it more noticeable and memorable.

So here is one: Drupal Code Builder is an analytical code generator.

I'm going to add a second one: the Drupal Code Builder Drush commands and Module Builder (which still exists, though it is now just a module that provides a Drupal-based UI for the Drupal Code Builder library) are adaptive.

What do I mean by those?

When you first install Drupal Code Builder, it has no information about Drupal hooks, or plugin types, or services. There is no data in the codebase on how to generate a hook_form_alter() implementation, or a Block plugin, or how to inject the entity_type.manager service.

Before you can generate code, you need to run a command or submit an admin form which will analyse your entire codebase, and store the resulting data on hooks, plugin types, services, and other Drupal structures (the list of what is analysed keeps growing).

The consequence of this is huge: Drupal Code Builder is not limited to core hooks, or to the hooks that it's been programmed for. Drupal Code Builder knows all the hooks in your codebase: from core, from contrib modules, even from any of your own custom code that invents hooks.

Now repeat that statement for plugin types, for services, and so on.

And that analysis process can, and should, be repeated when you download a new module, because then any Drupal structures (hooks, plugin types, etc.) that are defined in new modules get added to the list of what Drupal Code Builder can generate.

This means that you can use Drupal Code Builder to do something like this:

  1. Generate a new plugin type. This creates a plugin manager service class and declaration, an annotation class that defines the plugin annotation, and an interface and base class for the plugins.
  2. Re-run DCB's analysis.
  3. Generate a plugin of the type you've just made.

When I say that this technique of analysis makes Drupal Code Builder more powerful than other code generators, I'm not even blowing my own trumpet: the module I was handed in 2008 did the same. Back then, in the Drupal 6 era, it needed to connect to drupal.org's CVS repository to download the api.php files that document a module's hooks (which is why the Drush command for updating the code analysis prior to Drush 9 is called mb-download). For Drupal 7, the api.php files were moved into the core Drupal codebase, so from that point the process would scan the site's codebase directly. For Drupal 8, I added lots more code analysis, of plugin types and services, but still, the original idea remains: Drupal Code Builder knows how to detect Drupal structures, so it can know more structures than can be hardcoded.

This analysis can get pretty complex. While for hooks it's just a case of running a regex on api.php files, detecting information about plugin types uses all sorts of techniques, such as searching the class parentage of all the plugins of a type to try to guess whether there is a plugin base class. It's not always perfect, and it could always use more refinement, but it's a more powerful approach than hardcoding a necessarily finite number of cases. (If only plugin types were declared in some way, or if the documentation for them were systematic in the way hook documentation is, this would all be so much simpler, and much more importantly, it would be more accurate!)
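
For instance, the hook detection boils down to something like this (a simplified sketch; the real pattern and parsing handle more cases):

  // Pull hook names out of the contents of an api.php documentation file.
  preg_match_all('/^function (hook_\w+)\(/m', $api_php_file_contents, $matches);
  $hook_names = $matches[1];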

My second buzzword, that UIs for Drupal Code Builder are adaptive, describes the way that neither the Drush command nor the Drupal module needs to know what Drupal Code Builder can build. They merely know the API for describing properties, and how to present them to the user, so they can then pass the input values to Drupal Code Builder to get the generated code.

This is analogous to the way that Form API doesn’t know anything about a particular form, just about the different kinds of form elements.

This isn’t as exciting or indeed relevant to the end-user, but it does mean that development on Drupal Code Builder can move quickly, as adding new things to generate doesn’t require coordinated releases of software packages for the UI. In fact, I think nearly every new release of Drupal Code Builder has featured some sort of new code generating functionality, while keeping the API the same.

For example, this used to be the UI for adding a route, in Drush and in Module Builder:

Screenshot showing Drush UI

Screenshot showing Module Builder UI

A later release of Drupal Code Builder turned the UI into this, without the Drush command or Module Builder needing their code to be updated:

Screenshot showing Drush UI

Screenshot showing Module Builder UI

Similarly, an even later version of Drupal Code Builder added code analysis to detect all top-level admin menu items, and so the admin settings form generation now lets you pick where to place the form in the menu.

It would be nice to think that a couple of buzzwords could gain Drupal Code Builder more attention and more users, but I fear that Drupal Console’s generator has got rather more traction in Drupal 8, despite its huge limitation. It’s disappointing that Drush has now added a third code generator to the mix, to even further dilute the ecosystem, and that it’s just as limited by hardcoding.

So where do we go from here?

Well, people get attached to UIs, but they don’t care much about what powers them, especially in this day and age of Composer, where adding dependencies no longer creates an imposition on the end-user.

So I suggest the following: the code analysis portion of Drupal Code Builder could be extracted to a new package. It doesn’t depend on anything on the code generation side, so that should be fairly simple. It provides an API that supports analysis being run in batches, which could maybe do with being spruced up, but it’s good enough for a 1.0.0 version for now.

Then, any code generating system would be able to use it. Both Console and Drush could replace their hardcoded lists of hooks and plugins with analytical data, while keeping the commands they expose to the end-user unchanged.

I’ll be at DrupalEurope in Darmstadt, where I’ll be running a BoF to discuss where we go next with code generation on Drupal.

Jun 26 2018

One of the big challenges with updating Drupal Code Builder for Drupal 8 has been the sheer variety of code to be output. On earlier versions of Drupal, it was just about hooks, and all that needed to be done was to take the API documentation code and replace 'hook_' with the module name. There were info files too, and Drupal 7 added the placing of hooks into different .inc files, but compared to this, Drupal 8 has things like plugin annotations, fluent method calls for content entity baseFieldDefinitions(), FormAPI arrays, not to mention PHP class methods, and more.

But one of the things I enjoy about working on DCB is that I am free to experiment with different ideas, much more so than with work on core or even contrib. It is its own system, without any need to work with what a framework supplies, and it has no need to be extensible. So I can try a new way of doing things as often as I want, and clean up when I've had time to figure out which way works best.

For example, up until recently, the code for a field definition in baseFieldDefinitions() was getting generated in three different ways.

First, the old-fashioned way of doing it line by line, then concatenating the array with a "\n" to make the final code. This is the way most of the old code in DCB was done, but with things that need handling of terminal commas or semicolons, and nesting indents and so on, it was starting to get really clunky.

So then I tried writing something loosely inspired by Drupal's RenderAPI. Because that's a nice big hammer that seems to fit a lot of nails: make a big array of data, chuck your stuff into it, then hand it over to something that makes the output. Except, not so good. Writing the code to make the right sort of array was fiddly. The array of data needed to combine actual data and metadata (such as the class of an annotation), which added levels to the nesting.

Then I hit on an idea: baseFieldDefinitions() fields are a fluent interface, like this:

$fields['changed'] = BaseFieldDefinition::create('changed')
  ->setLabel(t('Changed'))
  ->setDescription(t('The time that the node was last edited.'))
  ->setRevisionable(TRUE)
  ->setTranslatable(TRUE);

What if the code that builds this could be the same, to the point where you could just copy-paste code from, say, the node entity class, and make a few tweaks? Creating the code in DCB would be much simpler, and having the DCB code look like the output code would make debugging easier too.

Using a class with the magic __call() method lets us have just that: a renderer object that treats a method call as some information about code to render. Here's what the builder code for the base field definition code looks like now:

$changed_field_calls = new FluentMethodCall;
$changed_field_calls
  ->setLabel(FluentMethodCall::t('Changed'))
  ->setDescription(FluentMethodCall::t('The time that the entity was last edited.'));
if ($use_revisionable) {
  $changed_field_calls->setRevisionable(TRUE);
}
if ($use_translatable) {
  $changed_field_calls->setTranslatable(TRUE);
}
$method_body = array_merge($method_body, $changed_field_calls->getCodeLines());

It's not yet perfect, as the first line isn't done by this, and the handling of the t() calls could do with some polish; probably by creating a separate class called something like FunctionCall, such that FunctionCall::somefunction() returns the code for a call to somefunction().

But the efficiency and elegance of this approach has led me to devise a new principle for DCB: builder code should look as much as possible like the code that it outputs.

So applying this approach to outputting annotations, the code now looks like this:

$annotation = ClassAnnotation::ContentEntityType([
  'id' => 'cat',
  'label' => ClassAnnotation::Translation("Cat"),
  'label_count' => ClassAnnotation::PluralTranslation([
    'singular' => "@count content item",
    'plural' => "@count content items",
  ]),
]);
$annotation_lines = $annotation->render();

Magic methods used there as well, this time for static calls. The similarity to the output code isn't as good, as annotations aren't PHP code, but it's still close enough that you can copy the code you want to output, make a few simple changes, and you have the builder code.

This work has embodied another principle that I've come to follow: complexity and ugliness should be pushed down, hidden, and encapsulated. Here, the ClassAnnotation and FluentMethodCall classes have to do fiddly stuff like quoting string values and recursing into nested arrays. They have to handle special cases, such as the last line of a fluent call needing a semicolon while the last line of an annotation has no trailing comma. All of that is hidden from the code that uses them, which can get on with doing the interesting bits.
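
For the curious, the core of the trick is nothing more than PHP's __call() magic method recording each call as data instead of executing anything. Here's a simplified sketch (not DCB's actual class, which also handles static calls like t() and the final rendering in getCodeLines()):

class FluentMethodCall {

  // The recorded calls, each an array of method name and arguments.
  protected $calls = [];

  public function __call($name, $arguments) {
    // Don't execute anything: just record the call, to be rendered as code later.
    $this->calls[] = [$name, $arguments];
    // Return $this so calls can be chained, mirroring the fluent interface
    // of the code being generated.
    return $this;
  }

}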

May 15 2018

There's a great module called the debug module. I'd give you the link… but it doesn't exist. Or rather, it's not a module you download. It's a module you write yourself, and write again, over and over again.

Do you ever want to inspect the result of a method call, or the data you get back from a service, the result of a query, or the result of some other procedure, without having to wade through the steps in the UI, submit forms, and so on?

This is where the debug module comes in. It's just a single page which outputs whatever code you happen to want to poke around with at the time. On Drupal 8, that page is made with:

  • an info.yml file (sketched just below)
  • a routing file
  • a file containing the route's callback. You could use a controller class for this, but it's easier to have the callback just be a plain old function in the module file, as there's no need to drill down a folder structure in a text editor to reach it.

(You could quickly whip this up with Module Builder!)
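
Here's roughly what the info.yml file looks like (the name and description are whatever you like):

name: Joachim Debug
type: module
description: 'Local-only page for poking at code. Never deploy this.'
core: 8.x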

Here's what my router file looks like:

joachim_debug:
  path: '/joachim-debug'
  defaults:
    _controller: 'joachim_debug_page'
  options:
    _admin_route: TRUE
  requirements:
    _access: 'TRUE'


My debug module is called 'joachim_debug'; you might want to call yours something else. Here you can see we're granting access unconditionally, so that whichever user I happen to be logged in as (or none) can see the page. That's of course completely insecure, especially as we're going to output all sorts of internals. But this module is only meant to be run on your local environment and you should on no account commit it to your repository.

I don't want to worry about access, and I want the admin theme so the site theme doesn't get in the way of debug output or affect performance.

If you sometimes want to see themed output, you can add a second route with a different path, which serves up the same content but without the admin theme option:

joachim_debug_theme:
  path: '/joachim-debug-theme'
  defaults:
    _controller: 'joachim_debug_page'
  requirements:
    _access: 'TRUE'

The module file starts off looking like this:

opcache_reset();

function joachim_debug_page() {
  $build = [
    '#markup' => "aaaaarrrgh!!!!",
  ];

  /*
  // ============================ TEMPLATE


  return $build;
  */

  return $build;
}


The commented-out section is there for me to quickly copy and paste a new section of code anytime I want to do something different. I always leave the old code in below the return, just in case I want to go back to it later on, or copy-paste snippets from it.

Back in the Drupal 6 and 7 days, the return value of the callback function was merely a string. On Drupal 8, it has to be a proper render array. The return text used to be 'It's going wrong!' but these days it's the more expressive 'aaaaarrrgh'. Most of the time, the output I want will be the result of a dsm() call, so the $build is there just so Drupal's routing system doesn't complain about a route callback not returning anything.

Here are some examples of the sort of code I might have in here.

  // ============================ Route provider
  $route_provider = \Drupal::service('router.route_provider');

  $path = 'node/%/edit';
  $rs = $route_provider->getRoutesByPattern($path);
  dsm($rs);

  return $build;

Here I wanted to see what the route provider service returns. (I have no idea why; this is just something I found in the very long list of old code in my module's menu callback, pushed down by newer stuff.)

  // ============================ order receipt email
  $order = entity_load('commerce_order', 3);

  $build = [
    '#theme' => 'commerce_order_receipt',
    '#order_entity' => $order,
    '#totals' => \Drupal::service('commerce_order.order_total_summary')->buildTotals($order),
  ];

  return $build;

I wanted to work with the order receipt emails that Commerce sends. But I don't want to have to make a purchase, complete an order, and then look in the mail logger just to see the email! This is quicker: all I have to do is load up my debug module's page (mine is at the path 'joachim-debug', which is easy for me to remember; you might want to have yours somewhere else), and vavoom, there's the rendered email. I can tweak the template, change the order, and just reload the page to see the effect.

As you can see, it's quick and simple. There are no safety checks, so if you ever put code here that does something destructive (such as an entity delete, which is useful for quickly deleting entities in bulk), be sure to comment out the code once you're done with it, or your next reload might blow up! And of course, it's only ever to be used on your local environment; never on shared development sites, and certainly never on production!

I once read something about how a crucial piece of functionality required for programming, and more specifically, for ease of learning to program with a language or a framework, is being able to see and understand the outcomes of the code you are writing. In Drupal 8 more than ever, being able to understand the systems you're working with is vital. There are tools such as debuggers and the Devel and Devel Contrib modules' information pages, but sometimes quick and dirty does the job too.

Feb 21 2018

Dependency injection is a pattern that adds a lot of boilerplate code, but Drupal Code Builder makes it easy to add injected services to plugins, forms, and service classes.
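
For a plugin, that boilerplate is the familiar pair of a static create() factory method and a constructor. Here's a sketch of the pattern, with invented class and property names, injecting the entity type manager as an example service:

use Drupal\Core\Entity\EntityTypeManagerInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\Core\Plugin\PluginBase;
use Symfony\Component\DependencyInjection\ContainerInterface;

class ExamplePlugin extends PluginBase implements ContainerFactoryPluginInterface {

  // The injected service.
  protected $entityTypeManager;

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      // Each injected service is pulled out of the container here...
      $container->get('entity_type.manager')
    );
  }

  public function __construct(array $configuration, $plugin_id, $plugin_definition, EntityTypeManagerInterface $entity_type_manager) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    // ... and stored on the object in the constructor.
    $this->entityTypeManager = $entity_type_manager;
  }

}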

Now that the Drupal 8 version of Module Builder (the Drupal front-end to the Drupal Code Builder library) uses an autocomplete for service names in the edit form, adding injected services is even easier, and any of the hundreds of services in your site’s codebase (443 on my local sandbox Drupal 8 site!) can be injected.

I often use this when I want to add a service to an existing plugin: re-generate the code, and copy-paste the new code I need.

This is an area in which Module Builder now outshines its Drush counterpart: unlike the Drush front end for Drupal Code Builder, which takes its input parameters afresh every time it generates code, Module Builder lets you save your settings for the generated module (as a config entity).

So you can return to the plugin you generated to start with, add an extra service to it, and generate the code again. You can copy and paste, or have Module Builder write the file and then use git to revert any custom code it's removed. (The ability to insert generated code into existing files is on my list of desirable features, but is realistically a long way off, as it would be rather complex, and require the addition of a code parsing library.)

But why stop at generating code for your own modules? I recently filed an issue on Search API, suggesting that its plugins could do with tweaking to follow the standard core pattern of a static factory method and constructor, rather than rely on setters for injection. It’s not a complex change, but a lot of code churn. Then it occurred to me: Drupal Code Builder can generate that boilerplate code: simply create a module in Module Builder called ‘search_api’, and then add a plugin with the name of one that is already in Search API, and then set its injected services to the services the real plugin needs.

Drupal Code Builder already knows how to build a Search API plugin: its code analysis detects the right plugin base class and annotation to use, and also any parameters that the constructor method should pass up to the base class.

So it’s pretty quick to copy the plugin name and service names from Search API’s plugin class to the form in Module Builder, and then save and generate the code, and then copy the generated factory methods back to Search API to make a patch.

I'm now rather glad I decided to use config entities for generated entities. Originally, I did that just because it was a quick and convenient way to get storage for serialized data (and since then I've discovered in other work that map fields are broken in D8, so I'm very glad I didn't try to make them content entities!). But the ability to save the generating settings for a module, and then return to them later to add more, has proved very useful.

Jan 12 2018

As far as I know, there's nothing (yet) for triggering an arbitrary event. The complication is that every event uses a unique event class, whose constructor requires specific things passing, such as entities pertaining to the event.

Today I wanted to test the emails that Commerce sends when an order completes, and to avoid having to keep buying a product and sending it through checkout, I figured I'd mock the event object with Prophecy, mocking the methods that the OrderReceiptSubscriber calls (this is the class that does the sending of the order emails). Prophecy is a unit testing tool, but its objects can be created outside of PHPUnit quite easily.

Here's my quick code:

  $order = entity_load('commerce_order', ORDER_ID);

  $prophet = new \Prophecy\Prophet;
  $event = $prophet->prophesize('Drupal\state_machine\Event\WorkflowTransitionEvent');

  $event->getEntity()->willReturn($order);

  $subscriber = \Drupal::service('commerce_order.order_receipt_subscriber');

  $subscriber->sendOrderReceipt($event->reveal());

Could some sort of generic tool be created for triggering any event in Drupal? Perhaps. We could use reflection to detect the methods on the event class, but at some point we need some real data for the event listeners to do something with. Here, I needed to load a specific order entity and to know which method on the event class returns it. For another event, I'd need some completely different entities and different methods.

We could maybe detect the type that each event method returns (by sniffing in the docblock... once we go full PHP 7, we could use reflection on the return type), and then present an admin UI that shows a form element for each method, allowing you to enter an entity ID or a scalar value.

Still, you'd need to look at the code you want to run, the event listener, to know which of those you'd actually want to fill in.

Would it save more time than cobbling together code like the above? Only if you multiply it by a sufficiently large number of developers, as is the case with many developer tools.

It's the sort of idea I might have tinkered with back in the days when I had time. As it is, I'm just going to throw this idea out in the open.

Nov 13 2017

I started adding unit tests to Drupal Code Builder about 3 years ago, and I’ve been gradually expanding on them ever since. It’s made refactoring the code a pleasant rather than stressful experience.

However, while all generator types are covered, the level of detail the tests go into isn’t that deep. Back when I wrote the tests, they mostly needed to check for hook implementations that were generated, and so quick and dirty regexes on the generated code did the job: match 'mymodule_form_alter' in the generated code, basically. I gradually extended those to cover things like class declarations and methods, but that approach is very much cracking at the seams.

So it’s time to switch to something more powerful, and more suited to the task.

I've already removed my frankly hideous attempt at verifying that generated code is correctly formed PHP, replacing it with a call to PHP's own code linter. My own code was running the generated PHP code files through eval() (yes, I know!) to check nothing crashed, which was quick and worked, but only up to a point: tests couldn't create the same function twice, as eval()ing code that contains a function declaration brings it into the global namespace, and it didn't work at all for classes, because while tests are being run, the parent classes in Drupal core or contrib aren't present.
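
The replacement is essentially a matter of writing the generated code to a temporary file and running it through php -l. Something along these lines (a sketch from within a PHPUnit test method; $generated_php_code is assumed to hold the code being checked):

  // Write the generated code to a temporary file and lint it with PHP's
  // built-in linter, failing the test if the exit code is non-zero.
  $temp_file = tempnam(sys_get_temp_dir(), 'dcb');
  file_put_contents($temp_file, $generated_php_code);
  exec('php -l ' . escapeshellarg($temp_file) . ' 2>&1', $output, $exit_code);
  unlink($temp_file);
  $this->assertEquals(0, $exit_code, "Invalid PHP was generated:\n" . implode("\n", $output));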

It's already proved worthwhile, as once I'd converted the tests, I found an error in the generated code: a stray quote mark in base field definitions for a content entity, which my approach wasn't picking up, and never would have.

The second phase is going to be to use PHPCS and Drupal Coder to check that generated code follows Drupal coding standards. I'm currently getting that to work in my testing base class, although it might be a while before I push it, as I suspect it's going to complain about quite a few nitpicks in the generated code that I'll then have to spend some time fixing.

The third phase (this is a 3-phase programme, all the best ones are) is going to be to look into using PHP-Parser to check for the presence of functions and methods in the code, rather than my regex-based approach. This is going to allow for much more thorough checking of the generated code, covering things such as method order in the code, service injection, and more.

After that, it'll be back to some more refactoring and code clean-up, and then some more code generators! I have a few ideas of what else Drupal Code Builder could generate, but more ideas are welcome in the issue queue on github.

Sep 15 2017

I've completely revamped the Drush commands for Drupal Code Builder:

  • First, they're now in their own project on Github
  • Second, I've rewritten them completely for Drush 9, and they're now fully interactive.
  • Third, they are now geared towards adding to existing modules, rather than generating a module in a single shot. That approach made sense in the days of Drupal 6 when it was just hook implementations, but I increasingly find I want to add a plugin, a service, or a form to a module I've already started.

The downside is that installing these is rather tricky at the moment due to some current limitations in Drush 9 beta; see details in the project README, which has full instructions for workarounds.

Now that's out of the way, I'm pushing on with some new generators for the Drupal Code Builder library itself. On my list is:

  • plugin types (as in the plugin manager service, base class and interface, and declaration for Plugins module)
  • entity type
  • entity type handlers
  • your suggestions in the issue queue...

And of course more unit tests and refactoring of the codebase.

Feb 22 2017

I’ve written previously about git workflow for working on drupal.org patches, and about how we don’t necessarily need to move to a github-style system on drupal.org, we just maybe need better tools for our existing workflow. It’s true that much of it is repetitive, but then repetitive tasks are ripe for automation. In the two years since I released Dorgpatch, a shell script that handles the making of patches for drupal.org issues, I’ve been thinking of how much more of the drupal.org patch workflow could be automated.

Now, I have released a new script, Dorgflow, and the answer is just about everything. The only thing that Dorgflow doesn’t automate is uploading the patch to drupal.org (and that’s because drupal.org’s REST API is read-only). Oh, and writing the code to actually fix bugs or create new features. You still have to do that yourself, along with your cup of coffee.

So assuming you’ve made your own hot beverage of choice, how does Dorgflow work?

Simply! To start with, you need to have an up to date git clone of the project you want to work on, be it Drupal core or a contrib project.

To start work on an issue, just do:

$ dorgflow https://www.drupal.org/node/123456

You can copy and paste the URL from your browser. It doesn’t matter if it has an anchor link on the end, so if you followed a link from your issue tracker and it has ‘#new’ at the end, or clicked down to a comment and it has ‘#comment-1234’ that’s all fine.

The first thing this command does is make a new git branch for you, using the issue number and title. It then also downloads and applies all the patch files from the issue node, and makes a commit for each one. Your local git now shows you the history of the work on the issue. (Note though that if a patch no longer applies against the main branch, then it's skipped, and if a patch has been set to not be displayed on the issue's file list, then it's skipped too.)

Let’s see how this works with an actual issue. Today I wanted to review the patch on an issue for Token module. The issue URL is https://www.drupal.org/node/2782605. So I did:

$ dorgflow https://www.drupal.org/node/2782605

That got me a git history like this:

  * 6d07524 (2782605-Move-list-of-available-tokens-from-Help-to-Reports) Patch from Drupal.org. Comment: 35; URL: https://www.drupal.org/node/2782605#comment-11934790; file: token-move-list-of-available-tokens-2782605-34.patch; fid 5784728. Automatic commit by dorgflow.
 * 6f8f6e0 Patch from Drupal.org. Comment: 15; URL: https://www.drupal.org/node/2782605#comment-11666939; file: 2782605-13.patch; fid 5710235. Automatic commit by dorgflow.
 /
* a3b68cc (8.x-1.x) Issue #2833328 by Berdir: Handle bubbleable metadata for block title token replacements
* [older commits…]

What we can see here is:

  • Git is now on a feature branch, called ‘2782605-Move-list-of-available-tokens-from-Help-to-Reports’. The first part is the issue number, and the rest is from the title of the issue node on drupal.org.
  • Two patches were found on the issue, and a commit was made for each one. Each patch’s commit message gives the comment index where the patch was posted, the URL to the comment, the patch filename, and the patch file entity ID (these last two are less interesting, but are used by Dorgflow when you update a feature branch with newer patches from an issue).

The commit for patch 35 will obviously only show the difference between it and patch 15, an interdiff effectively. To see what the patch actually contains, take a diff from the master branch, 8.x-1.x.

(As an aside, the trick to applying a patch that’s against 8.x-1.x to a feature branch that already has commit for a patch is that there is a way to check out files from any git commit while still keeping git’s HEAD on the current branch. So the patch applies, because the files look like 8.x-1.x, but when you make a commit, you’re on the feature branch. Details are on this Stack Overflow question.)

At this point, the feature branch is ready for work. You can make as many commits as you want. (You can rename the branch if you like, provided the ‘2782605-’ part stays at the beginning.) To make your own patch with your work, just run the Dorgflow script without any argument:

$ dorgflow

The script detects the current branch, and from that, the issue number, and then fetches the issue node from drupal.org to get the number of the next comment to use in the patch filename. All you now have to do is upload the patch, and post a comment explaining your changes.

Alternatively, if you’re a maintainer for the project, and the latest patch is ready to be committed, you can do the following to put git into a state where the patch is applied to the main development branch:

$ dorgflow commit

At that point, you just need to obtain the git commit command from the issue node. (Remember to use the Drupal standard git message format, and to check that the attribution for the work on the issue is correct!)

What if you’ve previously reviewed a patch, and now there’s a new one? Dorgflow can download new patches with this command:

$ dorgflow update

This compares your feature branch to the issue node’s patches, and any patches you don’t yet have get new commits.

If you’ve made commits for your own work as well, then effectively there’s a fork in play, as your development in your commits and the other person’s patch are divergent lines of development. Appropriately, Dorgflow creates a separate branch. Your commits are moved onto this branch, while the feature branch is rewound to the last patch that was already there, and then has the new patches applied to it, so that it now reflects work on the issue. It’s then up to you to do a git merge of these two branches in order to combine the two lines of development back into one.

Dorgflow is still being developed. There are a few ideas for further features in the issue queue on github (not to mention a couple of bugs for some of the various possible cases the update command can encounter). I’m also pondering whether it’s worth the effort to convert the script to use Symfony Console; feel free to chime in with any opinions on the issue for that.

There are tests too, as it’s pretty important that a script that does things to your git repository does what it’s supposed to (though the only command that does anything destructive is ‘dorgflow cleanup’, which of course asks for confirmation). Having now written this, I’m obviously embarking upon cleaning it up and to some extent rewriting it, though I do have the excuse that the early weeks of working on this were the days after the late nights awake with my newborn daughter, and so the early versions of the code were written in a haze of sleep deprivation. If you’d like to submit a pull request, please do check in with me first on an issue to ensure it’s not going to clash with something I’m partway through changing.

Finally, if you find this as useful as I do (this was definitely an itch I’ve been wanting to scratch for a long time, as well as being a prime case of condiment-passing), please tell other Drupal developers about it. Let’s all spend less time downloading, applying, and rolling patches, and more time writing Drupal code!

Jan 16 2017

There’s an old saying that no information architecture survives contact with the user. Or something like that. You’ll carefully design and build your content types and taxonomies, and then find that the users are actually not quite using what you’ve built in quite the way it was intended when you were building it.

And so there comes a point where you need to grit your teeth, change the structure of the site’s content, and convert existing content.

Back on Drupal 7, I wrote a plugin for Migrate which handled migrations within a single Drupal site, so for example from nodes to a custom entity type, or from one node type to another. (The patch works, though I never found the time to polish it sufficiently to be committed.)

On Drupal 8, without the time to learn the new version of Migrate, I recently had to cobble something together quickly.

Fortunately, this was just changing the type of some nodes, and where all the fields were identical on both source and destination node types. Anything more complex would definitely require Migrate.

First, I created the new node type, and cloned all its fields from the old type to the new type. Here I took the time to update some of the Field Tools module’s functionality to Drupal 8, as it pays off to have a single form to clone fields rather than have to add them to the new node type one by one.

Field Tools also copies display settings where form and view modes match (in other words, if the source bundle has a ‘teaser’ display mode configured, and the destination also has a ‘teaser’ display mode that’s enabled for custom settings, then all of the settings for the fields being cloned are copied over, with field groups too).

With all the new configuration in place, it was now time to get down to the content. This was plain and simple a hack, but one that worked fine for the case in question. Here’s how it went…

We basically want to change the bundle of a bunch of nodes. (Remember, the ‘bundle’ is the generic name for a node type. Node types are bundles, as taxonomy vocabularies are bundles.) The data for a single node is spread over lots of tables, and most of these have the bundle in them.

On Drupal 8 these tables are:

  • the entity base table
  • the entity data table
  • the entity revision data table
  • each field data table
  • each field data revision table

(It’s not entirely clear to me what the separation between base table and data table is for. It looks like it might be that base table is fields that don’t change for revisions, and data table is for fields that do. But then the language is on the base table, and that can be changed, and the created timestamp is on the data table, and while you can change that, I wouldn’t have thought that’s something that has past values kept. Answers on a postcard.)

So we’re basically going to hack the bundle column in a bunch of tables. We start by getting the names of these tables from the entity type storage:

$storage = \Drupal::service('entity_type.manager')->getStorage('node');

// Get the names of the base tables.
$base_table_names = [];
$base_table_names[] = $storage->getBaseTable();
$base_table_names[] = $storage->getDataTable();
// (Note that revision base tables don't have the bundle.)

For field tables, we need to ask the table mapping handler:

$table_mapping = \Drupal::service('entity_type.manager')->getStorage('node')
  ->getTableMapping();

// Get the names of the field tables for fields on the service node type.
$field_table_names = [];
foreach ($source_bundle_fields as $field) {
  $field_table = $table_mapping->getFieldTableName($field->getName());
  $field_table_names[] = $field_table;

  $field_storage_definition = $field->getFieldStorageDefinition();
  $field_revision_table = $table_mapping
    ->getDedicatedRevisionTableName($field_storage_definition);

  // Field revision tables DO have the bundle!
  $field_table_names[] = $field_revision_table;
}

(Note the inconsistency in which tables have a bundle field and which don’t! For that matter, surely it’s redundant in all field tables? Does it improve the indexing perhaps?)

Then, get the IDs of the nodes to update. Fortunately, in this case there were only a few, and it wasn’t necessary to write a batched hook_update_N().

// Get the node IDs to update.
$query = \Drupal::service('entity.query')->get('node');
// Your conditions here!
// In our case, page nodes with a certain field populated.
$query->condition('type', 'page');
$query->exists('field_in_question');
$service_nids = $query->execute();

And now, loop over the lists of tables names and hack away!

// Base tables have 'nid' and 'type' columns.
foreach ($base_table_names as $table_name) {
  $query = \Drupal\Core\Database\Database::getConnection('default')
    ->update($table_name)
    ->fields(['type' => 'service'])
    ->condition('nid', $service_nids, 'IN')
    ->execute();
}
// Field tables have 'entity_id' and 'bundle' columns.
foreach ($field_table_names as $table_name) {
  $query = \Drupal\Core\Database\Database::getConnection('default')
    ->update($table_name)
    ->fields(['bundle' => 'service'])
    ->condition('entity_id', $service_nids, 'IN')
    ->execute();
}

Node-specific tables use 'nid' and 'type' for their column names, because those are the base field names declared in the entity type class, whereas Field API tables use the generic 'entity_id' and 'bundle'. The mapping between these two is declared in the entity type annotation's entity_keys property.
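
For the node entity type, that mapping looks like this excerpt from the @ContentEntityType annotation on the Node class (abridged):

 *   entity_keys = {
 *     "id" = "nid",
 *     "bundle" = "type",
 *     "label" = "title",
 *   },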

This worked perfectly. The update system takes care of clearing caches, so entity caches will be fine. Other systems may need a nudge; for instance, Search API won’t notice the changed nodes and its indexes will literally need turning off and on again.

Though I do hope that the next time I have to do something like this, the amount of data justifies getting stuck into using Migrate!

May 15 2016

Drupal Code Builder is the new library which powers Module Builder. I recently split Module Builder up, so Drupal Code Builder (DCB) is the engine for generating Drupal code, while what remains in the Module Builder module is just the UI.

DCB is an extensible framework, so if you want to have DCB create scaffold code for a particular Drupal component or system, you can.

DCB's API is documented in the README. It's based on the idea of tasks: for example, list the hooks and plugin types that DCB has parsed from the site code, analyze the site code to update that list, or generate code for a module. There are Task classes, and you call public methods on these to do something.

The generators

Broadly, there are three things you want to do with DCB: collect and analyze data about a Drupal codebase to learn about hooks and plugin types, report on that data, and actually generate some code.

The Generate task class is where the work of creating code begins. The other task classes are all pretty simple, or at least self-contained, but the Generate task is accompanied by a large number of classes in the DrupalCodeBuilder\Generate namespace. You can see from the file names that these represent all the different components that make up generated code.

Furthermore, as well as all inheriting from BaseGenerator, there are hierarchies which can probably be deduced from the names alone, where more specialized generators inherit from generic ones. For example, we have:

  • File
    • PHPFile
    • ModuleCodeFile
    • PHPClassFile
      • Plugin
      • Service
    • API (this one's for your mymodule.api.php file)
    • YMLFile
    • Readme

and also:

  • PHPFunction
    • HookImplementation
    • HookMenu
    • HookPermission

However, these hierarchies are only about code re-use. In terms of PHP code, HookImplementation is only related to ModuleCodeFile by the common BaseGenerator base class. As the process of code generation takes place, there will be a tree of components that represents components containing each other, but it's important to remember that class inheritance doesn't come into it.

Also, while the generators in the hierarchies above clearly represent some tangible part of the code we're going to generate, some are more abstract, such as Module and Hooks. These aren't abstract in the OO sense, as they will get instantiated, but I think of them as abstract in the sense that they're not concrete and are responsible for code across different files. (Suggestions for a better word to describe them please!)

The process of generating code starts with a call to the Generate task's generateComponent() method. The host UI application (such as Module Builder module, or the Drush command) passes it an array of data that looks something like this:

[
  'base' => 'module',
  'root_name' => 'mymodule',
  'readable_name' => 'My module',
  'hooks' => [
    'form_alter' => TRUE,
    'install' => TRUE,
  ],
  'plugins' => [
    0 => [
      'plugin_type' => 'block',
      'plugin_name' => 'my_plugin',
      'injected_services' => [
        'current_user',
      ],
    ],
  ],
  'settings_form' => TRUE,
  'readme' => TRUE,
]

(How you get the specification for this array as a list of properties and their expected format is a detailed topic of its own, which will be covered later. For now, we're jumping in at the point where code is generated.)
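For illustration, the call from a host application boils down to something like this (a sketch from memory: the Factory and environment setup may not exactly match the current README, so check there for the real bootstrapping calls):

  // Approximate: consult the DCB README for the exact environment setup a host
  // application needs before getting a task.
  $task = \DrupalCodeBuilder\Factory::getTask('Generate', 'module');
  $files = $task->generateComponent($component_data);
  // The return value describes the files to output: names, paths, and contents.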

Assembling components

The first job for the Generate task class is to turn this array of data into a list of generator classes for the necessary components.

This list is built up in a cascade, where each component gets to request further components, and those get to request components too, and so on, until we reach components that don't request anything. We start with the root component that was initially requested, Module, let that request components, and then repeat the process.

This is best illustrated with the AdminSettingsForm generator. This implements the requiredComponents() method to request:

  • a permission
  • a router item (on Drupal 7 that's a menu item, but in DCB we refer to these as router items whatever the core Drupal version)
  • a form
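Roughly speaking, and with the array keys invented for the purposes of this post rather than taken from DCB's actual format, a requiredComponents() implementation has this kind of shape:

  // Hypothetical sketch only: the key names here are invented, not DCB's real format.
  public function requiredComponents() {
    return [
      'admin_permission' => [
        'component_type' => 'Permission',
        'permission' => 'administer mymodule',
      ],
      'admin_route' => [
        'component_type' => 'RouterItem',
        'path' => 'admin/config/mymodule',
      ],
      'settings_form' => [
        'component_type' => 'Form',
      ],
    ];
  }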

In turn, the Permission generator requests a permissions YAML file. You'll see that there are two Permission generators, each with a version suffix. The Permission7 generator requests a hook_permission() hook, which in turn requests a .module file. The Permission8 generator is somewhat simpler, and just requests a YMLFile component.

Meanwhile, the router item requests a routing.yml file on D8, and a hook_menu() on D7.

These two parts of the cascade end when we reach the various file generators: ModuleCodeFile and YMLFile don't request anything. The process that gathers all these generators works iteratively: every iteration it calls requiredComponents() on all the components the previous iteration gave it, and it only stops once an iteration produces no new components. It's safe to request the same component multiple times; in the D7 version of our example, both our hook_menu() and hook_permission() will request a ModuleCodeFile component that represents the .module file. The cascade system knows to either combine these two requests into one component, or ignore the second if it's identical to what's already been requested.
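In outline, and glossing over how component data actually gets merged, the gathering loop is something like this (instantiateComponent() is a stand-in for whatever DCB really does to create a generator object):

  // Simplified sketch of the component-gathering cascade.
  $components = [$root_name => $root_component];
  $current_batch = $components;
  while ($current_batch) {
    $new_batch = [];
    foreach ($current_batch as $component) {
      foreach ($component->requiredComponents() as $name => $spec) {
        // A repeat request for an existing component is merged or ignored.
        if (!isset($components[$name])) {
          $new_batch[$name] = instantiateComponent($name, $spec);
        }
      }
    }
    $components += $new_batch;
    $current_batch = $new_batch;
  }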

We now have a list of about a dozen or so components, each of which is an instantiated Generator object. Some represent files, some represent functions, and some like Hooks represent a more vague concept of the module 'having some hooks'. There's also the Module generator which started the whole process, whose requiredComponents() did most of the work of interpreting the given array of data.

Assembling a tree of components

The second part of the process is to assemble this flat list of components into a tree. This is where the notion of which component contains others does come into play. This is a different concept from requested components: a component can request something that it won't end up containing, as we saw with the AdminSettingsForm, which requests a permission.

The Generate task calls the containingComponent() method on each component, and this is used to assemble an array of parentage data. There's nothing fancy or recursive going on here; the tree is just an array whose keys are the identifiers of components, and whose values are arrays of the child component identifiers.
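So for a module with a couple of YAML files and an install hook, the parentage array might look something like this (the identifiers are made up for illustration, but the shape is as described):

  $tree = [
    'module' => ['routing.yml', 'permissions.yml', '.install'],
    'routing.yml' => ['admin_route'],
    'permissions.yml' => ['admin_permission'],
    '.install' => ['hook_install'],
  ];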

This tree now represents a structure of components where child items will produce code to be included in their parents. One part of this structure could be represented like this:

  • module
    • routing.yml
      • router item
    • permissions.yml
      • permission
    • .install
      • hook_install()

Some components, such as the Hooks component, are no longer around now: their job was to be a sort of broker for other components in the requesting phase, and they're no longer involved. The root component, Module, is the root of the tree. All the files we'll be outputting are its immediate children. (This is not a file hierarchy, folders are not represented here.)

Assembling file contents

We now have everything we need to start actually generating some code. This is done in a way that's very similar to Drupal's Render API: we recurse into the tree, asking each component to return some content both from itself and its children.

So for example, the router items contribute some lines to the routing.yml file, which then turns them into YAML. The .install component, which is an instance of ModuleCodeFile, produces a @file docblock, and then gets the docblock, function declaration, and function body from the hook_install component, and glues them all together.

Finally, each file component (the immediate children of the module component in the tree) gets to say what its file name and path should be.

So the Generate task has an array of data about files, where each item has a file name, file path, and file contents. This is returned to the caller to be output to the user, or written to the filesystem. Module Builder presents the files in a form, and allows the files to be written. The Drush command outputs them to terminal and optionally writes them too.

Extending it with new components

The best way to add new things for DCB to generate is to inherit from existing basic classes. If these don't provide enough flexibility, there's always a case to be made for giving them more configurable options: for example, the AdminSettingsForm class inherits from Form, but neither of those does very much for the actual generated form class, as the work for that is mostly done by the PHPClass class.

The roadmap for DCB at the moment consists of the following:

  • Generalize the injected services functionality that’s already in Plugins, so generated Form classes and Services can have them too.
  • Add Forms as a basic component that you can request to generate. (It’s currently there only as a base for the AdminSettingsForm generator.)

And as ever, keep adding tests, keep refactoring and improving the code. But I'm always interested in hearing new ideas (or you know, better yet, patches) in the issue queue.

Feb 28 2016

For Drupal 8, Module Builder is undergoing some big changes. It still builds hooks, a README file, an api.php file, permissions, an admin settings form, but now also builds plugins, services, routing items, and its ability to scan your codebase to learn about hooks invented by any of your modules is now extended to plugin types too. And it's actually been available for Drupal 8 for quite some time, but up till now only as the Drush plugin version.

I've now released the D8 version of the module, so you can use an admin UI in Drupal which lets you select the components you want in a form. Unlike Drupal 7 though, the options you enter for your module to generate are stored in a config entity, so you can generate code and then go back and tweak the settings and generate it again, as often as you like.

The big change isn't any of these though. The big change is that Module Builder is being split up.

For a very long time, the Module Builder codebase has been three things in one. Back when I added Drush support (in 2009, according to the git log), it made sense to gradually refactor the code into three parts: the Drupal module UI, the Drush commands, and the common code that does the actual work of generating code based on some parameters (such as which hooks you want, the module name, etc).

That core code has undergone a lot of changes. It's gone from just working with hooks to a framework that's extensible with new component types. So for example, it's possible to request simply 'an admin form', and the generating code knows to produce the code for the form, the admin permission, and the router item. So that's one component that in fact produces form functions, hook_permission() and hook_menu() on Drupal 7, and a form class, permissions.yml and routing.yml on Drupal 8. And Module Builder also works on multiple versions of Drupal (the code to produce Drupal 5 code is even still in there; if you have cause to try it, let me know if it still works!).

Having this multiple-version code within a Drupal module that's only for a single version is a source of problems and confusion. The 7.x-2.x version of the project contains a module that's only for Drupal 7, but also the core code and the Drush plugin that both work on all versions. It also increases maintenance work if we want the older versions to keep receiving improvements to the generating code.

Hence the split. Module Builder is being divided into three parts:

  • The core code of Module Builder has been moved to a separate library, which is called Drupal Code Builder to distinguish it.
  • The Module Builder project is from now on just a Drupal module, which requires the Drupal Code Builder library.
  • The Drush plugin will be moving too, and will also require the Drupal Code Builder library.

So to summarize, the situation is now as follows:

  • To build modules in a Drupal UI, on Drupal 8, you need:
    • The Drupal 8 release of Module Builder, which requires the Drupal Code Builder library.
  • To build modules with Drush, on any version, you need:
    • Module Builder 7.x-2.x, installed as a Drush command plugin (again, see the README). But note this will shortly be changing when the Drush command moves out of the d.org project too.
  • To build modules in a Drupal UI, on Drupal 7, you need:
    • Module Builder 7.x-2.x. I will probably release a 7.x-3.x at some point which requires the Drupal Code Builder library, so that the Drupal 7.x UI gets new features that are released in the library.

I'll be writing a post soon about how Drupal Code Builder works, so if you're interested in making Drupal Code Builder make something new, look out for that.

Jan 05 2016

Migrate module makes the assumption that for each item in your source, you're importing one item into Drupal, and that's baked in pretty deep.

So how do you handle source items where there is body text containing multiple inline images, which should be imported to file entities? For each single source item, ultimately imported as a node, you also have a variable number of images: a one-to-many source to destination relationship.

One way is to simply import the image files at the same time as the nodes whose body text they are in. In the prepareRow() method of your migration class, you effectively do a mini-import, analysing the text, fetching the file data, saving the file locally, and then getting a file ID that you can use to replace into the body text. But doing that, you don't benefit from any of Migrate's helper classes for importing files, nor can you roll these back without writing further code: this isn't a Migration, capital M, it's just an import.

The better way is to write a second migration, to run before your node migration. It may seem like extra work to have to write a second migration class, but it pays off, and besides, since they both draw from the same source data, a lot of your code is already written. Copy-paste the class definition and the constructor as far as the field mappings. The code you would have put in prepareRow() to analyse the text goes somewhere else, but we're getting ahead of ourselves.

And it turns out that Migrate does allow for your single source items to yield multiple destination items: tucked away in the module's source classes and not mentioned in the examples (which do cover a great deal, and are probably the most extensive examples of any contrib module, it must be said), there are at least two places (that I've found) where the one-to-one correspondence can be skirted around.

The one that I used is only available when you use the MigrateSourceList source type, and furthermore when your list source is a set of files. As it happened, the source data I was working with on my project came in JSON files, one file per node to import, and with a body text field which contained references to image files which also needed to be imported.

MigrateSourceList is a source type that separates out the two concepts of listing your source items and processing each one. So unlike, say, the CSV source where the list and the item processing are both provided by the one class, with MigrateSourceList, you can say something like 'my list source is a directory listing, and each item is a JSON file', or 'my list source is a JSON file, and each item is an HTML file'. The MigrateSourceList class delegates the two jobs to further classes, which allows you to mix and match them. Your migration specifies them in the constructor:

    $this->source = new MigrateSourceList(new MigrateListFiles($list_dirs, $base_dir),
      new MigrateItemFile($base_dir), $fields);

There's one more component in the system, and this is the crucial piece that allows us to have more than one destination item per source item: it's the MigrateContentParser class. This allows the files that MigrateListFiles lists to each return multiple items to MigrateSourceList.

The only implementation of this in Migrate is MigrateSimpleContentParser, which doesn't do much, so you'll want to subclass this for your particular case. It's fairly simple:

  • setContent() — perform any processing on the content of the file. In my case, I needed to run it through drupal_json_decode() and grab the body text field, since the whole file was JSON representing a node.
  • getChunkCount() — process the file content to return a count of how many items for migration are contained in it.
  • getChunkIDs() — similar to getChunkCount(), but return IDs. You will probably end up using a common helper method for this and getChunkCount(), as they do the same sort of work. The IDs you return can be anything you like; in my case they were GUIDs. They are appended to the ID for the file to form the overall source ID.
  • getChunk() — given a chunk ID (one of the ones you provided in getChunkIDs()), return the actual data for that item. Again, you may want to use a common helper. In my case, here I merely returned the ID itself, since the images to migrate were on a remote server and accessed by their GUID.
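Fleshed out, such a subclass might look roughly like this (a sketch only: the GUID extraction is waved away into a hypothetical helper, and the 'body' key is an assumption about the JSON structure):

  /**
   * Parses node JSON files, returning one chunk per inline image GUID.
   */
  class CustomJSONBodyImagesParser extends MigrateSimpleContentParser {

    protected $body;

    public function setContent($content) {
      // The whole file is JSON representing a node; we only need the body text.
      $node_data = drupal_json_decode($content);
      $this->body = $node_data['body'];
    }

    public function getChunkCount() {
      return count($this->extractImageGuids());
    }

    public function getChunkIDs() {
      return $this->extractImageGuids();
    }

    public function getChunk($id) {
      // The images live on a remote server and are fetched by GUID, so the
      // chunk data is simply the GUID itself.
      return $id;
    }

    protected function extractImageGuids() {
      // Hypothetical helper: scan $this->body for image references and return
      // an array of GUID strings. Left unimplemented here.
      return array();
    }

  }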

I submitted a patch to make it a bit easier to deal with the case where files might contain either one or many chunks (or even none): by default, a file providing only one chunk doesn't get to return a chunk ID, which wasn't working for my case. The patch (committed but not yet in a release) adds an option to override this so you always know the ID of the chunk: in my case, the GUID for the image found inside the body text, which was always needed by the image migration code.

At the other end, I needed a custom subclass of MigrateItem to deal with the data returned by getChunk(). This just needs to implement getItem(), and it can pretty much return anything you like: this is the same source data item that your migration class gets to work with.
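The item class can be tiny. Something along these lines would do (again, just a sketch):

  /**
   * Returns a source data object for a single image GUID chunk.
   */
  class CustomImageGUIDItem extends MigrateItem {

    public function getItem($id) {
      // $id is a GUID provided by the parser's getChunkIDs().
      $item = new stdClass();
      $item->guid = $id;
      return $item;
    }

  }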

So to recap, as there's quite a few classes flying around helping one another here, we have MigrateListFiles which uses a custom MigrateContentParser implementation to extract items from the source data files, with possibly more than one item from a single file, and then MigrateSourceList uses MigrateListFiles along with a custom MigrateItem implementation.

The setup code in my migration's constructor then looks like this:

    $parser = new CustomJSONBodyImagesParser();
    $list = new MigrateListFiles(
      // $list_dirs
      array($source_folder),
      // $base_dir
      $source_folder,
      // $file_mask
      '//',
      // $options
      array(),
      // MigrateContentParser $parser
      $parser
    );
    $item = new CustomImageGUIDItem();
    $this->source = new MigrateSourceList($list, $item, $fields);

The end result worked great, and the ability to rollback turned out to be very useful, when there turned out to be bad data here and there that needed to be cleaned up or skipped. But that's what always happens with migrations in my experience!

Mar 27 2015

I have a standard format for patchnames: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
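For illustration, the API lookup boils down to something like this (shown in PHP here; the endpoint and property name are as I remember them, so double-check against the drupal.org API documentation):

  // Assumption: issue nodes are exposed at this endpoint with a comment_count
  // property; adjust if the drupal.org API differs.
  $issue_number = 1234567;
  $json = file_get_contents("https://www.drupal.org/api-d7/node/$issue_number.json");
  $issue = json_decode($json);
  // The new patch will (hopefully) be attached to the next comment.
  $next_comment_number = $issue->comment_count + 1;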

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (e.g., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Nov 27 2014

There's been a lot of discussion about how we need github-like features on d.org. Will we get them? There are definitely many improvements in the pipeline for the way our issue queues work. Whether we actually need to replicate github is another debate (and my take on it is that I don't think we do).

In the meantime, I think that it's possible to have a good collaborative workflow with what we have right now on drupal.org, with just the issue queue and patches, and git local branches. Here's what I've gradually refined over the years. It's fast, it helps you keep track of things, and it makes the most of git's strengths.

A word on local branches

Git's killer feature, in my opinion, is local branches. Local branches allow you to keep work on different issues separate, and they allow you to experiment and backtrack. To get the most out of git, you should be making small, frequent commits.

Whenever I do a presentation on git, I ask for a show of hands of who's ever had to bounce on CMD-Z in their text editor because they broke something that was working five minutes ago. Commit often, and never have that problem again: my rule of thumb is to commit any time that your work has reached a state where if subsequent changes broke it, you'd be dismayed to lose it.

Starting work on an issue

My first step when I'm working on an issue is obviously:

  git pull

This gets the current branch (e.g. 7.x, 7.x-2.x) up to date. Then it's a good idea to reload your site and check it's all okay. If you've not worked on core or the contrib project in question in a while, then you might need to run update.php, in case new commits have added updates.

Now start a new local branch for the issue:

  git checkout -b 123456-foobar-is-broken

I like to prefix my branch name with the issue number, so I can always find the issue for a branch, and find my work in progress for an issue. A description after that is nice, and as git has bash autocompletion for branch names, it doesn't get in the way. Using the issue number also means that it's easy to see later on which branches I can delete to unclutter my local git checkout: if the issue has been fixed, the branch can be deleted!

So now I can go ahead and start making commits. Because a local branch is private to me, I can feel free to commit code that's a total mess. So something like:

  dpm($some_variable_I_needed_to_examine);
  /*
  // Commented-out earlier approach that didn't quite work right.
  $foo += $bar;
  */
  // Badly-formatted code that will need to be cleaned up.
  if($badly-formatted_code) { $arg++; }

That last bit illustrates an important point: commit code before cleaning up. I've lost count of the number of times that I've got it working, and cleaned up, and then broken it because I've accidentally removed an important line that was lost among the cruft. So as soon as code is working, I make a commit, usually whose message is something like 'TOUCH NOTHING IT WORKS!'. Then, start cleaning up: remove the commented-out bits, the false starts, the stray code that doesn't do anything, in small commits of course. (This is where you find it actually does, and breaks everything: but that doesn't matter, because you can just revert to a previous commit, or even use git bisect.)

Keeping up to date

Core (or the module you're working on) doesn't stay still. By the time you're ready to make a patch, it's likely that there'll be new commits on the main development branch (with core it's almost certain). And before you're ready, there may be commits that affect your ongoing work in some way: API changes, bug fixes that you no longer need to work around, and so on.

Once you've made sure there's no work currently uncommitted (either use git stash, or just commit it!), do:

git fetch
git rebase origin/BRANCH

where BRANCH is the main development branch that is being committed to on drupal.org, such as 8.0.x or 7.x-2.x. (Rebasing onto the remote-tracking origin/BRANCH means you pick up the newly fetched commits; git fetch doesn't update your local copy of that branch.)

(This is arguably one case where a local branch is easier to work with than a github-style forked repository.)

There's lots to read about rebasing elsewhere on the web, and some will say that rebasing is a terrible thing. It's not, when used correctly. It can cause merge conflicts, it's true. But here's another place where small, regular commits help you: small commits mean small conflicts, that shouldn't be too hard to resolve.

Making a patch

At some point, I'll have code I'm happy with (and I'll have made a bunch of commits whose log messages are 'clean-up' and 'formatting'), and I want to make a patch to post to the issue:

  git diff 7.x-1.x > 123456.PROJECT.foobar-is-broken.patch

Again, I use the issue number in the name of the patch. Tastes differ on this. I like the issue number to come first. This means it's easy to use autocomplete, and all patches are grouped together in my file manager and the sidebar of my text editor.

Reviewing and improving on a patch

Now suppose Alice comes along, reviews my patch, and wants to improve it. She should make her own local branch:

  git checkout -b 123456-foobar-is-broken

and download and apply my patch:

  wget PATCHURL
  patch -p1 < 123456.PROJECT.foobar-is-broken.patch

(Though I would hope she has a bash alias for 'patch -p1' like I do. The other thing to say about the above is that while wget is working at downloading the patch, there's usually enough time to double-click the name of the patch in its progress output and copy it to the clipboard so you don't have to type it at all.)

And finally commit it to her branch. I would suggest she uses a commit message that describes it thus:

  git commit -m "joachim's patch at comment #1"

(Though again, I would hope she uses a GUI for git, as it makes this sort of thing much easier.)

Alice can now make further commits in her local branch, and when she's happy with her work, make a patch the same way I did. She can also make an interdiff very easily, by doing a git diff against the commit that represents my patch.

Incorporating other people's changes to ongoing work

All simple so far. But now suppose I want to fix something else (patches can often bounce around like this, as it's great to have someone else to spot your mistakes and to take turns with). My branch looks like it did at my patch. Alice's patch is against the main branch (for the purposes of this example, 7.x-1.x).

What I want is a new commit on the tip of my local branch that says 'Alice's changes from comment #2'. What I need is for git to believe it's on my local branch, but for the project files to look like the 7.x-1.x branch. With git, there's nearly always a way:

  git checkout 7.x-1.x .

Note the dot at the end. This is the filename parameter to the checkout command, which tells git that rather than switch branches, you want to checkout just the given file(s) while staying on your current branch. And that the filename is a dot means we're doing that for the entire project. The branch remains unchanged, but all the files from 7.x-1.x are checked out.

I can now apply Alice's patch:

  wget PATCHURL
  patch -p1 < 123456.2.PROJECT.foobar-is-broken.patch

(Alice has put the comment ID after the issue ID in the patch filename.)

When I make a commit, the new commit goes on the tip of my local branch. The commit diff won't look like Alice's patch: it'll look like the difference between my patch and Alice's patch: effectively, an interdiff. I now make a commit for Alice's patch:

  git commit -m "Alice's patch at comment #2"

I can make more changes, then do a diff as before, post a patch, and work on the issue advances to another iteration.

Here's an example of my local branch for an issue on Migrate I've been working on recently. You can see where I made a bunch of commits to clean up the documentation to get ready to make a patch. Following that is a commit for the patch the module maintainer posted in response to mine. And following that are a few further tweaks that I made on top of the maintainer's patch, which I then of course posted as another patch.

A screenshot of a git GUI showing the tip of a local branch, with a commit for a patch from another user.

(Notice how in a local branch, I don't feel the need to type terribly accurately for my commit messages, or indeed be all that clear.)

Improving on our tools

Where next? I'm pretty happy with this workflow as it stands, though I think there's plenty of scope for making it easier with some git or bash aliases. In particular, applying Alice's patch is a little tricky. (Though the stumbling block there is that you need to know the name of the main development branch. Maybe pass the script the comment URL, and let it ask d.org what the branch of that issue is?)

Beyond that, I wonder if any changes can be made to the way git works on d.org. A sandbox per issue would replace the passing around of patch files: you'd still have your local branch, and merge in and push instead of posting a patch. But would we have one single branch for the issue's development, which then runs the risk of commit clashes, or start a new branch each time someone wants to share something, which adds complexity to merging? And finally, sandboxes with public branches mean that rebasing against the main project's development can't be done (or at least, not without everyone knowing how to handle the consequences). The alternative would be merging in, which isn't perfect either.

The key thing, for me, is to preserve (and improve) the way that so often on d.org, issues are not worked on by just one person. They're a ball that we take turns pushing forward (snowball, Sisyphean rock, take your pick depending on the issue!). That's our real strength as a community, and whatever changes we make to our toolset have to be made with the goal of supporting that.

Nov 18 2014

Now I've finished the Big Monster Project of Doom that I've been on the last two years, I can talk more about some of the code that I wrote for it. I can also say what it was: it was a web application for activists to canvass the public for a certain recent national referendum (I'll let you guess which one).

One of the major modules I wrote was Entity Operations module. What began as a means to avoid repeating the same code each time I needed a new entity type soon became the workhorse for the whole application UI.

The initial idea was this: if you want a custom entity type, and you want a UI for adding, editing, and deleting entities (much like with nodes), then you have to build this all yourself: hook_menu() items, various entity callbacks, form builders (and validation and submit handlers) for the entity form and the delete confirmation form. (The Model module demonstrates this well.)

That's a lot of boilerplate code, where the only difference is the entity type's name, the base path where the entity UI sits, and the entity form builder itself (but even that can be generalized, as will be seen).

Faced with this and a project on which I knew from the start I was going to need a good handful of custom entities (for use with Microsoft Dynamics CRM, accessed with another custom module of mine, Remote Entity API), I undertook to build a framework that would take away all the repetition.

An Entity UI is thus built by declaring:

  • A base path (for nodes, this would be 'node'; we'll ignore the fact that in core, this path itself is a listing of content).
  • A list of subpaths to form the tabs, and the operation handler class for each one
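To give a flavour of this (and only a flavour: the array structure below is invented for this post, so check the module's API documentation for the real format), a declaration looks something like:

  /**
   * Implements hook_entity_operations_info().
   *
   * Hypothetical illustration: the keys and handler class names here are
   * invented, not the module's actual format.
   */
  function mymodule_entity_operations_info() {
    return array(
      'contact_list' => array(
        'base path' => 'contact-list',
        'operations' => array(
          'view' => array('handler' => 'EntityOperationsOperationView'),
          'edit' => array('handler' => 'EntityOperationsOperationEditForm'),
          'book' => array('handler' => 'ContactListOperationBook'),
        ),
      ),
    );
  }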

With this in hand, why stop at just the entity view and edit tabs? The operation handlers can output static content or forms: they can output anything. One of the most powerful enhancements I made to this early on was to write an operations handler that outputs a view. It's the same idea as the EVA module.

So for the referendum canvassing application, I had a custom Campaign entity, that functioned as an Organic Group, and had as UI tabs several different views of members, views of contacts in the Campaign's geographic area, views of Campaign group content (such as tasks and contact lists), and so on.

This approach proved very flexible and quick. The group content entities were themselves also built with this, so that, for example, Contact List entities had operations for a user to book the entity, input data, and release it when done working on it. These were built with custom operation handlers specific to the Contact List entity, subclassing the generic form operation handler.

An unexpected bonus to all this was how easy it was to expose form-based operations to Views Bulk Operations and Services (as 'targeted actions' on the entity). This allowed the booking and release operations to work in bulk on views, and also to be done via a mobile app over Services.

A final piece of icing on the cake was the addition of alternative operation handlers for entity forms that provide just a generic bare bones form that invokes Field API to attach field widgets. With these, the amount of code needed to build a custom entity type is reduced to just three functions:

  • hook_entity_info(), to declare the entity type to Drupal core
  • hook_entity_operations_info(), to declare the operations that make up the UI
  • callback_entity_access(), which controls the access to the operations

The module has a few further tricks up its sleeve. If you're using user permissions for your entities, there's a helper function to use in your hook_permission(), which creates permissions out of all the operations (so: 'edit foobar entities', 'book foobar entities', 'discombobulate foobar entities' and so on). The entity URI callback that Drupal core requires you to have can be taken care of by a helper callback which uses the entity's base path definition. There's a form builder that lets you easily embed form-based operations into the entity build, so that you can put the sort of operations that are single buttons ('publish', 'book', etc) on the entity rather than in a tab. And finally, the links to operation tabs can be added to a view as fields, allowing a list of entities with links to view, edit, book, discombobulate, and so on.

So what started as a way to simplify and remove repetitive code became a system for building a whole entity-based UI, which ended up powering the whole of the application.

Aug 31 2014

Another thing that was developed as a result of my big Commerce project (see my previous blog post for the run-down of the various modules this contributed back to Drupal) was a bit of code for generating a graph that represents the relationships between entity types.

For a site with a lot of entityreference fields it's a good idea to draw diagrams before you get started, to figure out how everything ties together. But it's also nice to have a diagram that's based on what you've built, so you can compare it, and refer back to it (not to mention that it's a lot easier to read than my handwriting).

The code for this never got released; I tried various graph engines that work with Graph API, but none of them produced quite what I was hoping for. It just sat in my local copy of Field Tools for the last couple of years (I didn't even make a git branch for it, that shows how rough it was!). Then yesterday I came across the Sigma.js graph library, and that inspired me to dig out the code and finish it off.

To give the complete picture, I've added support for the relationships that are formed between entity types by their schema fields: things like the uid property on a node. These are easily picked out of hook_schema()'s foreign keys array.
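For example, a Drupal 7 schema declares such a relationship in its 'foreign keys' array; here's a minimal made-up table following the same pattern core's node_schema() uses for the node author:

  // 'mymodule_item' is a hypothetical table: the point is the foreign keys array,
  // which says the uid column points at the {users} table.
  function mymodule_schema() {
    $schema['mymodule_item'] = array(
      'fields' => array(
        'id' => array('type' => 'serial', 'not null' => TRUE),
        'uid' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
      ),
      'primary key' => array('id'),
      'foreign keys' => array(
        'item_author' => array(
          'table' => 'users',
          'columns' => array('uid' => 'uid'),
        ),
      ),
    );
    return $schema;
  }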

In the end, I found Sigma.js wasn't the right fit: it looks very pretty, but it expects you to dictate the position of the nodes on the canvas, which for a generated graph doesn't really work. There is a plugin for it that allows the graph to be force-directed, but that was starting to be too fiddly. Instead, I found Springy, which, while maybe not quite as versatile, automatically lays out the graph nodes out of the box. It didn't take too long to write a library module for using Springy with Graph API.

Here's the result:

Graph showing relationships between entity types on a development Drupal site

Because this uses Graph API, it'll work with any graph engine, not just Springy. So I'll be interested to see what people who are more familiar with graphing can make it do. To get something that looks like the above for your site, it's simple: install the 7.x-1.x-dev release of Field Tools, install Graph API, install the Springy module, and follow the instructions in the README of that last module for installing the Springy Javascript library.

The next stage of development for this tool is figuring out a nice way of showing entity bundles. After all, entityreference fields are on specific bundles, and may point to only certain bundles. However, they sometimes point to all bundles of an entity type. And meanwhile, schema properties are always on all bundles and point to all bundles. How do we represent that without the graph turning into a total mess? I'm pondering adding a form that lets you pick which entity types should be shown as separate bundles, but it's starting to get complicated. If you have any thoughts on this, or other ways to improve this feature, please share them with me in the Field Tools issue queue!

Aug 19 2014

I've just made a commit to Module Builder that adds unit tests. This is a big deal, because having these frees me up to start making the big changes that are needed for supporting Drupal 8's new structures: routes, plugins, forms, and so on.

The biggest challenge is going to be the interface. Currently, you give Module Builder just a module name and a list of hook names, and it does the necessary. On the command line it's nice and simple:

drush mb mymodule install schema node_insert form_alter views_data_alter

The first parameter is the module name, and everything that follows is a hook name. Now we add to the mix requests such as a form called MyModuleCakeToppingForm, or an entity type plugin, or a route bake_my_cake and its page controller. How to elegantly specify all that over the command line, without making it horribly unwieldy and impossible to remember how to use?

It's also going to be an interesting exercise in reading my own documentation and seeing how much sense it makes after something like 7 months away from the code.

From what I recall, Module Builder uses a hierarchy of component generators to build your module. Taking our example above, the first thing that happens is that the Module generator class kicks in. 'So, you want a module, do you?' it asks, 'You'll need some of these.' And it begins to assemble a list of further generators, for the components it needs: an info file, and the hooks generator. The hooks generator does the actual job of examining your list of requested hooks, and decides based on that that you need three code files: a .module, a .install, and a .views.inc. So by now we have a tree of generators like this:

- Module
-- Info file
-- Hooks
--- Code file: .module
--- Code file: .install
--- Code file: .views.inc

This is not a class hierarchy; this is a tree of objects where each generator has a list of the generators beneath it, and is responsible for collecting data from them. Once we have the tree, we iteratively have each generator assemble the data it wants to contribute, starting with the Module generator at the top.

The original plan when I wrote this system was to make the smallest granularity be a file. The leaves of the generator tree would assemble the text for their file's contents, and the Module generator would collect the files up and return them to the caller for output (either in the UI, or to write them directly).

However, while the original intention of this system was that it could be generalised to base components other than modules (so profiles and themes, which are both supported to some extent but lack the UI, see above!), it's also proven to be extendable downwards to smaller components, and to be worthwhile to do so.

Enter the Form generator. Once we have a generic Function generator (and its child class the HookImplementation), we can create a Form generator. Given a form machine name, 'foo_form', it simply knows to add three copies of the Function generator: 'foo_form', 'foo_form_validate', 'foo_form_submit', along with the correct parameters and some boilerplate code.

And we can specialize this further: the AdminSettingsForm simply extends the Form generator, and adds a menu item component, which itself ensures hook_menu() is requested.

At this point it starts to get a bit complicated, as we have components that request other components that are in totally different parts of the component tree. That's the point at which I think I was when I realized I needed tests so that I can refactor and clean up the messy bits of this, and enhance and extend it, without breaking what's already there.

So that's the current state of Module Builder: not yet ready for Drupal 8, but has lots of potential. At this point, I'd really welcome input on the Drush interface, as that's the big quandary. And any input on new Drupal 8 component generators would be great too; there are a few open issues in the queue. And finally, Module Builder is a complex beast; should anyone looking at the code find it baffling and impenetrable, do please file a documentation issue to highlight the problem and request clarification.

Aug 09 2014

Some time ago, I released Human Queue Worker, a module that takes the concept of the Drupal Queue system, but where the processing of the items is done by human users rather than an automated process. I say 'takes the concept'; it in fact uses the Drupal Queue to create and claim queue items, but instead of declaring your queue with hook_cron_queue_info(), you declare it to Human Queue Worker as a queue that humans will be working on.

This was written for my current project and for a fairly specific need, and I didn't imagine many sites would be using it. However, it has an obvious and popular application: approving comments. I always figured it would be nice if someone wrote a little module to define a comment processing human queue.

Well, that someone is me, and the time is now. You see, I'm an idiot: when I set up this new blog site of mine, I totally forgot to set up a CAPTCHA, and then when I added Mollom, I didn't set it up properly. So this site has a few hundred spammy comments that I need to delete.

The problem is that comment management takes time. Unless there's some magical area of the core UI I've completely missed, I can either visit each node and delete them one by one, or use the comment admin form. There, I can mass-delete the ones with obvious spammy titles, but all the others will still need individual inspection.

The Human Queue UI simplifies this hugely. There's just one page for the queue. When you go to that page, you're presented with an item to process. In the case of comment approval, that's the comment itself, plus the parent node and parent comment to give you some context. To process the comment, click one of two buttons: 'Publish' or 'Delete'. The comment is dealt with, and the form reloads, with a brand new comment for you to process. Which means that the only clicking you do is the action buttons: Publish; Delete; Publish; Delete. (Though with the amount of spam on my site, it's probably Delete; Delete; Delete, like the Cybermen.)

I've not timed it, but I reckon I can probably go at quite a rate. And that's with just one of me: the core Queue system guarantees that only one worker can claim an item at any one time, and that applies to human workers too. So if another user were to work the queue too, by going to the same page, they would be getting shown different comments to work on, and we'd work through the comments at twice the rate.

Now I just need to find a compliant friend and make them into my worker drone. If you're interested, please don't post a comment!

Jul 11 2014

Hello! I'm back. I've not made any blog posts in over a year and a half due to the site where my blog was before, drupaler.co.uk, closing down. And while it took me some time to get round to writing a Migrate script to import my posts from the old site's database, it was actually getting round to setting up this new domain that took the longest.

So what have I been doing all this time? Especially as I still don't have a single Drupal 7 site out there to my name? Well, these days I work on a humongous web application which has kept me busy for the last 18 months; it's a large Drupal site (we hit a million of one of its several custom entity types recently), but to the general public it's just a login page. I may talk more about the development challenges in future posts.

Prior to that, I was building what would have been one of the earliest big Drupal Commerce sites to launch... except that very shortly before launch in October 2012, the whole project got canned.

That was obviously rather demoralizing, as it had been a year in the building. However, it's interesting to look back and see just how much a large-scale project can contribute back to the Drupal community. The following is a list of the contrib modules that were created as part of the development of the project. Many of them are pretty simple; many of them don't get that much use. But they get some, and so it's interesting to see how much code a project can share if development is steered towards reusability, and also how a single project, even one that never sees the light of day, can have effects that ripple out and benefit many others.

This was built to improve the UI for selecting terms from large taxonomies, where one vocabulary groups another. For example, you could have a huge vocabulary of cities, and a smaller one of countries. Each city term has a term reference to the country it belongs to (the glee of the early days of Drupal 7: all the fields on all the things!), and the entity that you want to tag (in my case a product) has term reference fields to both the country and the city. The way this module works is that the city terms shown in one widget are filtered each time you change the selected country term in the other field's widget. So you select 'France' and the city field widget updates with AJAX to show only cities in France.

It sounds simple to explain (I hope!), but involves a fair amount of work under the hood in the form alteration to pull it off. It's one of my most popular modules, and has received a fair number of patches from other users.

These two are developer modules. Devel Contrib arose from my constantly needing to dsm() the data from info hooks such as hook_views_data() and hook_entity_property_info(). When the thing you're developing keeps going wrong, one of the first things to do is poke around to check you've properly declared everything to the APIs you're using. I started out just having this as a couple of menu items declared in a custom module, but pretty soon I added more, and I wanted it on other development sites, so the obvious thing to do was clean it up and release it.

Field Tools was something I wrote because the Commerce site in question had dozens of different product types and corresponding node types, with lots of common fields, and I really didn't want to spend hours clicking through the field admin UI to set them up. This may seem like a case of condiment-passing, but I'm sure it's saved me hours of tedium: create the fields once, and then quickly clone them to any other entity types and bundles.

A very small tweak module, this lets you override a particular instance of a multi-valued field and set it to be single-valued only. It's a hook_form_alter() hack made reusable by adding some field admin settings, and a good example of how site customization can often be done as modules rather than alter hacks.

Very simple module: it makes your taxonomy term fields (or other entity references) link to a view with an entity ID argument. I guess the more traditional way of doing this is with a lot of path aliases for your taxonomy terms. I think we may also have used this to create Profile-style lists of related entities.

This was created to deal with the complex business rules we had for shipping costs. Written very near to our planned launch date, it's an example of what I call the 'skimp on the admin UI' contrib module: hardcode the settings, but do so in a way that's cleanly separated from the rest of the module, so that later on an admin UI can be added, perhaps contributed as a patch. Though I think that's yet to materialize for this one.

This was created to help keep track of how the product entities and product display nodes all related to one another.

Allows the presence of exposed filters on a view to be controlled by values in another exposed filter. This was intended to handle filtering Views of products, though we later switched to using Views with Solr.

A little bit of UX sugar for when you're adding lots of taxonomy terms that are very similar. In our case, this was terms representing sizes. On creating a new term, this takes you straight back to the form for adding another term, and prefills the field values from the one you just added.

Allows flags to have either an expiry date, or an expiry period. This was going to be used to feature products, or mark them as new, or discounted. (For new products, you could just automatically mark all products that were created in the last x days, say. But as I recall, the client didn't want ALL new products marked, just selected ones. The way clients do.)

As well as new contrib modules, the project resulted in work on existing contrib modules, in particular Flag, Data, Views Hacks, and Commerce Delivery.

Oct 28 2012

I've just released Permissions Grid. It does what the name suggests: it presents related permissions in a grid, rather than the usual long list.

How are permissions structured into a grid? Well, only the ones that form natural groups are included: every set of permissions of the form 'create foo, edit foo, delete foo, create bar, edit bar, delete bar' is turned into a matrix of checkboxes with the verbs 'create, edit, delete' along the top, and the objects 'foo, bar' down the side. When modules such as node, taxonomy, and commerce define related permissions for nodes, vocabularies, and products respectively, that gives you something like this:

This gives an easy to grasp overview of what a role can do with different objects on the site: which node types can this role create? which can they edit, or delete? which product types can they edit? which vocabularies can they create terms in?

If this sounds and looks vaguely familiar, that's probably because this module has an ancestor: my Drupal 6 Node Permissions Grid module, which I wrote back when a site's content types started to become too numerous to easily make sense of. That operated only on node types, and like a great many contrib modules porting to Drupal 7, it's had to 'drop the node' and generalize. But in fact nothing restricts Permissions Grid to entities: all it cares about is permissions.

Structured permissions are declared to the module in an info hook, and each module may declare multiple sets of permissions. This allows for the fact that some modules add further vocabulary-related permissions which do not have the same pattern, and that commerce has entity permissions in both singular and plural form.

Are there any groups of permissions I've missed, whether in core or contrib? Post a feature request, or better still, take a look at the hook implementations already there and file a patch.

Sep 21 2012

I need to add a bit of business logic to my Commerce site: a boolean field on product nodes marks that the corresponding products can't be delivered outside the UK.

And I know the way to do this in Commerce is to create a rule: react to the cart completing checkout, iterate over line items, check the corresponding products, and block the completion if the field in question is set.

Rules is great: with Rules, site builders can change site functionality and cause it to react to events. When non-techy people ask if my job involves designing websites, to put them right I say, 'I make websites go "bing!"'; and now, site builders can make them go 'bing!' too.

But I have a confession: I'm reluctant about using Rules. It's partly that I find the UI confusing, and it feels time-consuming to test them, but deeper than that I think it's just that I feel too far removed from the actual thing I'm trying to make.

And that makes me wonder: am I becoming a Drupal dinosaur?

Because I can imagine when Views first came along, developers who were used to writing their own query and formatting the result themselves, looking at the Views UI and thinking 'I don't feel in control of my lists of stuff any more'.

Or before CCK, developers wrote exactly the form elements they needed in the node form and saved the data themselves in the database. I still sometimes speak to non-Drupal developers who want to be able to dump data into the node table directly (or pull it out), and I have to tell them they can't, because the data that actually makes a node is spread out over the node table, the node_revision table, and then a multitude of field tables. And their feeling of disconnection at not being able to get their hands on 'the node' as a solid lump of database stuff must surely be akin to what I feel with Rules. And I'm going to call this feeling 'abstraction shock'.

I want to write a hook. I want to write the code for it, for it to feel like a solid thing. I know that my rule can (indeed, should) be exported to code, but I want code that I can read and see exactly what it says, rather than code that Rules will consume and understand. And most of all, I want to be able to put in debug statements to understand what I'm getting as I write it, and after I've written it when it's going wrong or when the site functionality has to change.

If that makes me a dinosaur, save me a seat next to the brontosaurus.

Jul 10 2012

My workflow for making patches is to use a feature branch for a single issue. Whether you're a contributor or a maintainer it lets you advance the fixing of the problem in small increments, and safely experiment knowing you can roll back.

But where it goes wrong is when your patch is superseded by a newer one in the issue queue, and you want to work on it some more. How do you update your branch for the ongoing work? As ever, with git there's a way.

Let's start with the basics first: you're making a feature branch to work on an issue. I tend to follow the naming pattern '123456-fix-all-the-bugs', but for this example I'll call it 'issue'.

// Make a new branch and switch to it.
$ git co -b issue
// Make lots of commits.
// Ready to make a patch:
$ git diff > 123456.project.issue.patch

(Note that you can also make your patch show all your commits one by one, which can sometimes help make it clear what you're changing, but that's for another day.)

You've now got a patch which you're uploading to the issue queue, and your tree looks something like this:

* [issue] Last commit, ready to roll a patch!
* Fixed the foobar.
* Added a bizbax.
/
* [master]

Now someone else comes along to the issue queue, reviews your patch, and posts a new patch of their own. You in turn look at patch 2, and while it's an improvement, you think it needs still more work.

The problem is how to apply the patch to your repository. It won't apply to the tip of the issue branch, and if you checkout master, you can't get back to your issue branch. You can of course just discard your original issue branch, and create a branch issue2 for patch 2.

Or you can do this:

// Start on the issue branch.
// Stash any work in progress!
$ git stash
// Checkout just the *files* of master, while keeping the HEAD pointer on the
// issue branch.
$ git checkout master -- .
// This puts the files from master into the working tree and the index, while
// HEAD stays on the issue branch. As a result, the reverse of patch 1 will
// appear staged (git compares the index, which now looks like master, against
// HEAD, which looks like patch 1).
// We want the index clean, so unstage everything:
$ git reset HEAD .
// Now apply the new patch.
$ patch -p1 < patch-2.patch
// Now commit this as patch 2.
// Remember to stash pop when you're done!

Because the working tree files (that is, the actual files on your system) look like the master branch, the patch applies cleanly. But because git still believes it's on the tip of the issue branch, the commit you make goes on the tip of that branch, and the diff it records is effectively the interdiff between your patch-1 and the other contributor's patch-2. Your tree looks like this:

* [issue] Applied patch 2 from Ada Lovelace.
* Last commit, ready to roll a patch!
* Fixed the foobar.
* Added a bizbax.
/
* [master]

Result: you can now do more work on this branch, and make more commits, and when you're ready, diff against master to make patch-3, ready to upload to the issue queue.

Jul 06 2012

I often find that I'm in the middle of one thing when I have to do another. Whether it's hotfixes for a client, or just finding a minor bug that blocks my current work, or needing to add components to a feature before I can add custom functionality.

The best way is to stash your current work, checkout the master branch, commit, then go back. If you're working on a feature branch (and you should be), then rebase that afterwards so you have access to the new work there. So that's:

$ git stash
$ git checkout master
// do commits
$ git checkout feature
$ git rebase master
$ git stash pop

But that's not always feasible. Sometimes I'm sloppy, and I've already made code changes before stashing. And lately, I've got one instance of Party module that's got a feature branch that's made database changes, but I don't want that to hold up ongoing commits (and I'm too lazy to set up a new local site!).

If your fix is just one commit, you can make it on the feature branch, then cherry-pick it to the master branch like this:

// make your commit and note its SHA
$ git stash
$ git checkout master
$ git cherry-pick COMMIT
$ git checkout feature
$ git rebase master

The rebase should be smart enough to figure out the same commit exists on both branches, and will silently drop it from the feature branch.

Alternatively, if you want to do a chunk of main branch work, make a temporary branch on the tip of feature, which you can then move to the master branch when you're done:

$ git checkout -b moveme
// make as many commits as you like
// Now we take everything that's between the tips of feature and moveme, and move it to the tip of master
$ git rebase --onto master feature moveme
// Now merge moveme into master: this'll just fast-forward master.
$ git checkout master
$ git merge moveme
// moveme can be deleted now
$ git branch -d moveme
// Now rebase the feature branch
$ git checkout feature
$ git rebase master

As I've become more familiar with git, I've found that temporary, throw-away branches can be useful in a variety of situations. Another one is making a backup branch prior to potentially messy rebases: just create a branch 'backup' where you are to be sure that no matter what happens with the rebase, your current chain of commits will be preserved. If there are conflicts, you can diff against it to check no code was lost. And when you're happy with the result of the rebase, just delete it. Branches being cheap, and local, opens up a whole new set of uses for them.
