Jan 06 2024

I recently started getting this on projects after I updated to Chrome 120 (at least I think that's why).

      Could not open connection: unknown error: cannot find Chrome binary
        (Driver info: chromedriver=120.0.6099.71 (9729082fe6174c0a371fc66501f5efc5d69d3d2b-refs/branch-heads/6099_56@{#13}),platform=Linux 6.2.0-37-generic x86_64) (Behat\Mink\Exception\DriverException)

That's the error message in behat at least, which I think originates from the actual webdriver (chromedriver) response. If I look at the debug information from the chromedriver logs it says this:

[1703837483,910][INFO]: [0ca18bb59db30d5acd358de02a01da0a] RESPONSE InitSession ERROR unknown error: cannot find Chrome binary

Well, the error is clear enough: it cannot find the binary. Fair enough, but where would I tell it about the binary? Well, for me that would be in behat.yml:

@@ -33,8 +33,9 @@ default:
               w3c: false
           marionette: null
           chrome:
+            binary: /usr/bin/google-chrome
             switches:

Thanks to kostiukevych-ls in the comments, it seems this would be something like the following if you are on macOS:

@@ -33,8 +33,9 @@ default:
               w3c: false
           marionette: null
           chrome:
+            binary: /Applications/Google Chrome.app/Contents/MacOS/Google Chrome

When the session with chromedriver is initiated, this probably translates to something like the following (slightly edited for relevance and brevity):

[1703887172,675][INFO]: [95b2908582293fa560a7301661f5e741] COMMAND InitSession {
   "desiredCapabilities": {
      "chrome.binary": "/usr/bin/google-chrome",
      "chrome.extensions": [  ],
      "chrome.switches": [ "--ignore-certificate-errors", "--disable-gpu", "--no-sandbox", "--disable-dev-shm-usage" ],
      "goog:chromeOptions": {
         "args": [ "--ignore-certificate-errors", "--disable-gpu", "--no-sandbox", "--disable-dev-shm-usage" ],
         "binary": "/usr/bin/google-chrome",
         "extensions": [  ]
      },
      "ignoreZoomSetting": false,
      "marionette": true,
   }
}

If you are using some other tool that interacts with chromedriver, you are probably already setting some parameters there, and you can append this new parameter to them.
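For illustration, here is roughly what that could look like if you talk to chromedriver directly with the php-webdriver library instead of Behat/Mink. This is a sketch under my own assumptions (the binary path, the arguments and the chromedriver port are just examples), not something taken from my setup:

use Facebook\WebDriver\Chrome\ChromeOptions;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\Remote\RemoteWebDriver;

$options = new ChromeOptions();
// The same kind of hint as the behat.yml change above.
$options->setBinary('/usr/bin/google-chrome');
$options->addArguments(['--ignore-certificate-errors', '--no-sandbox']);

$capabilities = DesiredCapabilities::chrome();
$capabilities->setCapability(ChromeOptions::CAPABILITY_W3C, $options);

// Assuming chromedriver is listening locally on its default port.
$driver = RemoteWebDriver::create('http://localhost:9515', $capabilities);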

Alternative solution, using browser name

The attentive reader might wonder how practical it is to have this change lying around locally. Good point. Chromedriver is actually supposed to find the Chrome binary on its own, so why does it not in this case?

If you have some verbose logging enabled, you might notice this message also popping up:

[1708076533,081][DEBUG]: Unknown browser name: firefox

For me that was a bit surprising, since I had not asked for the Firefox browser in any way I was aware of. However, since I am using Behat with the mink-selenium2-driver, firefox is actually set as the default browser name. When chromedriver receives that value, it will use it when trying to find the binary.

To override it, all I needed to do was this (using the Drupal Behat extension for the Mink parameters):

diff --git a/behat.yml.dist b/behat.yml.dist
index 7bf8214b..958a9a25 100644
--- a/behat.yml.dist
+++ b/behat.yml.dist
@@ -21,6 +21,7 @@ default:
     Drupal\MinkExtension:
       files_path: '%paths.base%/tests/files'
       ajax_timeout: 15
+      browser_name: chrome

And voilà: it now works both in CI environments and locally.

Dec 26 2023

Violinist.io is well into its seventh year. It has survived a couple of Drupal core major upgrades and remains a profitable SaaS based on Drupal and a vital tool for agencies and organizations. I'm not sure if there is such a thing as social historical code archeology, but if there were, this blog post might be it. I have long wanted to write some blog posts about the development of violinist.io that do not only focus on exciting or interesting technical aspects. Instead, I think this one will highlight when things did not go as planned, why that was, and maybe what it meant at the time compared to the present day.

The project started as a PHP file and proof of concept at Drupal Developer Days Seville 2017. Literally. It was one PHP script placed inside one file. After I got that working I figured I would make it a composer package and run it with Drupal cron. This was the first version of violinist.io: run the method of a composer package with cron. I made a module to contain this hook_cron implementation. I called it cronner. It was the module to do things on cron, so that was the name. The cronner module.

cronner.module was first checked into git with the revision 55da4a9c:

From 55da4a9c
Date: Thu, 23 Mar 2017 08:38:57 +0100
Subject: [PATCH] This is a site now?

As we may deduce from the commit message, this was a pretty early one in the codebase. More precisely, this was the second commit to the codebase, after some initial scaffolding files. At that point, this was the entire contents of cronner.module:

<?php

/**
 * Implements hook_cron().
 */
function cronner_cron() {
  $nodes = \Drupal::entityTypeManager()
    ->getStorage('node')
    ->loadByProperties([
      'type' => 'project'
    ]);
  $q = \Drupal::queue('cronner_project');
  foreach ($nodes as $node) {
    $q->createItem($node);
  }
}

Basically a cron implementation to queue all projects on the site periodically.

Fast forward to 2023 and some reflections around maintaining a codebase. Cronner. I can honestly say that cronner is among my least proud namings in the codebase. The hook_cron implementation is long gone and replaced with all sorts of fancy cloud computing. The name really is misleading now. It contains no cron implementations like that any more. Instead it contains (among other things):

  • Schema for the database table called cronner_log
  • Some random theme hooks used here and there
  • A hook_node_insert implementation that is still the one responsible for adding new projects to the update queue

I have many times wanted to work on extracting parts of cronner.module into separate modules. But fixing bugs, improving test coverage or implementing new features has always been the priority. At this point, from time to time I open the file cronner.module and shudder. But after that initial feeling of annoyance or shame I instead focus on the image of something like the origin life form. Something like a web of branches and roots coming out of a mother plant. Something almost alien-like. The module that started it all, the breathing and growing mother of the codebase.

Some might call it legacy. I call it cronner, the origin mother of the codebase.

Feb 17 2022

My last post about letting a robot update your website was based on some of the questions I got when presenting the talk "Less human interaction, more security?" last year at Drupalcon North America. The talk shows how we can discover and deploy updates to our Drupal websites automatically, using tools like violinist.io and Gitlab CI. In addition to the feedback about automation being scary, the most asked question I get is about database updates. What about database updates? How do we handle them? How can they be done correctly, automatically? In this blog post I will outline my approach, why I use this approach, and why you might consider doing the same.

The problem

I use violinist.io to update all of my PHP projects (both at work and personal, including this blog). So let's say I receive an automated pull request from violinist.io with an update to Drupal core. Maybe this update has this piece of new code in it:

/**
 * Clear caches due to behavior change in DefaultPluginManager.
 */
function system_update_8201() {
  // Empty update to cause a cache rebuild.
}

As we can see, this database update is actually empty, and only exists to make sure the cache is being cleared. Would this be OK to deploy automatically? For sure, no problem at all.

Let's look at something different. Let's say I receive an automated pull request containing this update:

/**
 * Notes on update for multilingual sites.
 */
function layout_builder_restrictions_update_8210() {
  $moduleHandler = \Drupal::service('module_handler');
  if ($moduleHandler->moduleExists('locale')) {
    $message = t("Please note: since your site uses the Locale module, you will likely need to manually resave each Layout Builder Restriction entity configuration, due to new code that makes Layout Builder Restrictions' configuration multilingual compatible. For more information, see Layout Builder Restrictions version 2.6 release notes.");
    \Drupal::logger('layout_builder_restrictions')->warning($message);
    return $message;
  }
}

This database update is taken from a contributed module, and serves as an example of an update hook that would literally require human interaction. It says that you will likely need to resave and re-export your configuration. So applying that database update automatically would most likely work fine, but our configuration would now possibly be wrong, since the automated update only contains changes to composer files.

So the problem is that deploying database updates introduces unknown side effects. They might be harmless, they might not.

Another problem is that it's hard to even know if an update contains a database update. A diff of a typical dependency update would probably only show something like this:

},
         {
             "name": "drupal/core",
-            "version": "9.3.2",
+            "version": "9.3.3",
             "source": {
                 "type": "git",
                 "url": "https://github.com/drupal/core.git",
-                "reference": "6c9ba6b6314550e7efb8f5f4e2a40f54cfd6aee1"
+                "reference": "a9bd68be9a4e39724ea555f8040114759a8faf7f"
             },
             "dist": {
                 "type": "zip",
-                "url": "https://api.github.com/repos/drupal/core/zipball/6c9ba6b6314550e7efb8f5f4e2a40f54cfd6aee1",
-                "reference": "6c9ba6b6314550e7efb8f5f4e2a40f54cfd6aee1",
+                "url": "https://api.github.com/repos/drupal/core/zipball/a9bd68be9a4e39724ea555f8040114759a8faf7f",
+                "reference": "a9bd68be9a4e39724ea555f8040114759a8faf7f",
                 "shasum": ""
             },
             "require": {

Can you tell from that diff if there was a database update or not just by looking at it?

The toolbox

So the question then is: How do we identify these database updates and determine if we can apply them safely and automatically? The answer for me is: we don't. Instead we just always assume the worst, and err on the side of caution. So let's instead look at how we can identify a pending database update in the first place. When detected, we want to avoid the update being deployed automatically.

To do that, I am going to use a technique we can combine with detecting another potentially disruptive trait of dependency updates: changed files. This is not quite as common as database updates, and hopefully you mostly see it from either Drupal core or from a distribution you are using. Sometimes an upgrade can change some of your project files, for example robots.txt or .gitignore. In most cases these changes are useful and something you want to commit to your codebase. But just like with database updates, we have no reliable way of determining what is useful and what is disruptive without human interaction. This is why I am an advocate for failing a test suite when the working tree is not clean after running integration tests. That takes care of avoiding the deployment of an update that changes files other than composer.json / composer.lock.
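A minimal sketch of that kind of check, as a small PHP script a CI job could run after the integration tests (an illustration under my own assumptions, not my actual CI configuration):

<?php

// fail-if-dirty.php: exit non-zero if the working tree changed during the tests.
exec('git status --porcelain', $output, $exit_code);
if ($exit_code !== 0) {
  fwrite(STDERR, "Could not run git status.\n");
  exit(1);
}
if (!empty($output)) {
  fwrite(STDERR, "Working tree is not clean after running the tests:\n" . implode("\n", $output) . "\n");
  exit(1);
}

Now let's see how the same fail-on-diff idea can help us with database updates.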

The characterisation testing approach

Enter site schema. A site schema is a big list of the "current" schema of your production website. The idea borrows from a testing paradigm called characterisation testing (or golden-master testing, snapshot testing, and probably other names). It contains a list of all the currently applied hook_update_N updates, in addition to all the currently applied post updates. The site schema package provides a drush command that produces a file representing the current state of your site, which makes it possible to commit that state, which in turn makes it easy to see which automated updates contain database updates. All we have to do is dump the site schema as part of our tests: if the dependency update contains a database update, the dump will produce a diff in the committed site schema file and fail the tests. Then, when we want to deploy a dependency update that includes a database update, we also have to commit an update to our site schema file. It's a way of manually approving the unknown side effect.
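To make the idea a bit more concrete, here is a rough sketch of what such a dump could look like. This is only an illustration of the concept; the actual site schema package and its drush command may well do this differently (the key/value collections used here are where Drupal 8/9 core tracks applied updates):

<?php

// Collect the currently applied hook_update_N versions and post updates.
$schema_versions = \Drupal::keyValue('system.schema')->getAll();
$post_updates = \Drupal::keyValue('post_update')->get('existing_updates', []);

ksort($schema_versions);
sort($post_updates);

$site_schema = [
  'schema' => $schema_versions,
  'post_updates' => $post_updates,
];

// Commit this file. A dependency update that ships a database update will then
// show up as a diff here, and the test suite can fail on the dirty working tree.
file_put_contents('site-schema.json', json_encode($site_schema, JSON_PRETTY_PRINT) . "\n");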

So that sums up the why and how. We have looked at why applying database updates automatically on automated dependency updates can be harmful. We have touched on why it's hard to know whether a given database update can be harmful or not. Finally, we have looked at one way of detecting database updates, so we can avoid deploying these unknown results automatically.

I also hope this has inspired some ideas on how you can do similar things in your own projects so you can have more automation in both dependency updates and deployment. If you need more inspiration, the next blog post will include examples of actual implementations of this in CI workflows.

What I can say personally is that using this approach has greatly improved my confidence in deploying automated dependency updates that I am continuously getting from violinist.io. Are you still not convinced about automating boring maintenance tasks? Still think it sounds scary? Please let me know in the comments ✌️

I guess all that remains now is to finish off with an animated gif called "scary"!

Aug 02 2021

At Drupalcon North America this year (2021), I had a presentation named "Less human interaction, more security?". This pandemic-ridden year made sure it was an online version of the conference, which makes presenting a bit different. One interesting aspect of online presentations is that they enable feedback and chatter in the comments section. First of all, this meant that I could get feedback on my GIFs even if I wasn't in the same room as the session participants. As we all know, GIFs are an important part of every presentation. Second, I also received a lot of relevant and interesting comments on other matters related to the presentation. Today, I want to talk about one of the subjects that came up: is it scary to let a robot update and deploy your website?

To those of you who didn't attend the presentation: let's do a quick recap. First, I talked about how to set up fully automatic discovery, patching and deployment of Drupal security updates using violinist.io and Gitlab CI. Next, I demonstrated how a new Drupal version was picked up automatically, updated and pushed to open a Gitlab merge request for the project in question.

[embedded content]

The merge request was then analyzed and found to be a security update, which enabled auto-merge for it.

[embedded content]

Finally, it was tested according to the continuous integration setup for the project, and deployed to the production server for the demonstration.

[embedded content]

No human hands were used to write commands or click buttons in the demonstration. What does this mean? Robots taking over? Reason for anxiety attacks? In this case, it's a clear "no" from me. Let me explain why.

Several things could be considered scary about letting a robot update and deploy your website. First, you're somehow allowing third party access to your production server. In my opinion, this can actually be a step towards making your deployments more secure. Let me explain: maybe your current workflow involves a person remotely accessing the server and running deployment commands. Depending on your configuration of remote access, this can actually create a larger attack surface, instead of making sure only the machine deploy user can deploy to a server. There are of course more things to consider here, but my point still stands: moving towards only automated deployments will make your application more secure, more robust and more predictable.

Another potential cause of concern about letting a robot update and deploy your website, is that an update might break the website. Before deploying a change to a production server, people often like to check that their website still works in a real browser. I acknowledge this need for manual verification, but I believe there's a better way. Take the task of updating Drupal Core. I'll wager that a Drupal release has a lot more test coverage than any website you are currently maintaining. If you combine this with some basic functional testing for your website, an update like this is probably some of the safest code changes you can do. Furthermore, having a robot do it for you makes it very unlikely that the update commands will be done wrong, which could happen if done by human hands.

Of course, sometimes automatic updates will crash your site some way or another. I find this unproblematic. Not that I want websites to crash, but it happens. And this also happens when a human being is the author of code changes. However, I prefer that automatic updates crash my site, as this uncovers missing test coverage and makes me more confident about future updates. Say that your Drupal website was relying on the (fantastic) Metatag module, but one particular update made your metatags stop working on a particular content type. How come the update was deployed then? Because you did not have test coverage for that functionality. By learning from this, you can expand your test coverage, and feel even more confident about automatically updating the Metatag module the next time there is a new release.

Don't get me wrong. You don't have to wait for your site to crash to learn that you're missing test coverage. Start by introducing automated updates to your website through merge requests. When a new merge request comes along for an update, you'll get a feeling for how confident you are to deploy it. Maybe it's the Metatag module, and you know you have to test it manually to make sure it works. This could be an indication that you are lacking test coverage. To be able to automatically deploy the next version of Metatag, just write some tests for the things you are testing manually.

Ultimately, my claim is that updating automatically will make your updates more secure and predictable. Over time, it will also increase your test coverage and the general robustness of your project. And at a certain point, maybe when your site is in more of a maintenance state, you can merge and deploy all updates automatically.

Now that the concerns around these things being scary are addressed: Please check out the presentation and make up your own mind on how scary automatic updates really are. And if you want to start updating right away, here is a link to violinist.io!

Disclaimer: I am the founder of violinist.io

Mar 25 2021

One of the things a website owner expects when they launch their new Drupal 8/9 website is that their Drupal 6/7 users are still there, and can still log in using the same credentials.

Luckily, the migrate framework and ecosystem make this very easy for us in Drupal 8 and beyond. However, from time to time a developer might find themselves in a situation where they forgot to migrate something. Like, for example, the language of the user. This is especially awkward if your site is mainly in one language, and now all of the users are suddenly sent English emails, even though we meticulously translated all emails into the language of the website.

One way to fix this would be to fix the migration and then re-run it. However, if you discovered this after the launch of the new website, your users might have changed their email or their password, and therefore would probably not appreciate it if their changes disappeared. So we need to update the actual language fields instead, keeping the changes in emails and passwords and other fields.

So I was tasked with exactly this on a migrated site. Everyone had their language set to English, when the site itself was only selling products to Norwegian customers and had no English interface. To fix this I wrote this quick script, which I then ran with drush. One way to do that is to save the script as a file called fix-users.php and then run it using the command drush scr fix-users.php. You can also run this piece of code in a Devel PHP text field, or even put it into a post_update hook.

Here is the script, for future reference for myself, and for saving you some minutes if you just had this very problem.

<?php

use Drupal\user\Entity\User;

$uids = \Drupal::entityTypeManager()
  ->getStorage('user')
  ->getQuery()
  // If you run this script with drush somehow, you need to add this flag to
  // make sure you can load all users, not only the ones you have access to as
  // user id 0.
  ->accessCheck(FALSE)
  ->execute();

// I want to set the language to "nb" for all of the users. "nb" means Norwegian
// - Bokmål. Maybe you want another code?
$language_to_change_to = 'nb';

foreach ($uids as $uid) {
  $user = User::load($uid);
  // User 0 and user 1 are special. Don't change them.
  if (!$user || $user->id() < 2) {
    continue;
  }
  // We change all of these fields, but for different reasons you might want to
  // only change one or two?
  $language_fields = [
    'langcode',
    'preferred_langcode',
    'preferred_admin_langcode',
  ];
  foreach ($language_fields as $field) {
    if (!$user->hasField($field)) {
      continue;
    }
    $user->set($field, $language_to_change_to);
  }
  $user->save();
}
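If you would rather ship this as a post update hook (the last option mentioned above), the same logic can live in a post_update file. A minimal sketch, assuming a hypothetical custom module called my_module (the file would be my_module.post_update.php):

use Drupal\user\Entity\User;

/**
 * Set the language fields to Norwegian Bokmål for all existing users.
 */
function my_module_post_update_set_user_language() {
  $uids = \Drupal::entityTypeManager()->getStorage('user')->getQuery()
    ->accessCheck(FALSE)
    ->execute();
  foreach ($uids as $uid) {
    $user = User::load($uid);
    // User 0 and user 1 are special. Don't change them.
    if (!$user || $user->id() < 2) {
      continue;
    }
    foreach (['langcode', 'preferred_langcode', 'preferred_admin_langcode'] as $field) {
      if ($user->hasField($field)) {
        $user->set($field, 'nb');
      }
    }
    $user->save();
  }
}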

With that out of the way, let's finish this article with an animated gif called "migration".

Sep 29 2020

Since this error has now bitten me twice, and I had to search for a while to find out how I fixed it the first time, I decided to just write a quick blog post about it instead. And this is indeed it.

After updating phpstan on a Drupal project, and removing the configuration option autoload_files I suddenly started getting this polite little error message on my static analysis continuous integration runs:

Reflection error: Drupal\Tests\PhpunitCompatibilityTrait not found.

  Learn more at https://phpstan.org/user-guide/discovering-symbols

To fix this, I simply added the trait to the new configuration property scanFiles:

scanFiles:
    - %currentWorkingDirectory%/drupal/core/tests/Drupal/Tests/PhpunitCompatibilityTrait.php

If your project has Drupal in another folder than "drupal", simply substitute that with "web", "docroot" or whatever your project dictates.

That will serve for me as a reminder to next time. And hopefully to you. Speaking of hope, here is an animated gif called "hope".

Jul 05 2020

Migrate in core is among my favorite parts of Drupal 8 and 9. The framework is super flexible, and it makes migrating content from any source you can dream up pretty straightforward. Today I want to show a trick I use when I receive a CSV (or Excel) file from a client who wants all of its contents migrated into Drupal. One very simple example would be a list of categories.

Typically the file will come with one term on each line. However, migrate would want us to set an ID for all of the terms, which currently none of the rows have. One solution to this is to place an ID on all of the rows manually with some sort of spreadsheet software, and then point our migration to the new column for its IDs. But since that involves both the words "manual" and "spreadsheet software", it immediately makes me want to find another solution. Is there a way we can set the row ID programmatically based on the row number instead? Why, yes, there is!

So, here is a trick I use to set the ID from the line number:

The migration configuration looks something like this:

id: my_module_categories_csv
label: My module categories
migration_group: my_module
source:
  # We will use a custom source plugin, so we can set the 
  # ID from there.
  plugin: my_module_categories_csv
  track_changes: TRUE
  header_row_count: 1
  keys:
    - id
  delimiter: ';'
  # ... And the rest of the file 

As stated in the yaml file, we will use a custom source plugin for this. Let's say we have a custom module called "my_module". Inside that module, we create a file at src/Plugin/migrate/source/CategoriesCsv.php. And in that file we put something like this:

<?php

namespace Drupal\my_module\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\migrate_source_csv\Plugin\migrate\source\CSV;

/**
 * @MigrateSource(
 *   id = "my_module_categories_csv"
 * )
 */
class CategoriesCsv extends CSV {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $delta = $this->file->key();
    $row->setSourceProperty('id', $delta);
    return parent::prepareRow($row);
  }

}

In the code above we set the source property "id" to the delta (the row number) of the row, which means you can have a source file like this:

Name
Category1
Category2
Category3

Instead of this:

id;Name
1;Category1
2;Category2
3;Category3

The best part of this is that when your client changes their mind, you can just update the file instead of editing it before updating it. And with editing, I mean "manually" and with "spreadsheet software". Yuck.

To finish this post, here is an animated gif called "spreadsheet software yuck"

Jun 11 2020

On a project, I just discovered I had to both change how some blocks were configured and remove access to some other blocks (with the Layout Builder Restrictions module). This is a somewhat disruptive change. Even though the editors are not supposed to need or want to use any of the blocks I was about to change, I had to make sure I was not sabotaging their work somehow. So I wanted to find out which blocks and layouts were actually used on the different pages built with Layout Builder.

So I wrote a simple query to find them all, and print them to my console. I then realized this might be handy to have for later. Or heck, maybe it is even handy for other people. Let's just throw it on the blog. It looks like this:

<?php

use Drupal\node\Entity\Node;

// Find all nodes that have layout builder data.
$rows = \Drupal::database()
  ->select('node__layout_builder__layout', 'ns')
  ->fields('ns', ['entity_id'])
  ->groupBy('ns.entity_id')
  ->execute();
// This will hold all of our block plugin ids.
$block_plugin_ids = [];
// And this will keep track of which layout ids we have.
$layout_ids = [];
foreach ($rows as $row) {
  /** @var \Drupal\node\Entity\Node $node */
  $node = Node::load($row->entity_id);
  if (!$node) {
    // If we can not load the node, something must be wrong.
    continue;
  }
  if (!$node->hasField('layout_builder__layout') || $node->get('layout_builder__layout')->isEmpty()) {
    // If the field is empty or does not exist, something must be wrong.
    continue;
  }
  // This will get all of the sections of the node.
  $layout = $node->get('layout_builder__layout')->getValue();
  foreach ($layout as $item) {
    /** @var \Drupal\layout_builder\Section $section */
    $section = $item['section'];
    // This will overwrite the array key of the layout ID if it exists. But that
    // is ok.
    $layout_ids[$section->getLayoutId()] = TRUE;
    // You can also operate directly on the section object, but getting the
    // array structure makes it more convenient for this case.
    $section_array = $section->toArray();
    foreach ($section_array["components"] as $component) {
      // This ID will correspond to the ID we have in the plugin. For example,
      // the page title block has an ID of "page_title_block".
      $id = $component["configuration"]["id"];
      // We only want unique plugin IDs.
      if (in_array($id, $block_plugin_ids)) {
        continue;
      }
      $block_plugin_ids[] = $id;
    }
  }
}
// This will now be an array of plugin ids like so:
// ['page_title_block', 'my_other_plugin'].
print_r($block_plugin_ids);
// This will now be an array with layout ids as keys. Like so:
// ['layout_onecol' => TRUE, 'layout_twocol' => TRUE].
print_r(array_keys($layout_ids));

This now gives me the plugin IDs of all the blocks used on layout builder pages, along with all the layout IDs in use.

Now I can do disruptive changes without worrying about actually disrupting the work for editors.

Disclaimer: This does not take into account any unsaved work any editors have in their layout. If you have a very active site with very active editors, you might want to add that into the mix. But that is left as an exercise to the reader.

Hope that is useful to someone else. I know I will use it myself later. But if not, here is a useful GIF about it being summer soon (for me anyway).

May 23 2020

My last blog post was about my journey towards "getting rid" of Disqus as a comment provider, and moving over to hosting my comments on Github. The blog post was rather light on technical details, so here is the full technical walkthrough with code examples and all.

Since I already explained most of the reasoning and architectural choices in that post, I will just go ahead with a step-by-step walkthrough of the technical parts I think are interesting. If that means I end up skipping over something important, please let me know in the (Github) comments!

Step 0: Have a Gatsby.js site that pulls its content from a Drupal site

This tutorial does not cover that, since many people have written such tutorials. As mentioned in my article "Geography in web page performance", I really like this tutorial from Lullabot, and Gatsby.js even has an official page about using Drupal with Gatsby.js.

Step 1: Create a repo to use for comments

The first step was to create an actual repository to act as a placeholder for the comments to be. It should be public, since we want anyone to be able to comment. I opted for the repo https://github.com/eiriksm/eiriksm.dev-comments. To create a repository on GitHub, you need an account, and then just visit https://github.com/new

Step 2: Associate blog posts with issues

The second step is to have a way to indicate which issue is connected to which blog post. Luckily I am still using Drupal, so I can just go ahead and create a field called "Issue id". Then, for each post I write, I create an issue on said repo, and take a note of the issue id. For example, this blog post has a corresponding issue at https://github.com/eiriksm/eiriksm.dev-comments/issues/6, so I just go ahead and write "6" in my issue id field.

Step 3: Fetch all the issues and their comments to Gatsby.js

This is an important part. To be able to render the comments at build time, the most robust approach is to actually "download" the comments and have them available in graphql. To do that you can try to use the source plugin for GitHub, but I opted for fetching the relevant issues and comments and creating the source myself. Why, you may ask? Well, the main reason is that I can then more easily control how the issue comments are stored, and by extension also store the Drupal ID of the node along with the comments.

To achieve this, I went ahead and used the same JSONAPI source as Gatsby will use under the hood, and then I am fetching the issue comment contents of each of the nodes that has an issue ID reference. This will then go into the gatsby-node.js file, to make the data we fetch available to the static rendering. Something along these lines:

// Assuming node-fetch is available in the project (it provides both fetch and fetch.Headers).
const fetch = require("node-fetch")

exports.sourceNodes = async({ actions, createNodeId, createContentDigest }) => {
  const { createNode } = actions
  try {
    // My build steps knows the basic auth for my API endpoint.
    const login = process.env.BASIC_AUTH_USERNAME
    const password = process.env.BASIC_AUTH_PASSWORD
    // API_URL holds the base URL of my Drupal site. Like so:
    // API_URL=https://example.com/
    let data = await fetch(process.env.API_URL + 'jsonapi/node/article', {
      headers: new fetch.Headers({
        "Authorization": `Basic ${new Buffer(`${login}:${password}`).toString('base64')}`
      })
    })
    let json = await data.json()
    // Create a job to download each of the corresponding issues and their comments.
    let jobs = json.data.map(async (drupalNode) => {
      // Here you can see the name of my field. Change this to whatever your field name is.
      if (!drupalNode.attributes.field_issue_comment_id) {
        return
      }
      let issueId = drupalNode.attributes.field_issue_comment_id
      // Initialize the data we want to store.
      let myData = {
        drupalId: drupalNode.id,
        issueId,
        comments: []
      }
      // As you can see, this is hardcoded to the repo in question. You might want to change this.
      let url = `https://api.github.com/repos/eiriksm/eiriksm.dev-comments/issues/${issueId}/comments`
      // We need a Github token to make sure we do not hit any rate limiting for anonymous API
      // usage.
      const githubToken = process.env.GITHUB_TOKEN
      let githubData = await fetch(url, {
        headers: new fetch.Headers({
          // Hardcoded to my own username. This probably will not work for you.
          "Authorization": `Basic ${new Buffer(`eiriksm:${githubToken}`).toString('base64')}`
        })
      })
      let githubJson = await githubData.json()
      myData.comments = githubJson
      // This last part is more or less taken from the API docs for Gatsby.js:
      // https://www.gatsbyjs.org/docs/node-apis/#sourceNodes
      let nodeMeta = {
        id: createNodeId(`github-comments-${myData.drupalId}`),
        parent: null,
        mediaType: "application/json",
        children: [],
        internal: {
          type: `github__comment`,
          content: JSON.stringify(myData)
        }
      }
      let node = Object.assign({}, myData, nodeMeta)
      node.internal.contentDigest = createContentDigest(node)
      createNode(node)
    })
    // Run all of the jobs async.
    await Promise.all(jobs)
  }
  catch (err) {
    // Make sure we crash if something goes wrong.
    throw err
  }
}

After we have set this up, we can build the site and start exploring the data in graphql. For example by running npm run develop, and visiting the graphql endpoint in the browser, in my case http://localhost:8000/___graphql:

$ npm run develop

> [email protected] develop /home/eirik/github/eiriksm.dev
> gatsby develop
# ...
# Bunch of output
# ...

You can now view eiriksm.dev in the browser.

  http://localhost:8000/

View GraphiQL, an in-browser IDE, to explore your site's data and schema

  http://localhost:8000/___graphql

This way we can now find Github comments with graphql queries, and filter by their Drupal ID. See example animation below, where I find the comments belonging to my blog post "Reflections on my migration journey from Disqus comments".

Graphql browsing

In the end I ended up with the following graphql query, which I then can use in the blogpost component:

allGithubComment(filter: { drupalId: { eq: $drupal_id } }) {
  nodes {
    drupalId
    comments {
      body
      id
      created_at
      user {
        login
      }
    }
  }
}

This gives me all Github comments that are associated with the Drupal ID of the current page. Convenient.

After this, it's only a matter of using it inside the blog post component. Since this site mixes and matches comment types a bit (I have a mix of Disqus-exported comments and Github comments), it looks something like this:

let data = this.props.data
if (
  data.allGithubComment &&
  data.allGithubComment.nodes &&
  data.allGithubComment.nodes[0] &&
  data.allGithubComment.nodes[0].comments
) {
  // Do something with the comments.
}

Step 4: Make it possible to fetch the comments client side

This is the part that makes it feel real-time. Since the above code with the graphql storing of comments only runs when I deploy this blog, comments can quickly get outdated. And from a UX perspective, you would want to see your own comment instantly after posting it. This means that even if we could rebuild and deploy every time someone posted a comment, it would not be instant enough to make sure a person actually sees their own comment. Enter client-side fetching.

In the blog post component, I also make sure to do another check against Github when you are viewing the page in a browser. The build step does not need to fetch the comments again, but I want the user to see the latest updates to them. So in the componentDidMount function I have something like this:

componentDidMount() {
  if (
    typeof window !== `undefined` &&
    // Again, only doing this if the article references a github comment id. And again, this
    // references the field name, and the field name for me is field_issue_comment_id.
    this.props.data.nodeArticle.field_issue_comment_id
  ) {
    // Fetch the latest comments using Githubs open API. This means the person using it will 
    // use from their own API limit, and I do not have to expose any API keys client side.
    window
      .fetch(
        "https://api.github.com/repos/eiriksm/eiriksm.dev-comments/issues/" +
          this.props.data.nodeArticle.field_issue_comment_id +
          "/comments"
      )
      .then(async response => {
        let json = await response.json()
        // Now we have the comments, do some more work mixing them with the stored ones, 
        // and render.
      })
  }
}

Step 5 (bonus): Substitute with Gitlab

Ressa asked in a (Github) comment on my last blogpost about doing this comment setup with Gitlab:

Do you know if something like this is possible with Gitlab?

Fair question. Gitlab is an alternative to Github, but differs in that the software it uses is itself Open Source, and you can self-host your own version of Gitlab. So let's see if we can substitute the moving parts with Gitlab somehow? Well, turns out we can!

The first parts of the project is the same: Create a field, have an account on Gitlab, create a repo for the comments. Then, to source the comment nodes, all I had to change was this:

--- a/gatsby-node.js
+++ b/gatsby-node.js
@@ -125,15 +125,25 @@ exports.sourceNodes = async({ actions, createNodeId, createContentDigest }) => {
           issueId,
           comments: []
         }
-        let url = `https://api.github.com/repos/eiriksm/eiriksm.dev-comments/issues/${issueId}/comments`
-        const githubToken = process.env.GITHUB_TOKEN
+        let url = `https://gitlab.com/api/v4/projects/eiriksm%2Feiriksm.dev-comments/issues/${issueId}/notes`
+        const githubToken = process.env.GITLAB_TOKEN
         let githubData = await fetch(url, {
           headers: new fetch.Headers({
-            "Authorization": `Basic ${new Buffer(`eiriksm:${githubToken}`).toString('base64')}`
+            "PRIVATE-TOKEN": githubToken
           })
         })
         let githubJson = await githubData.json()
+        if (!githubJson[0]) {
+          return
+        }
         myData.comments = githubJson
+        // Normalize it slightly.
+        myData.comments = myData.comments.map(comment => {
+          comment.user = {
+            login: comment.author.username
+          }
+          return comment
+        })
         let nodeMeta = {
           id: createNodeId(`github-comments-${myData.drupalId}`),
           parent: null,

As you might see, I took a shortcut and re-used some variables, meaning the graphql is querying against GithubComment, which is a bad name for a bunch of Gitlab comments. But I think cleaning those things up would be an exercise for the reader. The last part, with client side updates, was not possible to do in a similar way. The API for that requires authenticating the request, much like the sourcing code above. But while the sourcing happens at build time, client side updates happen in the browser, which would mean that for this to work, there would be a secret token in your JavaScript for everyone to see. So that is not a good option. But it is indeed theoretically possible, and could for example be achieved with a small proxy server or a serverless function.

The even cooler part is that the exact same code also works for a self-hosted Gitlab instance. So you can indeed run open source software and own your own data with this approach. That being said, not using a server was one of the requirements for me, so that would defeat the purpose. In such a way, one might as well use Drupal as a headless comment storage.

I would still like to finish with it all working in Gitlab too though. In an animated gif, as I always end my posts. Even though you are not allowed to test it yourself (since, you know, the token is exposed in the JavaScript), I think it serves as an illustration. And feel free to (Github) comment if you have any questions or feel I left something out.

Mar 15 2020

Very recently I relaunched this blog using Gatsby.js, which is in the category of static page generators. Having comments on a static webpage is a common requirement, and a popular way to do so is to use a third party service, like Disqus.

I have used Disqus on my blog for a long, long time. The first time I went from using Drupal comments to using Disqus was when I migrated to Drupal 8 (in July 2014, while Drupal 8 was in an alpha version). I could be mentioning this to sound very distinguished, but the main reason I am mentioning it is this: back then, for security reasons, I was running Drupal 8 more or less like a static page, since I explicitly disallowed PHP sessions. This meant it was not possible to use the Drupal comment system. After migrating to Gatsby.js it was only natural for me to keep using Disqus, since all the comments were already there, and it still works with the "static site builder" Gatsby.js.

What changed?

There are several trade-offs with using a third-party solution like Disqus. One of them is privacy, and another is not owning your own data. For example, you have no logs. So while I have comments on my blog, and many posts have comments on them, I have no insight into how many people are trying to add comments but failing. This was the case on my blog post on Geography in web page performance. Shortly after I published it, I got an email from a fellow Drupal community member, ressa. The email was related to the blog post, so I asked him "Why did you not submit a comment?". After all, questions and answers could potentially benefit others. Well, the reason was that he had tried to comment, but had to give up after fighting captcha pictures from Disqus for several minutes. And although I had thought about it before, this was the final push to look into Disqus alternatives.

Requirements

Keeping the comments and not starting from scratch

The first requirement was that I should be able to keep all of the comments from Disqus, even when using a new platform. This can either be done by migrating the comment data to a new platform, or by "archiving" Disqus comments and embedding them in my pages. With that thought process in place, let's go ahead and look at technical solutions.

Technical options and their consequences

My preferred option would be to use something open source, but not host it myself. I tried out Commento, which is open source with a hosted version. To be honest, it seems like a very good option, but it is not free (free as in $0). I could host it myself for free, but that would require me to maintain the server for it. They also provide an import of Disqus comments, which would satisfy my requirement to keep the existing comments. In the end, I decided to not go with this since I had to either pay for it or host myself.

Since self-hosting was out of the picture I then went in another rather radical direction: Commenting through Drupal.org(!). Drupal.org provides an API, so in theory that should for sure work. Instead of repeating it here, I will now just post a quote from my email to the Drupal infra team asking for permission:

I want to replace disqus as comment provider on my blog, so I had this idea of letting every blog post be an issue on a project on d.o, and you would comment there to get it to show up as a comment on the blog post. This way I would get account creation, spam prevention and email alerts "for free". I could theoretically also incentivize commenting by handing out issue credits for valuable comments (which in my opinion serves as community contribution anyway).

So there you have it. I would get loads of stuff "for free". This seems like a great deal for me. So I guess I am asking kindly for permission technically to do this, as well as asking if that would be an OK use of issue credits?

As you probably can see from the quote, I felt I had a fantastic idea for myself, but realized that this might not be the intended use of Drupal.org. After a great conversation with Tim Lehnen (CTO at the Drupal Association) we both agreed that this is a bit outside the intended use of a community project, also considering the members and supporting partners would be paying for parts of my blog infrastructure. Although this was not the option I went for, the option would make it possible to not self-host. And Drupal.org is open source. However I would not own my own data (which technically would be the same as Disqus). I also would not be able to import the comment history into the new platform.

Now that I was already down the road of using issue comments, my next step was to research Github as the comment hosting provider. Also here I would get account creation, spam prevention and email alerts "for free". In addition one can lock a thread for more comments.

My initial research led me to many other people doing the same thing. So the idea is for sure not new. There is even a service wrapping the whole thing called utteranc.es (adding embedded Github login and auto creation of issues when configured). Again, this would mean not owning my own data. And not being able to import comments into a new platform.

Reflections on choices available

Thinking about how your comments are stored is an interesting exercise. Ideally, I would want them stored where I own them. I would want to control the method of storing them (for example using an open source CMS). I would also want to control the policy and thinking behind privacy and moderation. If I saved my comments with a third-party provider, they would be able to delete all of my comments if they disagreed with me somehow. Or they could in an intrusive or abusive way compromise the privacy of users wanting to comment on my blog. These are real questions to ask.

Even with that in the back of my mind, I ended up moving to comments via Github. The convenience of storage, spam prevention and email alerts trumped the need for freedom, at least for now. To be fair, Github already hosts the code for this blog, and runs the deployments of the blog, so the trust is already there. I did however do a couple of twists to make things smoother, safer and more SEO friendly.

Short technical walkthrough

I promise I will write a longer blog post on this (with code examples), but the current set-up looks something like this (edit: I have since published this promised technical write up):

I wanted to keep my Disqus comments, but as mentioned, I can not import Disqus into github issue comments. But, Disqus provides a dump of all my data. Fortunately there is also a Gatsby source plugin for this format, called gatsby-source-disqus-xml. Using this approach has the added bonus of Disqus comments now being printed on all pages (instead of loaded with JavaScript), so they are now indexable and searchable through Google! Quick SEO win!

I wanted the new comment system to be transparently appended to archived comments. So I also import issue comments for blog posts using the same method. Basically that involves writing a quick source plugin using Gatsby's Node APIs.

Comment data is now more or less the same, no matter where it first appeared (Github or Disqus), so I can use the same React component to render imported comments from Disqus alongside the new (dynamic) comments from Github. And so now it can look like this:

Disqus and Github side by side

I also wanted comments to feel real-time. Since they are rendered through building the codebase, I would not want to wait for a full build when people are posting comments. They should be available right away! But since Gatsby has a client-side layer as well, I can update the state of the blog post client side. To make this philosophy a bit more quote friendly:  "it's built eventually, but visible immediately".

Expanding on this, I also plan to be able to have a copy of the comments stored somewhere else than Github. But that sounds like another blog post, and this journey has already made the blog post long enough. Let's instead finish off with an animated GIF of the comment system in action. And of course, if you want to test it yourself, feel free to comment on this post!

Mar 03 2020

If you have made a couple of migrations for Drupal 8, you might have encountered the fact that at some point you actually forgot something, or maybe your migration actually had a bug. But you already launched the site, so doing a new migration is out of the question. What do you do? Well, an update hook can save your day.

So, you might go ahead and write that update hook. And this update hook might want to do some queries against the old database. How do you then write MySQL queries against the database you have defined as the migrate database? After I first figured this out, I found myself searching for the code snippets where I did it, so I decided to put this in a blog post as a reference for myself.

A pretty common way of having your databases defined in such cases would be like this:

$databases['default']['default'] = array (
  'database' => 'my_database',
  'username' => 'user',
  'password' => 'pass',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);
$databases['migrate']['default'] = array (
  'database' => 'my_old_database',
  'username' => 'user',
  'password' => 'pass',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);

What we want to do in our update hook is to get the connection information for the one with the ‘migrate’ key, so we can do queries in the migrate database. And we want to do it properly, right? Not just hardcode credentials and use any old sql client?

So what I usually do is this:

// The 2 arguments should correspond to the database target and key. The key in
// our example is 'migrate' and the target is 'default'.
/** @var \Drupal\Core\Database\Connection $db */
$db = \Drupal\Core\Database\Database::getConnection('default', 'migrate');
// Db is now a connection, the same as you would get if you would use the
// service "database". You can proceed to call methods on it like this:
$gifs = $db->select('file_managed', 'fm')
  ->fields('fm')
  ->condition('fm.filemime', 'image/gif')
  ->execute();
foreach ($gifs as $gif) {
  // Do something with the gif.
}
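For completeness, here is a sketch of how that snippet could sit inside an actual update hook, assuming a hypothetical custom module called my_module (the update number is just an example):

/**
 * Fix something the migration missed, using data from the old database.
 */
function my_module_update_8001() {
  $db = \Drupal\Core\Database\Database::getConnection('default', 'migrate');
  $gifs = $db->select('file_managed', 'fm')
    ->fields('fm')
    ->condition('fm.filemime', 'image/gif')
    ->execute();
  foreach ($gifs as $gif) {
    // Do something with the gif.
  }
}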

If you find yourself using this a lot, you can also create a service for it. You can use this pattern to define the service:

my_module.migrate_db:
  class: Drupal\Core\Database\Connection
  factory: Drupal\Core\Database\Database::getConnection
  # The 2 arguments should correspond to the database target and key. The key
  # in our example is 'migrate' and the target is 'default'.
  arguments: [default, migrate]

And then you can use it like this:

/** @var \Drupal\Core\Database\Connection $db */
$db = \Drupal::service('my_module.migrate_db');
// Db is now a connection, the same as you would get if you would use the
// service "database". You can proceed to call methods on it like this:
$gifs = $db->select('file_managed', 'fm')
  ->fields('fm')
  ->condition('fm.filemime', 'image/gif')
  ->execute();
foreach ($gifs as $gif) {
  // Do something with the gif.
}

So those are two ways you can get the connection for your migrate database. Hope that helps someone out there!

To celebrate, let's have a look at an animated gif called "database".

Feb 22 2020

Here is a quick tip if you want to create a step definition that has an argument with multiple lines. A multiline string argument if you like.

I wanted to test that an email was sent, with a specific subject, to a specific person, and containing a specific body text.

My idea was to create a step definition that looked something like this:

Then an email has been sent to "[email protected]" with the subject "Subject example" and the body "one of the lines in the body

plus this is the other line of the body, after an additional line break"

So basically my full file is now this:

@api @test-feature
Feature: Test this feature
  Scenario: I can use this definition
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body "one of the lines in the body

    plus this is the other line of the body, after an additional line break"

My step definition looks like this:

  /**
   * @Then an email has been sent to :email with the subject :subject and the body :body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody($email, $subject, $body) 
  {
      throw new PendingException();
  }

Let’s try to run that.

$ ./vendor/bin/behat --tags=test-feature

In Parser.php line 393:
                                                                                                                                                    
  Expected Step, but got text: "    plus this is the other line of the body, after an additional line break”" in file: tests/features/test.feature  

Doing it that way simply does not work. You see, by default a line break in the Gherkin DSL has an actual meaning, so you cannot do a line break in your argument and expect it to just pass along everything up until the closing quote. What we actually want is to use a PyString. But how do we use them, and how do we define a step to receive one? Let's start by converting our scenario to use the PyString multiline syntax:

@api @test-feature
Feature: Test this feature
  Scenario: I can use this definition
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body
    """
    one of the lines in the body

    plus this is the other line of the body, after an additional line break
    """

Now let’s try to run it:

$ ./vendor/bin/behat --tags=test-feature                                                                                        
@api @test-feature
Feature: Test this feature

  Scenario: I can use this definition                                                                 # tests/features/test.feature:3
    Then an email has been sent to "[email protected]" with the subject "Subject example" and the body
      """
      one of the lines in the body
      
      plus this is the other line of the body, after an additional line break
      """

1 scenario (1 undefined)
1 step (1 undefined)
0m0.45s (32.44Mb)

 >> default suite has undefined steps. Please choose the context to generate snippets:

A bit closer. Our output actually tells us that we have a missing step definition, and suggests how to define it. That’s better. Let’s try the suggestion from the output, now defining our step like this:

  /**
   * @Then an email has been sent to :email with the subject :subject and the body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody2($email, $subject, PyStringNode $string)
  {
      throw new PendingException();
  }

The difference here is that we do not add the variable name for the body in the annotation, and we specify that we want a PyStringNode type parameter last. This way behat will know (tm).
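One small detail that is easy to forget: for that type hint to resolve, the context class needs to import the PyStringNode class (and PendingException, if you keep the generated snippet):

use Behat\Behat\Tester\Exception\PendingException;
use Behat\Gherkin\Node\PyStringNode;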

After running the behat command again, we can finally use the step definition. Let's have a look at how we can use the PyString class.

  /**
   * @Then an email has been sent to :email with the subject :subject and the body
   */
  public function anEmailHasBeenSentToWithTheSubjectAndTheBody2($email, $subject, PyStringNode $string)
  {
      // This is just an example.
      $mails = $this->getEmailsSomehow();
      // This is now the important part, you get the raw string from the PyStringNode class.
      $body_string = $string->getRaw();
      foreach ($mails as $item) {
          // Still just an example, but you probably get the point?
          if ($item['to'] == $email && $item['subject'] == $subject && strpos($item['body'], $body_string) !== FALSE) {
              return;
          }
      }
      throw new \Exception('The mail was not found');
  }

And that about wraps it up. Writing tests is fun, right? As a bonus, here is an animated gif called "Testing".

Feb 08 2020
Feb 08

New year, new possibilities, as we say in Norway. Which is why I have relaunched my blog using Gatsby.js. I could write a blog post about that, but I am not going to do that today. There are a lot of tutorials on how to set up Gatsby with Drupal (one of my personal favorites is this one from Lullabot), and there is even an official page in the Gatsby.js documentation.

I could probably write many blog posts about different aspects I tweaked and looked at in the migration, but one field I feel is not often talked about is geography and performance.

With regard to servers, many Drupal sites (at least basic ones) are geographically limited by the actual server that serves the web requests, and by the location of that particular server. With a static site, made for example with Gatsby.js, you can deploy it to a Content Delivery Network (CDN) and have the same static HTML files on servers all around the world. This means that a visitor from Tokyo to your static site could get a response from a server in Tokyo. The traditional Drupal site, however, might be on a server in Ireland, and a visitor from Tokyo would then have to send their request all around the world to get a response.

This idea is not very new. In fact, there are several providers that let you deploy your static site on their CDN for free (more or less). They will then serve your static HTML from different parts of the world, depending on the visitor. What a world to live in. But instead of comparing their services and the user experience of deploying, I decided to compare them by how performant they are from all parts of the world. A geographic performance cup if you like.

The competitors in the cup are:

  • Surge.sh
  • Zeit Now
  • Netlify
  • S3 with Cloudfront (a service from Amazon Web Services - AWS)

Instead of doing a very long analysis, let's just start with the results!

CDN ping

The fastest service is S3 with Cloudfront. S3 is a static file storage, and Cloudfront is the CDN service from Amazon.

In the same way I could write many things about my migration to Gatsby, I could also speculate and write many things about this result. Instead I want to just show some animated gifs about interesting aspects of the different geography results for the providers. I am going to do them in reverse order, best result last.

Fourth place: Surge.sh:

Surge.sh geography

Third place: Netlify:

Netlify geography

Then, slightly behind on second place, Zeit Now:

Zeit now geography

Lastly, the winner, here is AWS S3 with Cloudfront:

S3 with Cloudfront geography

Conclusions and reflections

The numbers are one thing, but let's talk a bit about their significance. The tests were performed from AWS datacenters, and the two services scoring highest either are an AWS service (S3/Cloudfront) or use AWS for their service (Zeit Now). So the actual numbers do not necessarily mean that Netlify is 144% slower than S3/Cloudfront. It also does not mean I think any of these services have been proven to be better or worse than the others.

I think it means that now that we are able to serve static HTML pages for our blogs or websites in a somewhat dynamic way, we can make the performance more democratic and less unfair. I don't want to discriminate readers of my blog based on their location (or anything else for that matter). Performance matters, but performance also differs from different parts of the world.

I guess what I am trying to say is: Let's make the world a better place by thinking about everyone that lives there, no matter where they live. So I will finish this post with an animated gif about just that. The world.

Apr 16 2019
Apr 16

Last week Drupalcon North America was held in Seattle, where Dries opened the conference with the traditional "Driesnote". In the presentation, Dries talked about automated updates for Drupal, a thing I am very passionate about myself. He then went on to say:

I hope that in Drupalcon Amsterdam...in six months… I will be able to stand on stage and actually give some sort of demo. That would be my goal. So obviously… I would need your help with that… but that would be fantastic.

This triggered a thought: with the tools we have today, and with composer support being fairly decent, we can actually demo this somewhat utopian goal now. Which is why I made a video of just that. So without further ado: here is the demo!

[embedded content]

Automatic updates and the future

This demo demonstrates tools that are available now. Instead of only focusing on those tools (and promoting violinist.io), I want to expand on the subject of automated updates in Drupal, and its future.

Like Dries, I consider automatic updates an important issue. Not only because I promote running automated composer updates with violinist.io, but because having these features will make Drupal more viable for hobbyists and beginners. This in turn can help us reach more users and increase our diversity. Consequently, having automated updates is important for the “non-professional” group of Drupal users.

In the presentation, Dries also points out some important positive effects of having automated updates. First, it will help secure your site (or your client's site) when time-critical security updates are released. Second, it will make it easier for organizations and agencies to maintain websites. This means that having automated updates is important for the “professional” group of Drupal users as well.

This brings us to the next segment, which mostly applies to agencies and organizations using Drupal professionally. Two issues are often raised about having automated updates. First, that moving and/or overwriting the files of your codebase is a security risk. Second, that updating files on your live server can be a no-go. Maybe you have a version control system in place. Or perhaps you have a continuous integration/deployment pipeline. Or, you have setups that deploy their codebase to multiple front-servers. The two issues are valid concerns, but usually they are less of a concern to "non-professional" users. This implies that having automated updates is important for the “professional” AND “non-professional”, but the preferred implementation for these two groups might conflict.

In my personal opinion, we can support both. Let me explain how.

My suggestion is that we can have a system in place that is pluggable in all parts. This means that the "non-professional" can use plugins that are useful for updating via the file system, and hopefully set-and-forget the automatic updates for their site. It also means that the "professional", the one with the pipeline and version control, could have entirely different plugins for updates. To get back to the video above, a pre-flight check (which is how we determine if we can update) can mean checking for pull requests for the available update, and checking that the tests pass. The plugin for updating could simply merge the pull request, since automated deploys are run for the master branch. Now, different teams have different requirements, but this means that you could use a Pantheon or Platform.sh plugin for automated updates. Or, maybe you have a custom plugin for your team that you use across projects.

I believe this feature can help automation for "professional" teams of all sizes, and make the very same system usable for "non-professionals" that want to set-and-forget. This is also why I believe having automated updates in core is in no way in conflict with doing automated updates like in the video above. It will only complement the toolbox we currently have!

If you want to read more about the Automatic Updates initiative, there is more info here. It is an exciting future, and we can all help out in making the future more user-friendly and secure. I know I will!

Feb 28 2019
Feb 28

“Oh snap”, said the project manager. “The client has this whole range of rich articles they probably are expecting to still work after the migration!”

The project was a relaunch of a Drupal / Commerce 1 site, redone for Drupal 8 and Commerce 2. A couple of weeks before the relaunch, and literally days before the client was allowed in to see the staging site, we found out we had forgotten a whole range of rich articles where the client had carefully crafted landing pages, campaign pages and “inspiration” pages (this is an interior type of store). The pages were panel nodes, and they had a handful of different panel panes (all custom).

In the new site we had made Layout builder available to make such pages.

We had 2 options:

  • Redo all of them manually with copy paste.
  • Migrate panel nodes into layout builder enabled nodes.

“Is that even possible?”, said the project manager.

Well, we'll just have to try, won't we?

Creating the destination node type

First off, I went ahead and created a new node type called “inspiration page”. And then I enabled layout builder for individual entities for this node type.

Now I was able to create “inspiration page” landing pages. Great!

Creating the migration

Next, I went ahead and wrote a migration plugin for the panel nodes. It ended up looking like this:

id: mymodule_inspiration
label: mymodule inspiration
migration_group: mymodule_migrate
migration_tags:
  - mymodule
source:
  # This is the source plugin, that we will create.
  plugin: mymodule_inspiration
  track_changes: TRUE
  # This is the key in the database array.
  key: d7
  # This means something to the d7_node plugin, that we inherit from.
  node_type: panel
  # This is used to create a path (not covered in this article).
  constants:
    slash: '/'
process:
  type:
    plugin: default_value
    # This is the destination node type
    default_value: inspiration_page
  # Copy over some values
  title: title
  changed: changed
  created: created
  # This is the important part!
  layout_builder__layout: layout
  path:
    plugin: concat
    source:
      - constants/slash
      - path
destination:
  plugin: entity:node
  # This is the destination node type
  default_bundle: inspiration_page
dependencies:
  enforced:
    module:
      - mymodule_migrate

As mentioned in the annotated configuration, we need a custom source plugin for this. So, let’s take a look at how we make that:

Creating the migration plugin

If you have a module called “mymodule”, you create a folder structure like so, inside it (just like other plugins):

src/Plugin/migrate/source

And let’s go ahead and create the “Inspiration” plugin, a file called Inspiration.php:

<?php

namespace Drupal\mymodule\Plugin\migrate\source;

use Drupal\Component\Uuid\UuidInterface;
use Drupal\Core\Entity\EntityManagerInterface;
use Drupal\Core\Extension\ModuleHandlerInterface;
use Drupal\Core\State\StateInterface;
use Drupal\layout_builder\Section;
use Drupal\layout_builder\SectionComponent;
use Drupal\migrate\Plugin\MigrationInterface;
use Drupal\migrate\Row;
use Drupal\node\Plugin\migrate\source\d7\Node;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Source plugin for the inspiration panel nodes.
 *
 * @MigrateSource(
 *   id = "mymodule_inspiration"
 * )
 */
class Inspiration extends Node {

  /**
   * The UUID service, used to generate UUIDs for the section components.
   *
   * @var \Drupal\Component\Uuid\UuidInterface
   */
  protected $uuid;

  /**
   * {@inheritdoc}
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration, StateInterface $state, EntityManagerInterface $entity_manager, ModuleHandlerInterface $module_handler, UuidInterface $uuid) {
    parent::__construct($configuration, $plugin_id, $plugin_definition, $migration, $state, $entity_manager, $module_handler);
    $this->uuid = $uuid;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration = NULL) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $migration,
      $container->get('state'),
      $container->get('entity.manager'),
      $container->get('module_handler'),
      $container->get('uuid')
    );
  }

}

Ok, so this is the setup for the plugin. For this specific migration, there were some weird conditions for which of the panel nodes were actually inspiration pages. If I copy-pasted the query here, you would think I was insane, but for now I can just mention that we were overriding the public query() method. You may or may not need to do the same.
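Just to illustrate the mechanics (the real conditions were a lot messier), a simplified, hypothetical override of that query could look something like this, where we only pick nodes that actually have a panels display attached:

  /**
   * {@inheritdoc}
   */
  public function query() {
    $query = parent::query();
    // Hypothetical condition: only include nodes that have a row in the
    // panels_node table, meaning they actually have a panels display.
    $query->innerJoin('panels_node', 'pn', 'pn.nid = n.nid');
    return $query;
  }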

So, after getting the query right, we are going to do some work inside of the prepareRow function:

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $result = parent::prepareRow($row);
    if (!$result) {
      return $result;
    }
    // Get the panels display id (did) for this nid.
    $did = $this->select('panels_node', 'pn')
      ->fields('pn', ['did'])
      ->condition('pn.nid', $row->getSourceProperty('nid'))
      ->execute()
      ->fetchField();
    // Find all the panel panes.
    $panes = $this->getPanelPanes($did);
    $sections = [];
    $section = new Section('layout_onecol');
    $sections[] = $section;
    foreach ($panes as $delta => $pane) {
      if (!$components = $this->getComponents($pane)) {
        // You must decide what you want to do when a panel pane can not be
        // converted.
        continue;
      }
      // Here we used to have some code dealing with changing section if this
      // and that. You may or may not need this.
      foreach ($components as $component) {
        $section->appendComponent($component);
      }
    }
    $row->setSourceProperty('layout', $sections);
    // Don't forget to migrate the "path" part. This is left out for this
    // article.
    return $result;
  }

Now you may notice there are some helper methods there. They look something like this:

  /**
   * Helper.
   */
  protected function getPanelPanes($did) {
    $q = $this->select('panels_pane', 'pp');
    $q->fields('pp');
    $q->condition('pp.did', $did);
    $q->orderBy('pp.position');
    return $q->execute();
  }

  /**
   * Helper to get components back, based on pane configuration.
   */
  protected function getComponents($pane) {
    $configuration = @unserialize($pane["configuration"]);
    if (empty($configuration)) {
      return FALSE;
    }
    $region = 'content';
    // Here would be the different conversions between panel panes and blocks.
    // This would be very varying based on the panes, but here is one simple
    // example:
    switch ($pane['type']) {
      case 'custom':
        // This is the block plugin id.
        $plugin_id = 'my_custom_content_block';
        $component = new SectionComponent($this->uuid->generate(), $region, [
          'id' => $plugin_id,
          // This is the title of the block.
          'title' => $configuration['title'],
          // The following are configuration options for this block.
          'image' => '',
          'text' => [
            // These values come from the configuration of the panel pane.
            'value' => $configuration["body"],
            'format' => 'full_html',
          ],
          'url' => $configuration["url"],
        ]);
        return [$component];

      default:
        return FALSE;
    }
  }
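With the source plugin and the migration configuration in place, running the migration is just a matter of using your migration runner of choice. Assuming you have drush and the migrate_tools module available, that would be something like this:

$ drush migrate:import mymodule_inspiration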

So there you have it! Since we now have amazing tools in Drupal 8 (namely Layout builder and Migrate), there is no task that deserves the question “Is that even possible?”.

To finish off, let's have an animated gif called "inspiration". And I hope this will give some inspiration to other people migrating landing pages into layout builder.

Oct 29 2018
Oct 29

In just a few weeks the Norwegian Drupal association will host the annual Drupalcamp Oslo (9th-10th of November). If you have not already booked your tickets, now is the time!

Great featured speakers

We are very pleased with our program this year. In addition to the rest of the program, we are proud of our invited featured speakers:

Senior technical architect justafish from Lullabot is coming to speak about the JavaScript modernization initiative! If you are not already aware of the work going on in core in this area, don't miss this opportunity to get a first-hand view of the exciting progress!

CEO and co-founder of 1xINTERNET baddysonja is giving a session about how "Drupal is full of opportunities". Come and get inspired about the Drupal ecosystem, with a focus on contribution and volunteering!

Also joining us is security team member Stella Power, Managing Director and founder of Annertech.

Open source in the public sector

But not only that: The first half of Friday will be dedicated to the subject "open source in the public sector". This segment will be free for everyone to attend, and aims to bring attention to the subject, especially in Norway, where we still have a way to go in this area (my own subjective opinion). It will feature national and international case studies, as well as Jeffrey A. “jam” McGuire talking about international trends.

What are you waiting for?

The preliminary program is available here, and we still have early bird tickets for just a few days more.

Welcome everyone! See you there!

Jun 07 2018
Jun 07

Today I encountered a problem I did not think about earlier. After I pushed a fix to a project I am working on, the CI builds started showing errors. And the problem was coming from a message like this:

The service "mymodule.attachments_manager" has a dependency on a non-existent service "metatag.manager".

In many cases when you see that, it probably means your module was installed before a module it depends on. For example, in this case, it would seem that this module depends on metatag, and so declaring it as a dependency would fix the issue. And for sure, it would. But sometimes dependencies are not black and white.

This particular service does some handling of the attachments when used together with metatag. It does so because it is a module we use across projects, and we cannot be sure metatag is used in any given project. So it's only used in a way that is something like this:

/**
 * Implements hook_page_attachments().
 */
function mymodule_page_attachments(array &$attachments) {
  if (!\Drupal::moduleHandler()->moduleExists('metatag')) {
    return;
  }
  \Drupal::service('mymodule.attachments_manager')->handlePageAttachments($attachments);
}

Now, what this means is that for the attachments manager to be useful, we need metatag. If we do not have metatag, we do not even need this service. So basically, the service depends on metatag (as it uses the service metatag.manager), but the module does not (as it does not even need its own service if metatag is not installed).

Now, there are several ways you could go about fixing this for a given project. Creating a new module that depends on metatag could be one way. But today, let's look at how we can make this service have an optional dependency on another service.

At first the service definition looked like this:

mymodule.attachments_manager:
  class: Drupal\mymodule\MyModuleAttachmentsManager
  arguments: ['@current_route_match', '@module_handler', '@metatag.manager', '@entity.repository']

This would construct a class instance of MyModuleAttachmentsManager with the following constructor signature:

public function __construct(RouteMatchInterface $route_match, 
  ModuleHandlerInterface $module_handler, 
  MetatagManager $metatag_manager, 
  EntityRepositoryInterface $entity_repo) {
}

Now, this could never work if this module was installed before metatag (which it very well could be, since it does not depend on it). A solution then would be to make the metatag.manager service optional, which is something we can do by removing it from the constructor and creating a setter for it.

public function __construct(RouteMatchInterface $route_match, 
  ModuleHandlerInterface $module_handler, 
  EntityRepositoryInterface $entity_repo) {
  // Constructor code.
}

/**
 * Set meta tag manager.
 *
 * @param \Drupal\metatag\MetatagManager $metatagManager
 *   Meta tag manager.
 */
public function setMetatagManager(MetatagManager $metatagManager) {
  $this->metatagManager = $metatagManager;
}

OK, so far so good. It can now be constructed without having a MetatagManager. But how do we communicate this in the service definition? It turns out the documentation for the Symfony service container has the answer.

When you define a service, you can specify method calls to be made when the service is created. By referencing the other service with the @? prefix, the reference becomes optional, and the call is simply skipped if that service does not exist. Like so:

mymodule.attachments_manager:
  class: Drupal\mymodule\MyModuleAttachmentsManager
  arguments: ['@current_route_match', '@module_handler', '@entity.repository']
  calls:
    - [setMetatagManager, ['@?metatag.manager']]

So there we have it! The service can be instantiated without relying on the metatag.manager service. And if it is available in the service container, the method setMetatagManager will be called with the service, and our service will have it available in the cases where we need it.
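One thing to keep in mind with this approach is that the property might never get set (when metatag is not installed), so the code using it should check for that. A rough sketch of how the handlePageAttachments method could guard itself:

public function handlePageAttachments(array &$attachments) {
  if (!$this->metatagManager) {
    // Metatag is not installed, so there is nothing for us to do.
    return;
  }
  // ...the actual attachment handling, using $this->metatagManager.
}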

Now let's finish off with an animated gif related to "service container".

Apr 19 2018
Apr 19

In many cases, a route name is something you might need to get something done in Drupal 8. For example to generate a link, or maybe create a local task in your module.

Some examples:

The path can be found in a routing.yml file

Sometimes you can just search for the path in your codebase, and then find the corresponding route name. Let's say that I wanted to link to the page mysite.com/admin/config/regional/translate. If I just search the codebase for this path, it would be revealed in the file locale.routing.yml:

locale.translate_page:
  path: '/admin/config/regional/translate'
  defaults:
    _controller: '\Drupal\locale\Controller\LocaleController::translatePage'
    _title: 'User interface translation'
  requirements:
    _permission: 'translate interface'

To link to this page using the API for generating links, I would then do something like this:

$link = Link::fromTextAndUrl(t('Translate interface'), Url::fromRoute('locale.translate_page'));

So to conclude, the route name for that particular page is the key in the routing file, in this case locale.translate_page.
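For completeness, this assumes the two classes used in that snippet are imported at the top of the file where you generate the link:

use Drupal\Core\Link;
use Drupal\Core\Url;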

The path can not be found in a routing.yml file

Now, this is what I really wanted to write about in this blog post. Getting the route name directly from a routing file is simple enough, but where do you look if the path can not be found in a routing file?

Find route name with PHPStorm

My first trick is to utilize the IDE I use, PHPStorm.

Start by setting a breakpoint in index.php on the line that looks like this:

$response->send();

Next step, refresh your browser on the page you want to know the route name for, and hopefully trigger your breakpoint. Then you click on the icon for "evaluate expression". On my work computer this has the shortcut key alt-f8, but you can also find it in the debugger toolbar, or via the menu (Run -> Evaluate expression).

Then evaluate the following code:

\Drupal::routeMatch()->getRouteName()

That should give you the name of the route. As illustrated below in a gif:

Evaluate expression

Find route name with any development environment

Now, I realize that not everyone uses PHPStorm, so here is one solution that should work without having xdebug and an IDE set up:

Following the same tactic as above, let's open up index.php again. Now, just change the following code:

 $response = $kernel->handle($request);
+print_r(\Drupal::routeMatch()->getRouteName());
 $response->send();

The difference here is adding the line with print_r.

Now visit the page you want to know the route name for. This will print the name of the route as the very first output of your Drupal site. Since you probably do not want this for your live Drupal site, this is best done on a development copy.

Find route name with Drupal Console

Drupal Console maintainer Jesus Manuel Olivas pointed out in the comments a rather cool way to browse the router list of routes:

drupal debug:router | peco | awk '{print $1}' | xargs drupal debug:router

This requires you to have Peco and Drupal Console installed. You can look at the animation of it in use here.

Other options

You can also use the webprofiler module, which is part of the devel module. This may or may not involve more steps than necessary, depending on your project. But to be fair, that is also an option.

To finish off, here is an animated gif in the category "route". Let me know your tips and tricks in the comments!

Mar 02 2018
Mar 02

If you are like me, you might have already started planning the upgrade to Drupal 8.5, now that the first release candidate is out. It's awesome, by the way, among other things thanks to the incredible work done with layout builder. And if you are more like me, you are managing your sites with composer. Then, depending on the rest of your project, you might (also like me) have encountered some initial problems upgrading to Drupal 8.5.

Having hit my fair share of composer oddities while running the Violinist.io composer monitoring and upgrade service, I wanted to compile a couple of error messages along with solutions, for the folks out there struggling with this.

Installation request for webflo/drupal-core-require-dev (locked at 8.4.5, required as ~8.4) -> satisfiable by webflo/drupal-core-require-dev[8.4.5].

If you have installed an out of the box version of https://github.com/drupal-composer/drupal-project, this might be an error message you encounter. Full error message, for reference:

./composer.json has been updated
> DrupalProject\composer\ScriptHandler::checkComposerVersion
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - webflo/drupal-core-require-dev 8.4.5 requires drupal/core 8.4.5 -> satisfiable by drupal/core[8.4.5] but these conflict with your requirements or minimum-stability.
    - webflo/drupal-core-require-dev 8.4.5 requires drupal/core 8.4.5 -> satisfiable by drupal/core[8.4.5] but these conflict with your requirements or minimum-stability.
    - webflo/drupal-core-require-dev 8.4.5 requires drupal/core 8.4.5 -> satisfiable by drupal/core[8.4.5] but these conflict with your requirements or minimum-stability.
    - Installation request for webflo/drupal-core-require-dev (locked at 8.4.5, required as ~8.4) -> satisfiable by webflo/drupal-core-require-dev[8.4.5].


Installation failed, reverting ./composer.json to its original content.

The reason this fails is that the project you have created is depending on the dev packages for drupal core, which are tied to a specific version of core. So to update core, we also need to update the dev packages for core.

The solution to this is pretty simple:
Open your composer.json file and replace the lines for drupal/core and webflo/drupal-core-require-dev with the following:

"drupal/core": "~8.5"
// ...and
"webflo/drupal-core-require-dev": "~8.5"

Afterwards you can go ahead and run:

composer update drupal/core webflo/drupal-core-require-dev --with-dependencies

Edit: balsama pointed out in the comments that you can also run the command composer require drupal/core:~8.5.0 webflo/drupal-core-require-dev:~8.5.0 --no-update. However, this would move the package webflo/drupal-core-require-dev to "require" instead of "require-dev", which is probably not what you want. You could of course do a similar thing in 2 commands (one for require and one for require-dev), which would yield a similar result as updating by hand.

Installation request for symfony/config (locked at v3.2.14) -> satisfiable by symfony/config[v3.2.14].

This probably comes from the fact that you also have some other packages depending on this specific Symfony package in your project. Like drush or drupal console. This would not be a problem in itself, were it not for the fact that drupal/core 8.5 relies on the package symfony/dependency-injection that has specifically listed "symfony/config": "<3.3.7", as a conflict. Here is a full error message, for reference:

Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - Conclusion: don't install drupal/core 8.5.0-rc1
    - Conclusion: don't install drupal/core 8.5.0-beta1
    - Conclusion: don't install drupal/core 8.5.0-alpha1
    - Conclusion: don't install drupal/core 8.6.x-dev
    - Conclusion: remove symfony/config v3.2.14
    - Installation request for drupal/core ~8.5 -> satisfiable by drupal/core[8.5.0-alpha1, 8.5.0-beta1, 8.5.0-rc1, 8.5.x-dev, 8.6.x-dev].
    - Conclusion: don't install symfony/config v3.2.14
    - drupal/core 8.5.x-dev requires symfony/dependency-injection ~3.4.0 -> satisfiable by symfony/dependency-injection[3.4.x-dev, v3.4.0, v3.4.0-BETA1, v3.4.0-BETA2, v3.4.0-BETA3, v3.4.0-BETA4, v3.4.0-RC1, v3.4.0-RC2, v3.4.1, v3.4.2, v3.4.3, v3.4.4].
    - symfony/dependency-injection 3.4.x-dev conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-BETA1 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-BETA2 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-BETA3 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-BETA4 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-RC1 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.0-RC2 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.1 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.2 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.3 conflicts with symfony/config[v3.2.14].
    - symfony/dependency-injection v3.4.4 conflicts with symfony/config[v3.2.14].
    - Installation request for symfony/config (locked at v3.2.14) -> satisfiable by symfony/config[v3.2.14].

The solution here is to indicate you also want to update this package, even if it's not specifically required. So if the failing command was the following:

composer update drupal/core --with-dependencies

Go ahead and change it to this:

composer update drupal/core symfony/config --with-dependencies

Edit: To clarify based on a comment from balsama: it does not help to add the flag --with-all-dependencies here, since symfony/config is not a "sibling" dependency, or a nested dependency, of the packages to be updated.

If you have other error messages, I would be glad to help out with a solution, and post the result here.

Credits

Thanks to zaporylie for looking into this with me, and to Berdir for pointing out the fact that core is not the package that requires symfony/config.

Let's finish this post with an animated gif of a composer.

Nov 27 2017
Nov 27

Violinist.io is a new service that is continuously trying to update your composer dependencies. When a new update is found, a pull request is created on the github repo for the project in question, for example your Drupal site. If you have a good testing setup, this will trigger your tests, and hopefully pass. Now, if you have continuous deployment set up, you can basically merge and deploy updates while sitting in a coffee shop on your phone. Which is now something I have done several times!

I am planning to write a longer blog post about a more complete continuous deployment setup, but just wanted to share a couple of quick, fun animated gifs about how Violinist.io works.

A couple of weeks ago a new version of Drupal Console came out. After it was tagged on Github, an update was available through composer. When Violinist picked this up, it opened a new pull request on all of my projects that depend on it. That looks something like this:

I captured this animation because I was fascinated by the short time window between the release and the pull request. As you can see in the animation, it was only around 10 minutes! Now all that was left for me was to see that the tests passed, read through the changelog (including links to all commits) and merge in the update. Minutes after, it was automatically deployed to the production server. About as easy as it gets!

But it's not only other Github hosted projects, or generic php packages, that get updated. For a typical Drupal project I also depend on modules from Drupal.org, and I download these modules with composer. Violinist.io supports those as well. Here is one example (from this very site you are reading) where a new pull request with a full changelog was posted only 8 minutes after it was released on Drupal.org.

Since admin_toolbar is a module I use on many projects, I now could just navigate from pull request to pull request, and update all of my sites within minutes, while still on my phone. A real time saver!

Full disclosure: As you probably understand from the enthusiastic description, I am also the creator of the service. It is completely free for open source projects, and up to one private project. Feel free to reach out if you have any questions or comments! To finish it off, here is an animated gif about enthusiasm.

Aug 09 2017
Aug 09

The annual meeting of Drupal enthusiasts in Norway (and elsewhere) will take place on the 11th of November, with community sprints happening November 12th.

Every year, the camp attracts Drupal professionals and hobbyists from Norway, but also from the surrounding countries. If you want to meet Drupal enthusiasts from our region, this is a great chance to do so.

We also want to invite people who want to speak to submit their session proposals to our website. Whether you are a seasoned conference speaker or want to give your very first session, you are very welcome to submit your talk to Drupal Camp Oslo.

If you prefer to just attend, you are just as welcome! Tickets are now available for purchase, and at the moment they are extra early-bird cheap! We also have different price tiers if you attend as a hobbyist or student, and we hope for a diverse audience of attendees, both in the sessions and in the sprints on Sunday!

If you have any questions regarding the event, feel free to reach out to me in the comments, by mail or on twitter.

See you in Oslo in November!

May 25 2015
May 25

This will be the second part of the code examples, and the third in the series about Drupal and the Internet of Things.

If you haven't read the other parts, you are going to be all fine. But if you want to, the front page has a complete list.

In this part we will look at simplifying the request flow for the client, while still keeping a certain level of security for our endpoint. There are several ways of doing this, but today we will look at a suggestion on how to implement API keys per user.

Let me just start by saying there are several additional steps you could (and ideally should) implement to get a more secure platform, but I will touch on these towards the end of the article.

First, let's look at a video I posted in the first blog post. It shows an example of posting the temperature to a Drupal site.

[embedded content]

Architecture

Let's look at the scenario we will be implementing

  • A user registers an account on our Drupal 8 site
  • The user is presented with a path where they can POST their temperatures (for example example.com/user/5/user_temp_post)
  • The temperature is saved for that user when that request is made

See any potential problems? I sure hope so. Let's move on.

Registering the temperature in a room and sending to Drupal

As last time, I will not go into micro controller or hardware specific implementation details in the blog post. But the code is available on github. I will quickly go through the technical steps and technologies used here:

  • I use a Raspberry pi 2, but the code should work on any model Raspberry pi
  • I use a waterproof DS18B20 sensor, but any DS18B20 should work. I have a waterproof one because I use it to monitor my beer brewing :)
  • The sensor checks the temperature at a certain interval (per default, 1 minute)
  • The temperature data is sent to the Drupal site and a node is created for each new registration
  • To authenticate the requests, they are sent with an x-user-temp header including the API key (see the example request right after this list)
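Just to make it concrete, such a request could look something like this if you were doing it by hand with curl (both the example key and the payload format are made up for illustration):

curl -X POST -H "x-user-temp: my-example-api-key" -d "temp=23.5" https://example.com/user/5/user_temp_post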

This scenario is a bit different from the very real-time example in the video above, but it is both more flexible (in terms of having a history of temperatures) and more realistic (since temperatures seldom change as fast as in the video).

Receiving temperatures in Drupal

The obvious problem with the situation described above is the authentication and security of the transferred data. Not only do we not want people to be able to just POST data to our site with no authentication, we are also dealing with temperatures per user. So what is to stop a person from just POSTing a temperature on behalf of another user? The last post dealt with using the same user session as your web browser, but today we are going to look at using API keys.

If you have ever integrated a third party service to Drupal (or used a module that integrates a third party service) you are probably familiar with the concept of API keys. API keys are used to specify that even though a "regular" request is made, a secret token is used to prove that the request originates from a certain user. This makes it easy to use together with internet connected devices, as you would not need to obtain (and possibly maintain) a session cookie to authenticate as your user.

Implementation details

So for this example, I went ahead and implemented a "lo-fi" version of this as a module for Drupal 8. You can check out the code at github if you are eager to get all the details. Also, I deployed the code on Pantheon so you can actually go there and register an account and POST temperatures if you want!

The first step is to actually generate API keys for the users that want one. My implementation just generates one for users when they visit their "user temperatures" tab for the first time.

Side note: The API key in the picture is not the real one for my user.
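How the key gets generated is not really the point, but as a rough sketch, something like this (using the random bytes helper from Drupal core) would do the job:

use Drupal\Component\Utility\Crypt;

$api_key = Crypt::randomBytesBase64(32);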

Next step is to make sure that we use a custom access callback for the path we have defined as the endpoint for temperatures. In my case, I went with making the endpoint per user, so the path is /user/{uid}/user_temp_post. In Drupal 7 you would accomplish this custom access check by simply specifying something like this in your hook_menu:

'access callback' => 'my_module_access_callback',

In Drupal 8, however, we are using a my_module.routing.yml file for routes we are defining. So we also need to specify in this file what the criteria for allowing access should be. For a very good example of this, I found the user.module to be very helpful. My route for the temperature POST ended up like this:

user_temp.post_temperatures:
  path: '/user/{user}/user_temp_post'
  defaults:
    _controller: '\Drupal\user_temp\Controller\UserTempController::post'
    _title: 'Post user temperatures'
  requirements:
    _access_user_temp_post: 'TRUE'

In this case '_access_user_temp_post' is the criterion for allowing access. You can see this in the user_temp.services.yml file of the module. From there you can also see that Drupal\user_temp\Access\PostTempAccessCheck is the class responsible for checking access to the route. In this class we must make sure to return a Drupal\Core\Access\AccessResult to indicate whether the user is allowed access or not.
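To give an idea of what such an access check can look like, here is a simplified sketch (the way the stored key is looked up is just an assumption for illustration, not the exact code in the module):

namespace Drupal\user_temp\Access;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Routing\Access\AccessInterface;
use Drupal\user\UserInterface;
use Symfony\Component\HttpFoundation\Request;

class PostTempAccessCheck implements AccessInterface {

  /**
   * Checks that the request carries a valid API key for the user in the path.
   */
  public function access(UserInterface $user, Request $request) {
    $key = $request->headers->get('x-user-temp');
    if ($key && hash_equals($this->getApiKeyForUser($user), $key)) {
      return AccessResult::allowed();
    }
    return AccessResult::forbidden();
  }

  /**
   * Hypothetical helper. How the key is stored is up to the implementation.
   */
  protected function getApiKeyForUser(UserInterface $user) {
    // For example, the key could be stored using the user.data service.
    return (string) \Drupal::service('user.data')->get('user_temp', $user->id(), 'api_key');
  }

}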

Some potential questions about the approach

From there on in, the code for the POST controller should provide you with the answers you need. And if the code is not enough, you can try to read the tests of the client part or the Drupal part. I will proceed by answering some theoretical questions I assume people might have about the implementation:

How is this different from using the session cookie?

It is different in 2 aspects. The API key will not expire for reasons beyond your control. Or more precisely, the device's control. You can also reset the API key manually if you would want it to expire. The other big difference is that if your API key should be compromised, your account is not compromised in any way (as would be the case if a valid session cookie were to be compromised). Beyond that, please observe that in one area this is not different from using a session cookie: The requests should be made over https, especially if you are using a wifi connection.

How can I further strengthen the security of this model?

One "easy" way to do this is to not expose the API key as part of the request. I was originally planning to implement this, but realised this might make my original point a little less clear. What I would do as a another "lo-fi" hardening would be to make the x-user-temp header just include a hash of the temperature sent and the user API key. This way, if someone were sniffing the requests, they would just see that the x-user-temp header would change all the time, and so it would take a considerable effort to actually forge the requests (compared to just observing the key in the header).

Why are you using nodes? Isn't that very much overhead for this?

This is a fair point. It might be a bit overkill for something so simple. But there are two bonus parts about using nodes:

  • We can use views to display our data.
  • We can ship the views, content types and fields as configuration with our module.

This last part is especially powerful in Drupal 8, and incredibly easy to accomplish. For the files required for this particular implementation, you can reference the config/install directory of the module.

But since you are posting nodes, why aren't you using the REST module?

I admit it, I have no good reason for this beyond that I wanted to make this post be about implementing API keys for authentication. Also, here is a spoiler alert: Code examples part 3 will actually be using the REST module for creating nodes.

What if I want to monitor both my living room and my wine cellar? This is only one endpoint per user!

I am sorry for not implementing that in my proof of concept code, but I am sure you can think of a creative solution to the problem. Also, luckily for you, the code is open source so you are free to make any changes required to monitor your wine cellar. "Pull requests welcome" as they say.

As always, if you have any question or criticism (preferably beyond the points made above) I would love to hear thoughts on this subject in the comments. To finish it all off, I made an effort to find a temperature related gif. Not sure the effort shows in the end result.

May 03 2015
May 03

As promised, I am posting the code for all the examples in the article about Drupal and the Internet of Things. Since I figured this could also be a good excuse to actually exemplify different approaches to securing these communication channels, I decided to use a different strategy for each code example. So here is the disclaimer: these posts (and maybe especially this one) do not necessarily contain the best practices for establishing a communication channel from your "thing" to your Drupal site. But this is one example, and depending on the use-case, who knows, this might be the easiest and most practical for you.

So, the first example we will look at is how to turn on and off your Drupal site with a TV remote control. If you did not read the previous article, or if you did not see the example video, here it is:

[embedded content]

Overview of technology and communication flow

This is basically what is happening:

  • I click the on/off button on my TV remote.
  • A Tessel microcontroller reads the IR signal
  • The IR signal is analyzed to see if it indeed is the "on/off" button
  • A request is sent to my Drupal site
  • The Drupal site has enabled a module that defines an endpoint for toggling the site maintenance mode on and off
  • The Drupal site is toggled either on or off (depending on the previous state).

See any potential problems? Good. Let's start at the beginning

Receiving IR and communicating with Drupal

OK, so this is a Drupal blog, and not a microcontroller or javascript blog. I won't go through this in detail here, but the full commented source code is at github. If you want to use it, you would need a tessel board though. If you have that, and want to give it a go, the easiest way to get started is probably to read through the tests. Let's just sum it up in a couple of bullet points, real quick:

  • All IR signals are collected by the Tessel. Fun fact: There will be indications of IR signals even when you are not pressing the remote.
  • IR signals from the same button are rarely completely identical, so some fuzzing is needed in the identification of a button press
  • Figuring out the "signature" of your "off-button" might require some research.
  • Configure the code to pass along the config for your site, so that when we know we want to toggle maintenance mode (the correct button is pressed), we send a request to the Drupal site.

Receiving a request to toggle maintenance mode

Now to the obvious problem. If you exposed a URL that would turn the site on and off, what is to stop any random person from just toggling your site status just for the kicks? Here is the part where I want to talk about different methods of authentication. Let us compare this to the actual administration form where you can toggle the maintenance mode. What is to stop people from just using that? Access control. You have to actually log in and have the correct permission (administer site configuration) to be able to see that page. Now, logging in with a micro controller is of course possible, but it is slightly more impractical than for a human. So let's explore our options. In 3 posts, this being the first. Since this is the first one, we will start with the least flexible. But perhaps the most lo-fi and most low-barrier entry. We are going to still use the permission system.

Re-using your browser login from the IR receiver

These paragraphs are included in case someone reading this needs background info about this part. If this seems very obvious, please skip ahead 2 paragraphs.

Web apps these days do not require log-ins on each page (that would be very impractical), but actually use a cookie to indicate you are still trusted to be the same user as when you logged in. So, for example, when I am writing this, it is because I have a session cookie stored in my browser, and this indicates I am authorised to post nodes on this site. We can do the same passing of a cookie on a micro controller.

Sending fully authenticated requests without a browser

So to figure out how to still be authenticated as an admin user, you can use the browser dev tools of your choice. Open a browser where you are logged in as a user allowed to put the site into maintenance mode. Now open your browser dev tools (for example with Cmd-Alt-I in Chrome on a Mac). In the dev tools there will be a network tab. Keep this active while loading a page you want to get the session cookie from. You can now inspect one of the requests and see what headers your browser passed on to the server. One of these is the Cookie header. It will include something along the lines of this (it starts with SESS):

SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX

Since I am a fan of animated gifs, here is the same explanation illustrated:

This is the session cookie for your session as an authenticated user on your site. Since we now know this, we can request the path for the toggle functionality from our microcontroller, passing this cookie along as a header, and toggle the site as if we were accessing it through the browser.
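If you want to verify that the cookie works before putting it on the microcontroller, you can replay the request by hand, for example with curl (using the same made-up cookie value as above, and the path from the module described below):

curl -H "Cookie: SESS51337Tr0lloll110l00l1=acbdef123abc1337H4XX" https://example.com/maintenance_mode_ir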

The maintenance_mode_ir module

As promised, I also posted the Drupal part of the code. It is a module for Drupal 8, and can be found on github.

So what is happening in that module? It is a very basic module actually mostly generated by the super awesome Drupal console. To again sum it up in bullet points:

  • It defines a route in maintenance_mode_ir.routing.yml (example.com/maintenance_mode_ir)
  • The route requires the permission "administer site configuration"
  • The route controller checks the StateInterface for the current state of maintenance mode, toggles it and returns a JSON response about the new state (a rough sketch of this follows below the list)
  • The route (and so the toggling) will never be accessible for anonymous users (unless you give the anonymous users the permission "administer site configuration", in which case you probably have other issues anyway)
  • There are also tests to make sure this works as expected
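The toggling itself is a very small amount of code. A rough sketch of the relevant controller method (not necessarily the exact code in the module) could look like this:

  /**
   * Toggles maintenance mode and reports the new state.
   */
  public function toggle() {
    $state = \Drupal::state();
    // Flip the current value of the maintenance mode flag.
    $new_value = !$state->get('system.maintenance_mode');
    $state->set('system.maintenance_mode', $new_value);
    // Assumes use Symfony\Component\HttpFoundation\JsonResponse; in the controller.
    return new JsonResponse(['maintenance_mode' => $new_value]);
  }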

When do you want to use this, and what are the considerations and compromises

Now, your first thought might be: would it not be even simpler to just expose a route where requests would turn the site on and off? Then we wouldn't need to bother with finding the session cookie, passing that along and so on. Legitimate question, and of course true in the sense that it is simpler. But this is really the core of any communication taking place between your "things" and Drupal (or any other backend) - you want to make sure it is secured in some way. Of course, being able to toggle the maintenance mode is probably not something you would want to expose anyway, but you should also use some sort of authentication even if it was only temperature monitoring. Securing it through the access control in Drupal gives you a battle tested foundation for doing this.

Limitations and considerations

This method has some limitations. Say for example you are storing your sessions in a typical cache storage (like redis). Your session will expire at some point. Or, if you are using no persistence for redis, it will just be dropped as soon as redis restarts. Maybe you are limited by your php session lifetime settings. Or maybe you just accidentally log out of the session where you "found" the cookie. Many things can make this authenticated request stop working. But if all you are doing is hooking up a remote control reader to make a video and put on your blog, this will work.

Another thing to consider is the connection of your "thing". Is your site served over a non-secure connection and you are sending requests with your "thing" connected through a public wifi? You might want to reconsider your tactics. Also, keep in mind that if your session is compromised, it is not only the toggling of maintenance mode that is compromised, but the actual administrator user. This might not be the case if we were to use another form of authentication.

Now, the next paragraph presented to you will actually be the comments section. The section where you are encouraged to comment on inconsistencies, forgotten security concerns or praise about well chosen gif animations. Let me just first remind you of the disclaimer in the first paragraph, and the fact that this is a series of posts exploring different forms of device authentication. I would say the main takeaway from this first article is that exposing different aspects of your Drupal site to "the physical world", be it remote controlled maintenance mode or temperature logging, requires you to think about how you want to protect these exposed endpoints. So please do that, enjoy this complementary animated gif (in the category "maintenance"), and then feel free to comment.

Apr 13 2015
Apr 13

The Internet of Things (or IoT for short) is probably even more of a buzzword than "Headless Drupal", but maybe not so much in Drupal land. As I am a man of buzzwords, let’s try to combine these things in one article (also, there will be video demos).

Or maybe a couple of articles. I feel I have so much to say about this, partly because many articles one can read on the subject of IoT deal with having your "thing" on your local network and playing with it over the wifi. We are not going to do that here, as your local network is not the internet. Of course you could set up port forwarding on your router to actually put that "thing" on the internet. And then that thing would be available to all hackers that would want to access it. So we are not going to do that either in this article. For a first article, I want to explain how I see Drupal and IoT connecting together, and explore the patterns for this.

When we refer to “The Internet of Things” we often refer to devices capable of networking. This could be a car, cell phone or something like a Raspberry Pi, Arduino or Tessel. In this article I will simply refer to a "thing". This means a device we want to extract data from, or interact with via Drupal.

A common scenario

First, let’s look at a common way of testing out Internet of Things at home. You have a Raspberry pi and you have a temperature sensor. Now, the raspberry pi probably runs a flavour of linux, so you can actually install apache, mysql, php and finally Drupal on it. Then maybe you find a python script that reads the temperature, and you create a node in Drupal with the php filter that will exec this python script and print it inside Drupal. That will work. Except it is not a good idea. For several reasons. Let’s go through them:

  • Your "thing" will both be a sensor and a webserver. Make it do one thing, and focus on that
  • Your Drupal site will run on your local network, and to access it from (for example) your office you would have to make it publicly available in some way.
  • Your Drupal site will have the php module installed. You don’t want that.
  • Your php code will be doing system calls. Please don’t do that

Now that I have got that rant out of the way, let me also just say that if you want to go down that road, it of course has a low barrier to entry, and if you are restricting access to your local network, the security concerns look a little better. And of course, the php in a node part is not strictly necessary, it just fits with my arguments. So, if you are looking for that kind of tutorial, there are plenty of others on the internet.

Patterns for communication

As I now have been ranting a bit, let me just point out that this is not a canonical article about IoT best practices or some absolute truths. These are just my opinions on how one could approach this exciting buzzword. Continuing on, let’s look at some ways of interacting between a Drupal site and your "thing".

If we look at it as simply one "thing" and one Drupal site, you have two actors in this communication model. So to establish a two-way communication, we would want both the "thing" talking to Drupal, and Drupal talking to the "thing". This "thing" may represent something physical in the world, like a temperature sensor or a relay switch. So basically it is an interaction between the physical world and your site, so let’s use that metaphor. This article will deal with the first and most simple concept of this interaction:

The physical world talking to Drupal

Isolated, the wording of that heading looks kind of poetic, doesn’t it?

When the physical world is talking to Drupal, I mean it as somewhat of a “push” mode for your "thing". Let’s say you are monitoring the temperature in your apartment (physical world) and want to communicate this to your Drupal site. A simple thing to do here is to define an endpoint in your Drupal site where your "thing" would just post updates, and the Drupal site would store the information (of course with some sort of authentication). A one-way communication to push updates from the physical world to Drupal.

Another theoretical and more intricate form could be something like this:

Say you have a physical store that is also an online store (a typical use case for Drupal). And you are about to have a sale in both places. But you want visitors in the store to have the same opportunity as the online visitors to get the good deals. In this scenario you could make it so that the moment the lock on the door was opened, a request is sent to the online store enabling the "sale mode". And when you close in the afternoon, the online store "sale mode" automatically gets disabled. This way, the physical store (or more precisely, the lock on the physical store) actually dictates the state of the online store.

Granted, this is a theoretical example, so let’s look at practical and implemented examples instead. I have put together a few quirky demos with varying degrees of usefulness. All examples are actual Drupal 8 sites running on Pantheon, so there is no localhost Drupal instance to talk about. This is the physical world talking to the internet.

Remote controlled Drupal 8

The first one kind of reminds me of the above example, although it is maybe not so useful. It is a remote control to "shut down" the Drupal site (put it in maintenance mode). Or more precisely: I am turning off the site with my TV remote. If you are wondering why the site refreshes a couple of times, it is because I used one hand to film and one hand to press the remote, so I had the site update itself every 2 seconds.

[embedded content]

Temperature monitoring

The second one is a more common one. Presenting the current temperature at a path in Drupal. Here we are just polling for updates to make the video actually show that it works, but a more practical example is probably to post updates every 10 or 30 minutes. Also note that now we can view the temperature from anywhere in the world, while still having our device unreachable over the internet. If you are wondering why I am using water, it is because this triggers temperature changes much faster. The glass contains cold water, the cup holds warm water.

[embedded content]

And here is a third one, since I felt like being silly. This one displays the temperature, draws a nice historical graph of the temperature, and changes the color of the header based on the temperature. I must admit that the last part is purely client side in the video, but it could theoretically be expanded to actually do this through the color module. I also must admit that the actual hot/warm color calculation could use some tweaking (more than the 6 minutes used on it), but you probably get the picture.

[embedded content]

Drupal 8 as a "surveillance backend"

For the last example there is something a little more elaborate, and maybe even practical. It uses a sound sensor to listen for sound changes. When the sound trigger fires, it takes a picture with the webcam on my mac (you can see the light next to the camera after I snap my finger), posts it to my Drupal site, and creates a node. A simple surveillance camera with Drupal as a backend. Also, a very concrete example of the physical world interacting with Drupal, as it is the snapping of my finger (very physical) that creates a node in Drupal.

[embedded content]

This article is already getting pretty lengthy, so I'm going to end it here. And before you ask: No, the code for the examples is not yet available. And yes, it will be made available. As I said, all these examples were hacked together quickly on a Sunday morning, and they are all very hardcoded. I will post an update here, and probably a code-dedicated blog post about just that.

Also, I will be following up with the next scenario: Interacting with the physical world from Drupal. If you have any questions, please feel free to ask them in the comments. And I would very much be delighted to hear about alternative ways of doing this, people doing similar things or other thoughts on the subject (rants or ideas).

The ending of this post will be a lo-fi gif describing what sceptics usually call the Internet of Things - The Internet of Lightbulbs. Have a nice week!

Thanks to my co-worker zaporylie for reading through the article

Jan 22 2015
Jan 22

A couple of weeks ago I hacked together a quick proof of concept of editing the same template for using on the client side and the server side with Drupal 8. It looked like this:

Sunday hack. Make #headlessdrupal use #twig for client side templates http://t.co/OQiVya0cu8 #drupal #drupaltwig.

— eiriksm (@orkj) January 4, 2015

If you click the link you can see an animated gif of how I edit the Bartik node template and it reflects in a simple single page app. Or one of these hip headless Drupal things, if you want.

So I thought I should do a quick write up on what it took to make it work, what disadvantages come with it, what does not actually work, and so on. But then I thought to myself: why not make a theme that incorporates my thoughts from my last post, "Headless Drupal with head fallback"? So I ended up making a proof of concept that is also a live demo of a working Drupal 8 theme with the first page request rendered on the server, and the subsequent requests rendered fully client side. It uses the same node template for both full views and the node listing on the front page. So if you are eager and want to see that, this is the link.

Next, let's take a look at the inner workings:

Part 1: Twig js

Before I even started this, I had heard of twig.js. So my first thought was to just throw the Drupal templates to it, and see what happened. 

Well, some small problems happened.

The first problem was that some of the filters and tags we have in Drupal are not supported out of the box by twig.js. Some of these are probably Drupal specific, and some are extensions that are not supported out of the box. One example is the tag {% trans %} for translating text. But in general, this was not a big problem. Except that I did as I usually do when doing a POC: I just quickly threw together something that worked, resulting, for example, in the trans tag just returning the original string. Which obviously is not the intended use for it. But at least now the templates could be rendered. Part one, complete.

Part 2: Enter REST

Next I needed to make sure I could request a node through the REST module, pass it to twig.js and render the same result as Drupal would do server side. This turned out to be the point where I ended up with the worst hacks. You see, ideally I would just have a JSON structure that represents the node, and pass it to twig.js. But there are a couple of obvious problems with that.

Consider this code (following examples are taken from the Bartik theme):

<a href="{{ url }}">{{ label }}</a>

This is unproblematic. If we have a node.url property and a node.label property on the object we send to twig.js, this would just work out of the box. Neither of these properties are available like that in the default REST response for a node, however, but a couple of assignments later, that problem went away as well.

Now, consider this:

{{ content|without('comment', 'links') }}

Let's start with the filter, "without". Well, at least that should be easy. We just need a filter that will make sure comment and links properties on the node.content object will not be printed here. No problem.

Now to the problem. The content variable here should include all the rendered fields of the node. As was the case with label and url, .content is not actually a property in the REST response either. This makes the default output from the REST module not so usable to us. Because to make it generic we would also have to know what fields to compose together into this .content property, and how to render them. So what then?

I'll just write a module, I thought. As I often do. Make it return more or less the render array, which I can pass directly to twig.js. So I started looking into what this looked like now, in Drupal 8. I started looking at how I could tweak the render array to look more or less like the least amount of data I needed to be able to render the node. I saw that I needed to recurse through the render array 0, 1 or 2 levels deep, depending on the properties. So I would get, for example, node.content with markup in all its children, but also node.label without children, just the actual title of the node. Which again made me start to hardcode things I did not want in the response, just like I had already started hardcoding the things I wanted from the REST response.

So I gave up the module. After all this is just a hacked together POC, so I'll be frank about that part. And I went back to hardcoding it client side instead. Not really the most flexible solution, but at least - part two: complete.

Part 3: Putting the pieces together

Now, this was the easy part. I had a template function that could accept data. I had transformed the REST response into the pieces I needed for the template. The rest was just adding a couple of AJAX calls and some pushState for the history (which reminds me: this probably does not work in all browsers). And then bundling things together with some well known front-end tools. Of course, this is all in the repo if you want all the details.

Conclusions

Twig on the server and on the client. Enough said, right? 

Well. In its current form, this demo is not something you would just start to use. But hopefully you get some ideas. Or inspiration. Or maybe inspire (and inform) me of the smartest way to return a "half-rendered render array".

Also, I would love to get some discussion going regarding how to use this approach in the most maintainable way.

Some thoughts on how I would improve this if I would actually use it:

  • Request templates via ajax.
  • Improve escaping.
  • Incorporate it into a framework (right now it is just vanilla JS).
  • Remove hacks, actually implement all the filters.

Finally: The code is up at github. There is a demo on a test site on pantheon. And huge props go out to both the Twig and Twig.js authors. Just another day standing on the shoulders of giants.

I'm going to end this blog post with a classy gif from back in the day. And although it does not apply in the same way these gifs were traditionally used, I think we can say that things said in this blog post are not set in stone, neither in regard to construction nor architectural planning.

Nov 23 2014
Nov 23

I just wanted to take a moment to talk about how I approached the hot word "headless Drupal" on my blog. It uses some sort of "headless" communication with the Drupal site, but it also leverages Drupal in a standard way. For different reasons. (by the way, if you are interested in "headless Drupal", there is a groups.drupal.org page about the subject.)

First of all, let's examine in what way this simple blog is headless. It is not headless in the way that it offers all the functionality of Drupal without using Drupal's front-end. For example, these words I am typing are not typed into a decoupled web-app or command-line tool. Its only headless feature is that it loads content pages with ajax through Drupal 8's new REST module. Let's look at a typical set-up for this, and how I approached it differently.

A typical setup

A common way to build a front-end JavaScript application leveraging a REST API is using a framework of your choice (backbone / angular / or something else *.js) and building a single-page application (or SPA for short). Basically this could mean that you have an index.html file with some JavaScript and stylesheets, and all content is loaded with AJAX. This also means that if you request the site without JavaScript enabled, then you would just see an empty page (except of course if you have some way of scraping the dynamic content and outputting plain HTML as fallback).

Head fallback

I guess the "headless" metaphor sounds strange when I change it around to talk about "head fallback". But what I mean with this is that I want a user to be able to read all pages with no JavaScript enabled, and I want Drupal (the head) to handle this. All URLs should also contain (more or less) the same content if you are browsing with JavaScript or without it. Luckily, making HTML is something Drupal always has done, so let's start there.

Now, this first part should be obvious. If a user comes to the site, we show only the output of each URL as intended with the activated theme. This is an out-of-the-box feature with Drupal (and any other CMS). OK, so the fallback is covered. The next step is to leverage the REST module, and load content async with AJAX.

Head first, headless later

A typical scenario would be that for the front page I would want to request the "/node" resource with the header "Accept:application/hal+json" to get a list of nodes. Then I would want to display these in the same way the theme displays them statically on a page load. The usual way of doing this is that when the document is ready, we request the resource and build and render the page, client side. This is impractical in one way: you are waiting for the entire document to load before actually rendering anything at all. Or maybe even worse: you could be waiting for the entire /node list to load, only to destroy the DOM elements with the newly fetched and rendered JSON. This is bad for several reasons, but one concrete example is a smart phone on a slow network. This client could start rendering your page on the first chunk of HTML transferred, and that would maybe be enough to show what is called the "above the fold content". This is also a criterion in the often-used Google PageSpeed. Meaning, in theory, that our page would get slower (on first page load) by building a SPA on top of the fallback head.

Some "headless Drupal" goodness is very hip, but not at the cost of performance and speed. So what I do for the first page load is trust Drupal to do the rendering, and then initialize the JavaScript framework (Mithril.js in my case) when I need it. Take, for example, you, dear visitor, reading this right now. You probably came to this site via a direct link. Now, why would I need to set up all client side routes and re-render this node when all you probably wanted to do was to read this article?

Results and side-effects

OK, so now I have a fallback for JavaScript that gives me this result (first picture is without JavaScript, second is with JavaScript):

As you can see, the only difference is that the disqus comment count can not be shown on the non-js version. So the result is that I have a consistent style for both js and non-js visitors, and I only initialize the headless part of the site when it is needed.

A fun (and useful) side-effect is the page speed. Measured in Google PageSpeed this now gives me a score of 99 (with the only suggestion to increase the cache lifetime of the google analytics js).

Is it really headless, then?

Yes and no. Given that you request my site with JavaScript enabled, the first page request is a regular Drupal page render. But after that, if you choose to go to the front page or any other articles, all content is fetched with AJAX and rendered client side.

Takeaways and lessons learned

I guess some of these are more obvious than others.

  • Do not punish your visitor for having JavaScript disabled. Make all pages available for all users. Mobile first is one thing, but you could also consider no-js first. Or both?
  • Do not punish your visitor for having JavaScript enabled. If you render the page based on a AJAX request, the time between initial page load and actual render time will be longer, and this is especially bad for mobile.
  • Subsequent pages are way faster to load with AJAX, both for mobile and desktop. You really don't need to download more than the content (that is, the text) of the page you are requesting, when the client already has the assets and wrapper content loaded in the browser.

Disclaimers

First: these techniques might not always be appropriate for everyone. You should obviously consider the use case before using a similar approach.

If you, after reading this article, find yourself turning off JavaScript to see what the page looks like, then you might notice that there are no stylesheets any more. Let me just point out that this would not be the case if your _first_ page request were without JavaScript. By requesting and rendering the first page with JavaScript, your subsequent requests will tell my server that you have JavaScript enabled, and thus I also assume you have stored the css in localStorage (as the js does). Please see this article for more information.

Let's just sum this up with this bad taste gif in the category "speed":

Jul 27 2014
Jul 27

It has been a weekend in the spirit of headless Drupal, front-end optimizations and server side hacks. The result is I updated my blog to Drupal 8. Since you are reading this, it must mean it is live. First let's start with the cold facts (almost chronologically ordered by request lifetime):

Other front-end technologies used that do not directly relate to the request itself:

So, HHVM, huh?

Yeah, that's mostly just a novelty act. There is no real gain there. Quite the opposite, I have added some hacks to get around some limitations. HHVM does not work very well with logged in users right now, but works alright for serving anonymous content.

When I reload and look at the source code, there is no css loading. WAT?

Yeah, I am just assuming you remember the styles from last page load. Also, I have made it an image to have a 1 HTTP request CMS, right?

No, really. How does that work?

The real magic is happening by checking if you as a user already have downloaded my page earlier. If you have, I don't need to serve you css, as far as I am concerned. You should have saved that last time, so I just take care of that.
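
As a rough illustration of the idea (a sketch, not the actual theme code, and the cookie name and file path are made up): the server-side part boils down to checking a cookie that the client-side JavaScript sets once it has stored the assets.

<?php
// Sketch: only inline the CSS for visitors we have not seen before.
if (empty($_COOKIE['assets_cached'])) {
  // First visit: print the styles inline so the JS can store them away.
  print '<style>' . file_get_contents('css/styles.css') . '</style>';
}
// Returning visitors get no CSS from the server; the JS restores it from
// localStorage instead.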

OK, so you use a cookie and save css in localstorage. Does that not screw with the varnish cache?

Good question. I have some logic to internally rewrite the cached pages with a key to the same varnish hash. This way, all users trying to look at a css-less page with the css stored in localstorage will be served the same page, and php will not get touched.

What a great idea!

Really? Are you really sure you have thought of all the limitations? Because they are many. But seeing as this is my personal tech blog, and I like to experiment, it went live anyway.

Give us the code!

Sure. The theme is at github. The stupid cache module is at github. Please be aware that it is a very bad idea to use it if you have not read the code and understand what it does. And since I am feeling pretty bad ass right now, let's end with Clint Eastwood as an animated gif.

Dec 12 2013
Dec 12

So, I just upgraded my first site to Drupal 8. Yup. It's in production right now. And it has an animated gif front and center.

The site itself is a pretty simple site. It is about the mobile game Crash n Dash (check it out by the way). It contains a front page, where we also display a somewhat real time statistic of online users. And it has a high scores list. As you probably understand, this requires a custom module, so there was that. Also, we have a simple custom theme, built on the Foundation framework. So there was that.

This allowed me to learn more about making a module in Drupal 8, the guzzle library for making requests, and some good ol' twig for the theme. I'll cover my findings in separate posts later.
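
For the curious, the request part boils down to a plain HTTP GET. A minimal sketch using Drupal 8's Guzzle-backed HTTP client (the URL is made up, and the exact client API shifted a bit between Guzzle versions during the Drupal 8 cycle; this is the shape it ended up with):

<?php
// Sketch: fetch a JSON statistic from the game backend with Guzzle.
$response = \Drupal::httpClient()->get('https://example.com/api/online-players');
$stats = json_decode((string) $response->getBody(), TRUE);
// $stats now holds the decoded data, ready to hand off to the theme layer.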

Second, a word about security. Since Drupal 8 is still in alpha, who knows what kind of bugs and potential security holes you can find in there, right? So I ended up disallowing login through regular channels. Since this particular server is behind varnish, disallowing on the default address was really easy, I just put this in my vcl file:

if (req.http.host == "crashndash.com") { 
  unset req.http.Cookie; 
}

What this does is effectively deny all logins to your site on port 80, since no user will ever get a cookie. OK. Well, that does not stop someone from logging in if they find the apache port, right? So I put this in my virtual host for the domain (in the directory directive):

Order Deny,Allow
Deny from all
Allow from 127.0.0.1
Allow from x.x.x.x # <- my own ip, so I personally can log in to the site.
AllowOverride All

Of course this does not cover all kinds of other tactics that some people might want to try, but at least we are limiting the possibilities to do harm. So next project on d8 is this blog. I mean, as developers, do we really have any excuse for not moving to Drupal 8 with these simple blog sites we put up? I am moving right after I get my feet wet with the migrate module in d8, as I have really enjoyed the projects I have used migrate for earlier.

Full disclosure: I am also the author behind the website mentioned, and the game referenced in that site. It's a free game, but I think it is still fair to mention. This blog post is a cross post from Crash n Dash tech blog and is in part written for shameless self-promotion :) Let's end the post with the animated gif that actually is on the front page of that shiny new Drupal 8 site.

Oct 14 2013
Oct 14

So when I first realised that I was neglecting this blog, I somewhat found comfort in that at least it hadn't been a year, right? OK, so now it has almost been a year. Does that mean I have stopped finding solutions to my Drupal problems (as the "About me" block states)? Well, no. The biggest problem is remembering to blog about them, or finding the time. But finding the time is not a Drupal problem, and I most definitely have not found the solution to that problem. Anyway, I digress. Let's end the streak with a quick tip that I use all the time: Syncing live databases without having drush aliases.

If you use drush-aliases, you could just do drush sql-sync. But for me, even if I do, I still prefer this method, as I find it extremely easy to debug.

First I make sure I am in my Drupal root:

$ cd /path/to/drupal

Then I usually make sure I can bootstrap Drupal from drush:

$ drush st

If that output says my database is connected, then let's test the remote connection:

$ ssh [email protected] "drush  --root=/path/in/remote/server st"

If that output also says we are connected, it means we are good to go, with my favourite command ever:

$ ssh [email protected] "drush  --root=/path/in/remote/server sql-dump" | drush sql-cli

If you have a drush-alias for the site, you can also go:

drush @live-site-alias sql-dump | drush sql-cli

OK. So what does this do?

The first part says we want to ssh in to our remote live server. Simple enough. The next part in double quotes, tells our terminal to execute the command on the remote server. And the command is telling our remote server to dump the entire database to the screen. Then we pipe that output to the command "drush sql-cli" on our local machine, which basically says that we dump a mysql database into a mysql command line.

Troubleshooting:

If you get this error:

bash: drush: command not found

I guess you could get this error if you are for example on shared hosting, and use a local installation of drush in your home folder (or something). Simply replace the word drush with the full path to your drush command. For example:

$ ssh [email protected] "/path/to/drush  --root=/path/in/remote/server sql-dump" | drush sql-cli

If you get this error:

A Drupal installation directory could not be found

You probably typed in the wrong remote directory. Try again!

I'm going to end this with a semi-related animated gif. Please share your semi-related workflow commands (or animated gifs) in the comments.

Oct 27 2012
Oct 27

UPDATE: Now running Drupal 8. Info is outdated again!

Background: Read this older blogpost.

So I just made a new theme for my blog, and as it turns out, I got one extra HTTP request. I started using Font Awesome with the theme and the file is too big to be embedded as a base64 encoded font. Darnit. So up to 2 internal HTTP requests.

But anyway, I made a new direction in how to cache my css, and the result is way better for mobile.

CSS is looped through (with the core function drupal_load_stylesheet()) and then processed the same way core processes CSS with aggregation. This way no image paths get messed up (and the same goes for the path for Font Awesome). Since this is heavy to do on each page load, I cache the result of this processing, and in the page files I just print out a javascript that you can see if you view the source. The same goes for javascript, a different loop, but the same result. This is for example my CSS function:

$style = '';
foreach ($css as $stylesheet) {
  // NOTE: the condition here was mangled by the blog formatting. It checked
  // the stylesheet and skipped one particular file this theme does not need.
  if (/* condition lost in formatting */ 0) {
    // Skip this one, at least for my theme.
    continue;
  }
  // The following are nicked from Drupal core.
  $contents = drupal_load_stylesheet($stylesheet['data'], TRUE);

  // Build the base URL of this CSS file: start with the full URL.
  $css_base_url = file_create_url($stylesheet['data']);
  // Move to the parent.
  $css_base_url = substr($css_base_url, 0, strrpos($css_base_url, '/'));
  // Simplify to a relative URL if the stylesheet URL starts with the
  // base URL of the website.
  if (substr($css_base_url, 0, strlen($GLOBALS['base_root'])) == $GLOBALS['base_root']) {
    $css_base_url = substr($css_base_url, strlen($GLOBALS['base_root']));
  }

  _drupal_build_css_path(NULL, $css_base_url . '/');
  // Anchor all paths in the CSS with its base URL, ignoring external and absolute paths.
  $style .= preg_replace_callback('/url\(\s*[\'"]?(?![a-z]+:|\/+)([^\'")]+)[\'"]?\s*\)/i', '_drupal_build_css_path', $contents);
}

And then to the javascript. It is dynamically printed on cache flush, so just view the source to see what it looks like. So what the script does is check whether you have the css or js for my page cached in localStorage. It is stored inside an object with a key, so I can clear the cache and invalidate your localstorage cache. The key is generated for each flush cache action, with the hook_flush_caches() hook. If you don't have my assets, the CSS is loaded by AJAX, and I do a synchronous GET for the js (don't get me started on why it is synchronous). So how do I avoid showing the page without styling while waiting for the css? Simple. In the head of the page there is a style tag that says #page should be display none. When my css loads, it has rules saying it should show the #page. Done.
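
A minimal sketch of that key generation (my guess at the shape, not the actual module code; the variable name is made up):

<?php
/**
 * Implements hook_flush_caches().
 */
function client_cache_flush_caches() {
  // Regenerate the key that invalidates visitors' localStorage copies.
  variable_set('client_cache_key', md5(uniqid('client_cache', TRUE)));
  // No extra cache bins of our own to clear.
  return array();
}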

So then you think "Man, all this foreach stuff clutters up the template file". Nope. This is the actual lines from html.tpl.php:

print client_cache_styles($css);
print client_cache_scripts($scripts);

"And what's with the non-jquery javascript syntax?" you may ask. Naturally this is because the same code also loads my javascripts, and so jquery is not included at that point in the code. And also, writing native javascript is fun.

So this makes for 4 internal HTTP requests on first pageload, and 2 after that.

Expanding this, I also used the same trick on the $page variable, so the entire page was loaded with ajax (or localstorage, if available). So I actually got a full-featured, fully styled CMS in a couple of KB and 2 HTTP requests. This has some side effects, obviously, so I skipped that one on the live page.

The module client_cache will probably be made available one of these days, if I get around to it. But it's probably not a good idea for most sites!

Let's finish off with a banana musician!

Oct 27 2012
Oct 27

Where I work, we have some bigger clients with advanced integrations against their accounting systems, stock control, exports to old Windows systems, and the list goes on. So these things are not something we want to (or in many cases can) run on the dev version of the site.

To keep things in version control, and to avoid having to turn things off when dumping in a fresh database copy, we use the $conf variables in settings.php.

The file settings.php is not checked into git, and this is also where we automatically turn off JS and CSS aggregation, by for example setting:

$conf['preprocess_css'] = 0;

And some other stuff. But we also add our own variable.

$conf['environment'] = 'development';

This way we can do a check at the top of all integration functions:
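
(The original snippet appears to have been lost in formatting. A minimal sketch of the pattern, with a made-up function name:)

<?php
// Sketch: bail out of an integration function unless we are in production.
function mymodule_accounting_export() {
  if (variable_get('environment', 'production') == 'development') {
    // Never talk to the external systems from a dev copy of the site.
    return;
  }
  // ... actual integration work goes here ...
}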

So keeping this in our production code ensures that integration functions are not run when developing on a dev site. Also, a lot of cron functions are moved to a separate drush command, and run by the server cron instead of implementing hook_cron(). This way they will never be run on the dev site.

I am sure everyone has their own way of doing similar stuff, so please feel free to share your similar tricks, or improvements, in the comments.

I guess this calls for a transport related animated gif:

Jul 03 2012
Jul 03

A while back I wrote a blog post about loading my homepage in 1 HTTP request. As I said back then, this was only an experiment, but as promised I have done some testing to see if this was any use at all.

A short recap for those who do not want to read the whole (previous) article: The front page of this website is loaded in 1 internal HTTP request. All images, CSS, media queries and JavaScript are in one flat HTML file, served directly from the server cache. So, then you open your inspector and find out google makes 2 requests, and I have the twitter feed loaded async. OK. But 1 internal.

First, let me explain how I did the tests. I did it locally (so network lag should not be an issue), and with phantom js, a headless webkit browser. So this should mean that the page is fully loaded in a browser at the times presented, and that it's not just the request that is done. I also did 1000 runs on each setting, just to have a lot of numbers. I never tried it going back to normal images instead of base64 encoded though. Should probably do that too at some point. Anyway, here are the results:

First up: No caching, no aggregation, just how the frontpage of this blog is in this theme: An average of 1525 ms. Not very impressive. But the optimization effort is not very impressive either.

Second: Cache pages for anonymous users. Because the test requests are not logged in, and none of my visitors are logged in either. 131ms. That is improvement. This is localhost of course, but that is also the point.

Third: Javascript and CSS aggregation. Fewer HTTP requests finally! 125ms. Not a big improvement I must say.

Then: Turn on the boost module. Serving plain HTML pages with a couple of aggregated CSS and JS files: 85ms. Pretty darn fast. Let's try to get rid of the remaining HTTP requests.

The current version, how I now serve my homepage: 1 HTTP request, plain HTML from boost. 92ms. Darn, that is actually slower again. Browser cache definitely plays a role for rendering fast. Can I do some more tweaking to this?

On to the last test: Get rid of all the whitespace I can find in CSS and JS, and try again in 1 HTTP request. Down to 89ms!

Ok, conclusions: If you have a feeling your visitors will visit more than one page on your site, you have no need to go all the way to 1 HTTP request. And a couple of disclaimers: This will probably vary depending on connections, how fast your computer is (for browser cache), and probably the results will vary from where in the world you are accessing my site. But with some variables out of the way, it seems that eliminating all requests is no performance gain compared to lowering the number of HTTP requests and minifying. Also, is it not interesting that getting rid of whitespace saved more than 3% in load time? Another reason why it's probably best to minify, gzip and aggregate. And also, other caching methods than boost are probably more efficient, but I keep this blog at a cheap host, so Varnish, memcache and so on is not an option. And frankly, this post can not cover both front and back end performance, right? For my next test: does anyone have any good tools for doing the same tests for a “headless mobile device”?

Jun 10 2012
Jun 10

Well, not really. I mean, you can create webforms programmatically pretty easily. This tutorial will show you how easy. Or you could just use the rules module if you just want the node created. But also, I want to share the things that got me scratching my head, like creating the description of each webform component programmatically.

I am sure you have had this scenario as well. You enabled the webform module and taught the site admins how to create them and add components. But each time they want to create a new webform, they send an email to you asking for help, and you practically end up creating the nodes for them, since it has already been 3 weeks since the last time you told them how it was done. OK, so let's make it dead simple for them. Create a node with a title, and put all components in the body - one per line. Ah, no more emails, and code nerds can go back to their terminal and away from clicking with a mouse.

Use case:

So I have this site where you can sign up for parts of an order. Like a co-op. Let's say we are ordering kittens. So an admin puts out the news that he is shipping out a new order, and these are the kittens that are up for grabs.

  • Grey kitten
  • Black kitten
  • Killer kitten
  • Clown kitten

We want to create a webform to find out how many kittens of each type the users of the site would want in this order.

So instead of me telling the admins to create a webform and add numeric components for each kitten, I just tell them to go ahead and click the big button that says “create order” (visible for admins only). In the title field, they give the order a name (like the name of the supplier) and in the body field they list all kittens available, one kitten per line. Optionally they can also add a URL to an animated gif of the kitten, included on the same line in parentheses. So much easier for them, so they can concentrate on kitten distribution instead. I also added a description field so they can use that for a closer description of the order. OK, so this is the code (Drupal 7):

/**
 * Implements hook_node_insert().
 *
 * Note: the opening of this function was mangled by the blog formatting.
 * It is presumably a node insert hook (the function name here is a guess)
 * reacting to our custom "order" content type.
 */
function mymodule_node_insert($node) {
  if ($node->type == 'order') {
    // I am using the body field for values.
    $body = field_get_items('node', $node, 'body');
    $comp = explode("\n", $body[0]['value']);
    // $comp now holds an array with all lines in the body field.
    $i = 1;
    $components = array();
    foreach ($comp as $c) {
      if ($c == '') {
        // Skip empty lines. Maybe the non-tech admin pressed an extra enter?
        continue;
      }
      // Create the webform components array. Not sure if we need all these
      // values, but let's be sure.
      $components[$i] = array(
        'cid' => $i,
        'pid' => '0',
        // I don't trust the admin to make a key based on input :)
        'form_key' => 'kitten_' . $i,
        'name' => $c,
        // I want all lines to be numeric type component.
        'type' => 'number',
        'value' => '',
        'extra' => array(
          'title_display' => 'before',
          'private' => 0,
          'type' => 'textfield',
          'decimals' => '',
          'separator' => ',',
          'point' => '.',
          'unique' => 0,
          'integer' => 0,
          'conditional_operator' => '=',
          'excludezero' => 0,
          'field_prefix' => '',
          'field_suffix' => '',
          'description' => '',
          'attributes' => array(),
          // The number has to be positive.
          'min' => '0',
          'max' => '',
          'step' => '',
          'conditional_component' => '',
          'conditional_values' => '',
        ),
        'mandatory' => '0',
        'weight' => $i,
        'page_num' => 1,
      );
      $i++;
    }
    // Alright, let's create the node object.
    $n = new stdClass();
    $n->type = 'webform';
    // Let the user be the author.
    $n->uid = $node->uid;
    // Let the 'order' title be the webform title.
    $n->title = $node->title;
    // I put them all on the frontpage.
    $n->promote = 1;
    // And we are allowed to comment.
    $n->comment = 2;
    // Enter webform array.
    $n->webform = array(
      'confirmation' => '',
      'confirmation_format' => NULL,
      'redirect_url' => '',
      'status' => '1',
      'block' => '0',
      'teaser' => '0',
      'allow_draft' => '0',
      'auto_save' => '0',
      'submit_notice' => '1',
      'submit_text' => '',
      // Users can only submit once how many kittens they want.
      'submit_limit' => '1',
      'submit_interval' => '-1',
      'total_submit_limit' => '-1',
      'total_submit_interval' => '-1',
      'record_exists' => TRUE,
      'roles' => array(
        // Only authenticated users can submit this webform.
        0 => '2',
      ),
      'emails' => array(),
      // Here comes our $components array.
      'components' => $components,
    );
    // Here i also add the description field to the webform.
    $description = field_get_items('node', $node, 'field_description');
    if (!empty($description[0]['value'])) {
      // Blah blah.
    }
    // Save the node.
    node_save($n);
    // You could also put something like drupal_goto('node/' . $n->nid) here if
    // you want. My use case is different.
  }
}

OK. This is all pretty straightforward, eh? So the thing that had me going nuts for a while was adding the links as the description of each component. Looking at a webform node object, one would think it would go in the “extra” array of each component. This is what a webform node with a component with a description looks like:

$node->webform['components'][1]['extra']['description'] = 'A description';

But after trying different takes on adding it to the array (even trying to modify $n after it is saved, and saving it again), nothing seemed to do the trick. Luckily, webform has its own hooks. Enter hook_webform_component_presave(). Or “Modify a Webform component before it is saved to the database.” as it says in the documentation. Perfect! Let's go ahead and add a link to the animated gif as a description:

/**
 * Implements hook_webform_component_presave().
 *
 * Note: the opening of this function was mangled by the blog formatting; the
 * condition checks whether the component name contains a URL in parentheses.
 * The function name is a guess.
 */
function mymodule_webform_component_presave(&$component) {
  if (strpos($component['name'], '(http') > -1) {
    // Take the URL out of the name of the component.
    $name = substr($component['name'], 0, strpos($component['name'], '(http'));
    // Add animated gif link as description.
    $component['extra']['description'] = l(t('View animated gif with @name', array(
      '@name' => $name)), substr($component['name'], strpos($component['name'], 'http'), -2
    ));
    // Set name to the new name (without the URL).
    $component['name'] = $name;
  }
}

Bottom line? Webform is awesome for these kinds of user submissions, and now that we can create them programmatically just like we want, things just got real easy for the non-techies on this site. Now let's see how many killer kittens I can afford. Sorry for the large GIF this time, but I could just not help myself. Killer kitten to follow (5.7MB):

May 08 2012
May 08

Woah, that's a long title.

Today I wanted to share the little shell script I use for setting up local development sites in a one-liner in the terminal. Be advised that I don't usually write shell scripts, so if you are a pro, and I have made some obvious mistakes, please feel free to give improvements in the comment section. Also, while I have been writing this post, I noticed that Klausi also has a kick ass tutorial and shell script on his blog. Be sure to read that as well, since the article is much more thorough than mine. But since my script is for those even more lazy, I decided to post it anyway.

The idea I had was pretty simple. I constantly have the need to set up a fresh, working copy of Drupal in various versions with various modules, e.g. for bug testing, prototyping, or just testing a patch on a fresh install. While drush make and installation profiles are both awesome tools for a lot of things, I wanted to install Drupal without making make files and writing .info files, and at the same time generate the virtual host and edit my hosts file. And why not also download and enable the modules I need. For a while I used just

$ drush si

(site-install) on a specified directory for flushing and starting over, but partly since I have little experience in writing shell scripts (I said that, right?), I thought, what the hey, let's give it a go. Fun to learn new stuff. On my computer the script is called vhost, but since this is not a descriptive name, and the script is all about being lazy, let's call it lazy-d for the rest of the article (lazy-d for lazy drupal. Also, it is kind of catchy).

So the basic usage is

$ ./lazy-d [domainname] [drupalversion] "[modules to download and enable]"

For example:

$ ./lazy-d example.dev 7 "views ctools admin_menu coffee"

This will download Drupal 7 to example.dev in your www directory, create a database and user called example.dev, install Drupal with the site name example.dev, edit your hosts file so you can navigate to example.dev in your web browser, and download and enable the views, ctools, admin_menu and coffee modules.

Running the script like this:

$ ./lazy-d example.dev

will do the same, but with no modules (so Drupal 7 is default, as you might understand).

You can also use it to download Drupal 8, but the drush site install does not work at the moment (it used to when I wrote the script a couple of weeks back though). Drupal 6 is fully supported.

The script has been tested on ubuntu 12.04 and mac os x 10.7. On mac you may have to do some tweaking (depending on whether you have MAMP or XAMPP or whatever) to the mysql variable at the top. And probably do something else with the a2ensite and apache2ctl restart as well (again, depending on your setup).

The script obviously requires apache, php, mysql and drush (git if downloading drupal 8).

OK, to the code:

#!/bin/bash

# Configuration
VHOST_CONF=/etc/apache2/sites-available/
ROOT_UID=0
WWW_ROOT=/var/www
MYSQL=`which mysql`
DRUSH=`which drush`
# If you don't use drush aliases, uncomment this.
DRUSHRC=/home/MYUSERNAME/.drush/aliases.drushrc.php
TMP_DIR=/var/www/tempis
SERVER_ADMIN=admin@example.com
LOG_DIR=/var/log

# Check if we are running as root
if [ "$UID" -ne "$ROOT_UID" ]; then
  echo "You must be root to run this script."
  exit
fi

# Check that a domain name is provided
if [ -n "$1" ]; then
  DOMAIN=$1
else
  echo "You must provide a name for this site"
  echo -n "Run this script like $0 mycoolsite.test."
  exit
fi

# Downloading the correct Drupal version.
cd $WWW_ROOT
if [ -n "$2" ]; then
  VERSION=$2
else
  VERSION=7
fi

case $VERSION in
  7) $DRUSH dl --destination=$TMP_DIR
  ;;
  8) git clone --recursive --branch 8.x http://git.drupal.org/project/drupal.git $1
  ;;
  6) $DRUSH dl drupal-6 --destination=$TMP_DIR
  ;;
esac

# Move the folder to the correct place. Drupal 8 was cloned directly to the correct place.
if [ "$VERSION" -ne "8" ]; then
  cd $TMP_DIR
  mv drupal* $WWW_ROOT/$DOMAIN
  rm -rf $TMP_DIR
fi

# Create the database and user.
Q0="DROP DATABASE IF EXISTS \`$1\`;"
Q1="CREATE DATABASE IF NOT EXISTS \`$1\`;"
Q2="GRANT ALL ON \`$1\`.* TO '$1'@'localhost' IDENTIFIED BY '$1';"
Q3="FLUSH PRIVILEGES;"
SQL="${Q0}${Q1}${Q2}${Q3}"

# If you want to keep you password in the file, uncomment the following line and edit the password.
# $MYSQL -uroot --password=Mypassword -e "$SQL"

# If you have no password, uncomment the following line to skip the password prompt.
# $MYSQL -uroot -e "$SQL"

# Else, this prompts for password. If you have uncommented any of the above,
# comment this out, please.
echo "Need the MYSQL root password"
$MYSQL -uroot -p -e "$SQL"

echo "" >> $VHOST_CONF/$DOMAIN
echo "    ServerAdmin $SERVER_ADMIN" >> $VHOST_CONF/$DOMAIN
echo "    DocumentRoot $WWW_ROOT/$DOMAIN" >> $VHOST_CONF/$DOMAIN
echo "    ServerName $DOMAIN" >> $VHOST_CONF/$DOMAIN
echo "    ErrorLog $LOG_DIR/$DOMAIN-error_log" >> $VHOST_CONF/$DOMAIN
echo "    CustomLog $LOG_DIR/$DOMAIN-access_log common" >> $VHOST_CONF/$DOMAIN
echo "" >> $VHOST_CONF/$DOMAIN
echo "    " >> $VHOST_CONF/$DOMAIN
echo "      AllowOverride All" >> $VHOST_CONF/$DOMAIN
echo "    " >> $VHOST_CONF/$DOMAIN
echo "" >> $VHOST_CONF/$DOMAIN

echo "127.0.0.1 $DOMAIN" >> /etc/hosts

# This will enable the virtual host in apache. If you are on mac, just include
# the vhost directory in the apache config, and uncomment the following line.
a2ensite $DOMAIN

# If you are on mac, check if this will really restart your MAMP/XAMPP apache.
# Or just uncomment and restart with the pretty GUI manually.
apache2ctl restart

cd $WWW_ROOT/$DOMAIN
drush si --db-url=mysql://$1:$1@localhost/$1 --site-name=$1 --account-pass=admin
if [ -n "$3" ]; then
  drush dl $3
  drush en $3
fi

# Create Drush Aliases
if [ -f $DRUSHRC ] && [ -n "$DRUSHRC" ]; then
  echo "" >> $DRUSHRC
 $DRUSHRC">
  echo "<?php" >> $DRUSHRC
  echo "\$aliases['$1'] = array (" >> $DRUSHRC
  echo " 'root' => '$WWW_ROOT/$DOMAIN'," >>$DRUSHRC
  echo " 'uri' => 'http://$1'," >>$DRUSHRC
  echo ");" >>$DRUSHRC
  echo "?>" >> $DRUSHRC
else
  echo "No drush alias written."
fi

echo "Done. Navigate to http://$DOMAIN to see your new site. User/pass: admin."
exit 0

All done, indeed. Pretty straight forward code. Known issues: will drop existing database if you specify one that exists. But why would you do that? Don't you know what you are doing?

To finish this off, here is an animated gif called "alien computer":

Feb 25 2012
Feb 25

UPDATE: New theme, new code. Click here to read about the way I use cache, and limit HTTP requests now.

Disclaimer: This is an experiment, and would probably not apply to most websites (if any). My mission: Load my homepage in 1 HTTP request without compromising layout or functionality. Just to be clear, the motivation is not to gain any performance from this, just to see if I can do it.

So, back to the point. Drupal has a lot of css, js and image files by default. To load them all you have to ask the server to serve them to you, simple as that. So if you have to ask fewer times, the page would load faster. In theory. This blog has over 40 requests on the front-page, out of the box. Luckily, Drupal has a number of tools you can use to minimize requests, and among them is the optimization you can find in core. You can find it under the “performance” settings (depending on what version you are on, this is located under “configuration->development” or “site configuration”). Cool. But this still leaves me at more than 10 HTTP requests. I want less! So, what next?

So the next step was to avoid loading too many images, so I decided to start with base64 encoding my burning logo (which actually consists of one file per letter).

To base64 encode files is extremely simple. What I usually do is use a small php file containing only this:

<?php
// This is the file.
$file = file_get_contents('some.file');
print base64_encode($file);

If you open this up in a browser (or run it from the command line), you get a long string back. This is your image. To use this in a page, you just go like this:

#someid {
  background-image: url('data:image/png;base64,A_LOOOOOOONG_STRING');
}

Simple! Things are getting somewhere.

The next step was to embed the fonts I use, without querying google fonts (hope they don’t read this, as I am not sure if I am allowed to embed them like this, does anyone know?). Same step here with the base64 encoding, and the styles went in to the head tag something like this:

@font-face {
  font-family: 'My cool font';
  font-style: normal;
  font-weight: normal;
  src: local('My cool font'), url("data:font/opentype;base64,A_LOOOOONG_STRING") format('woff');
}

Awesome! The number is decreasing, without compromising the layout.

The next two steps are kind of site-specific, but basically what I tried to do was iterate over the scripts and stylesheets, and print the contents of the files instead of embedding them. So in a template file I put something like this:

// (The opening line here was mangled by the blog formatting; it printed an
// opening style tag, something like the following.)
print '<style>';
$style = '';
foreach ($css as $c) {
  // Do some checking of some media query stuff. 
  // ...
  // Then store the raw CSS.
  $style .= file_get_contents($c['data']);
}
//do some regex to remove some stuff, and remove whitespace.
print $style . '</style>';

Cool. We have no requests on css! And now to the JavaScript.

This was tricky. I have a combination of standard, custom and a couple of compatibility scripts on my site. I had to throw out a couple of them, and do some reordering of the scripts, and then iterate and print them out in a similar matter as the stylesheets.

Since you probably are looking at this page in a full node view (or you would not be able to read this), you may have looked in the inspector and seen that I have well over 30 HTTP requests. Ugh, those external services, eh? The main reason for this is actually the addthis stuff I have at the bottom. So I have over 30 requests extra just to look modern. BUT: If you navigate to the front page I have tossed out all sharing options and actually clock in at an awesome 4 requests. Two of them are google analytics. The other one is there for one and only one reason: I use the boost module to serve static pages, so if I want to display that talking burning bird without using cached data, I just have to burden you with a lookup to search.twitter.com. I am sorry to burden you, visitor!

Mission is as much as it can be complete, and I am down to 1 internal HTTP request. Let’s celebrate with an animated gif, folks!

Feb 04 2012
Feb 04

A lot of different people have started experimenting with Phonegap and Drupal. You have Jeff Linwood and his Drupal plugin for Phonegap for iOS, and I just discovered Drupalgap as I was planning this post over the last few weeks, which does more or less (actually it does more, but not all) the same things I will try to do in this post.

If you want to get up and running real quick, Drupalgap seems great. If you want to learn the code behind it, and extend it yourself (this was my motivation), keep reading.

This post is dedicated to writing phonegap apps to post images to your nodes, right from your javascript. You just make an HTML file, include some javascript magic, and hey, we just posted images to our site using the camera on the phone.

As I have written a couple of posts now about posting content to your Drupal site from an app with the services module, I thought I should probably do a little more tutorial-like post. I have started moving a lot of functions into a javascript file I call drupal.js that has several different methods of communicating with your Drupal services endpoint. Today I want to just scratch the surface, and provide a working example app.

To get this working on your site, follow these steps:

  • Use Drupal 7 and services module v3.
  • Enable the modules Services and REST server.
  • Add a new services endpoint (under structure->services)
  • Enter a path to the endpoint and select REST server. Also use session authentication if you want to log in.
  • Add json as response formatter
  • Enable the node, file and user resources.
  • Be sure to have a content type enabled, and preferably with a body field and image field (for this example)
  • Download the app and test it out!

The app can be found here (android, webOS, blackberry and symbian only. iOS will not be released, as this is just an example app that I don’t want to get into the app store).

Since this is a hobby-project at the moment, I don’t have very much time to do development on the js every day, but I will post back with more info on that in a later post. The thought is that you can run these functions from your html app, so you don’t have to write all the ajax calls yourself.

For example for nodes, you can do something like this:

var date = new Date();
var timestamp = Math.round(date.getTime() / 1000);
var node_object = {
	"type": prompt("Enter content type"),
	"title": prompt("Enter node title"),
	"body": { 
	  "und": [
	    {"value": prompt("Enter body text"); }  
	  ]
	},
	"created": timestamp, 
	"status": 1
};
url = "http://example.com/services/endpoint";
drupalServicesPostNode(node_object, url, function() {
  alert('Node posted!');
});

The last parameter is the success function (optional). There is also a standard success function in drupal.js, but it is just an alert to tell you that it was a success.

Currently it supports logging in, logging out, posting nodes (also with posting files/pictures).

So let's go into a more detailed tutorial. To post pictures, you have to have a base64 encoded image. This is actually what you get back from Phonegap when accessing the camera, but just to go through the basics of posting nodes with images, try typing this into a click function for something (remember to be able to post cross domain javascript and include drupal.js. And also, if you just include this code (and are not logging in), remember to also be able to post as anonymous. Oh, and obviously, include jQuery)

$("#someid").click(function() {
	var data = "R0lGODlhSABIAOZ/ANtwcLu6uvXY2M/NzdNQUP76Av55Bf2PBeZHBtgtCPz09GwxMf3QBPf3lzgrK/HJye25uY2MjP39X8EBAVIwMPttBoIqK//+/qloaJGMGHJvbeSamuikpL4AAfrr62BeEt+Bgf2tBA8MC8kmJvj0OMYUFcQJCuGJiZtcFujo55kWF5WUY/34+Mw1NswVCPjj4//o2EFBGfzrAkdGRfHx8aYFBSMcHNC8r+52NKs2NcO9LN1NLbaFhMEECPDBwHZ0Pd45Bv78FHpEQcxYAs87OMMHCLE1BOjlAkcFBvfg4P//1Nzb29plDuFbLtEfBvTU0sR7MZAqBL4ZFVIiCvbsG/no59g/IMkMAuyFOHgGBsuJAv//+MUNDaAODulsB//69s0xLv77+/PPz915eeuyssorLKkeHsgQCmg9BrsPDdZcXNhkZOOSksgeHpAhIcQFAqeko+1WCOtyBu19HvRiB/bd3f2BHPvw8DAvENBCQviiJOqsq9zaB7MQDgAAAP///yH/C05FVFNDQVBFMi4wAwEAAAAh+QQJCgB/ACwAAAAASABIAAAH/4B/goOEhYaFCoMXgw9jAoeQkZKTkYuEDxCGbGMehwp3lKGih2GESW2PhGwAloYPiaOxoQKggmEEHVWEPkSwhmF7rR53F6Wyx39ia74CE0Wpfwp5PpUbixcbBAoAtcijDyPUjBNnmYJkea2HZLoQHWwcXMbeoQotAIU+EyUcg2pjk5JYG9NBDIEJL+iJetDhSSEPBx8IUlBmwyQWIARsyFOlxARzCif5MMFC0B1jJ1rA8mDC4iQOJ8QIYHHQh5iSISPVKVFrg8M/EMgM8jDhhKBihhI9yINzo6Ocki6cWPOAw4g9gli0UmBizSIFIAVh+iOgCCwQBDol7ZZTAcEJE//6GbqgxkSiMCBanXr0wESnVfMKvYI6KAyACWASHtrQASsvYwJw6eLQoQ4vX4WAqfuDlF6SIiAi6cvDoUW6ZCCcPQJQOc8GDQMOYVNnmK2kwIVLiIn0YEKLHmDW/GGxBgI5CC/cIOFhRoSDABE0BCjEjlA83JGGzQUhVJEAYz7acBnRJhEBEPr4CfEjQoUbPw78yNdQSKAlmghF3flZiIOaeWRIJAgbYHRAhENhmMAGRBNgIJ8NNVggH3wR0FAIRql09JEoLIhDXQ9qQWCULXmAYcJuf7xQAgGNSGGDfA6YYIENEcQGCUxZ1SSKADcs1kFCCoyQxCBPmAAGTwPm0QH/Fy0sMKEDE8xogwMawJGCK3kYs0EbtkWCgQgzWEnIGrn8wZBif4DQAQBTVTWCkiVMIcKENpjwnh82OEmjhYOUBYsAI1KSRA5z4glbNHmUkYg7G5TyxAglVOFWB3CRmcWEE6pgwosiSAjjdGL5ZctmkYAAwAyY+jEDBh2EBtQECpJx0IiGIbZBDYViakEHTlLQRqEU0CgIZXXIogAYG8CRqnwiaLBEMhNwMUEHb4gqyGcgQIDEsn4sMEEXC6hQxpwUWDBnBH8chqYodTSmQHzcNjtAC1yINwFAuYlxA7d+UDDBG1x0sAaebsDrBxxllKDWKHVMYNEAuS4rAgU1tADG/wRDKsLdesxSUOgCHZjQQR8vEuApe25MwJQoxtzBBSt/BBAxtwvk0AZOg3BAwIvyLUDBhBbA1cPPIsjIbNATCPeHBw+QOsgLqbCgRg+6/DGAwdzmiYSNg5DRA7wLnCxCF878DF8RKlBAgRsdFNGBRXecsPAhbICg5Tth0LDEl/zCR3QErbDRQXwQGkyBCRM4CWMbSyKOeBszpFmsJJSZU0cPRfAwM7czsmfDDFf+gUt8FAwOdAd3PlhGByWUwEURNfw8wGmSPCHwNYfV0PeTuPoN3xIemFHDiyCb228NRWAtAtttjCCtpxjIM8kFAJRgiQAXb7u7BXc6sMCLNuTQgf+n3soYbpTL7tpGG2+owLMQE6A4yBLPmjJGKxu40MfuEHoqwgIL8F4XYicff0mLWl9b1gJ64LoaGMxbeVjCAKAzAwfUTxFVE4QHwIAAI/StdIpjjwqK4LbUQYgLrkOfAk1QBBUW0G0hPBglsOEEOgyBZ6ny18m6BRc34NAPFjBBCc4wAbOlagFXuEIHjNgvuCjuOX/okiHqMIIEGMAAUdBeoRzABTfk6nDuS5UDRviGNvwQaFdwAhewhjQLhCkRBgnFGN4QBwMkIAtIcIDZbKCCBBZwfE/imbcQszkRmMEJCehBCCkmLQyE7g4jaAEkPICzBxQBCCgolAWMB0RAymf/Vyfz2YNU0IERbM4BHUBAHDpQg+9RoAsmMIEDRTAlIdQADpN8giXu0AYPTghkiqMA5lxJNhXExwHg+iWsmCiflNHBAEB4A6XgErAdCoEAkSCOWlhQRTRMqHR3Wh6sIFUEE7TPDV1Y4pOk1YcQ5qkHTqiAASpAByC4IFrOiA+YIiAAaBiCBbo5ihoS4AUtuqEEwKoBXGJpAi684Q1wMZgNRgCXNFggbCMoQg8QcMUr0kGNE6gBD+AQgND9ATuDcIeHCAAEOpwhC3NyQOqA6AInmKAHLkiACxDXBRx2ClYshMu04tnROOz0DTUQAmeaRokHEAFjGiwDAujggh50QAga/9CAwRzAQQQ4QZVV9STQnNFQLnChBxu9YgWAcIYiIHUB9HnAuiAhBopOQJd4KUIcKuAEzFWrObqq4RXjcFMC6uoKicQcWoHQUQTEsgfJnM4JcGaLQtzhYg5dUQs64AJ5IuAMbevDzBxQAysaAAE9mMBMm1lDr57BBUCQpwFcmloCOuACUuFPNOb2hz1wVqep7UAHGDtPBFT1UstyAx2hGVKsidC0FYgDHWQLTcwlTj64/IMPAEDZzBwkAYTtwU2J2lE6xMGXDsAhBS5pR7E+yQTE7egBDjDPRE5ABTE1hgLUICBPmBMBfBWvYA/AgANUQLbe1COd2hDbvjq3iRwlsP+BQxCCAwMhtZ4ClSAEgBlEcOEKz6QDAgA8zwMUQAYhMLA8UYCGNcSAWVWsgAtKcEY/KDcOBC4AfRlQ4dO2UAV+oA8hSLUE/ZbABc+8In0PHIICOJkBUIYyFUqguPYkoAJX4AIzKdZZA4SAAQY4gAzAPNt7TkAIfJLEEio0nDIsN8wp9rIMgiABCTj5zlBQp3xU0FIihpFZKphAfEtcADKfFrQtmCskAvCcC7BhAhwN84kZQIIgBKEBf1BCA4LABxSQDV6knWpV7+sAWgZNr2GeZwWaLAMDXxEBbQBBBiehrDBxgahiLsClG0ACCWzhDymIgC3hQjQL6DXAbjVBG0r/8AZ40uEAX6awDJ48TwPMwQcojYSy4JMFI8yzyQUgAQkakAI4aICWCs0sATbb5c+2DXHTMkGEp33nOzNADjiAATLgkCsRTGEKaPiAwPGQK5DldNTCJW4FjItWE5zhDMQFN73pzQcdSFEUAahxvAIN3puOl7qzjcOIVXngXAeBBHc+gg5u8GuFLAFVu8PTfwNsggEX+MAd7WgI6D1uEqg8AyYNiQI0EHMbfDjEI5
ZnBUyMYhXDmQH0LgAVdJCBD2SXMIJgdN9scOQkp5rJ9o4y1O/M6QzEQANBx/pJ4aBxEbi5jnCm787pbOd639nsZ+ea2gtBAzg8OGiRZjqlLd0AeT98IANBOLEMdPADKl1w75EIgAbUe2t5mrwBvJYAz/Dwgwz8oEpphjwlaCD5GdjAAlzg6KqPcARxF35On4tAAEIvelnQYAAAGEEcmIACz/9gBSsg6RJoX3t6hCEJAqjCF5xW/OY7//nQj770p0/96lv/+tjPvvYpEQgAIfkEBQoAfwAsAAAAAEgASAAAB/+Af4KDhIWDF2GGiouMjY6Phhd3Yg+QlpeYjmEALWR3maChli9lIz6iqKmFYQ8mZKqwqRdklYIsLBeEubG8fxdVLL5/CriCT2QChGEeHom9qWF1e3t1n4R3J3nJgmFVaw/Wz6IsdT4AbNvcTx154WEQbWwK4tAsbGrpf0ltHRDBtmRKcJhHj5Gkf4ousCHwwpkAAhPGeCDkYUOZFwUNTnKmKMwJANvqrIkYDNEfFgTAJMlo8EGVXYqq5NmQS8GeIidqihF0osMrlouS7GTEAgQYgizWsAkmRs0eMWs6jAGKaOKhJLUY7ZkgZleVhn/EgGnDJc9MqkkIrBQ0ay0jMRP/aB66wMIHmw5cfAgYOohGrgAaBsAKU4SAglwsTryMFCZXFRMgFAkAsKZIFWGCFACG48CPgwARNARAdSdPDwiJkoxxS4gFmVweJshT5GFNC4yEeFAQ4ad3594aUL2IyiHXCwJ1Eo6ZJ6YHm6yFPOSJPAgOhRq/e9twEIEGKgVkeuzJXIJ1osZ52NTZ0OEEh0R0CXkDc+pPAM8dVPDmPiDfOAKondTGHv+QI8AdAIRHQBkTqMFGTU/g9ocHJmyQhwJL2OAHBUVMQIEfNtgwQw48pKBKEoedpAYB21wgwAk+cLEBAB1M0EN6bD1xwgO53FECCHnsMUNvC5zhYW9E1iBE/3exsMBBEXvsckEdY0xAAIIdmFAGABw11RULanABQB5ILlDCBAsg2ZsbXWw3miouFtFCV4OAMEEJVSgAAQEjSOGDMxeMceUfZJigRg0abjiBCtn1RkEHH4oQASxgTtDGn3+EscEEF2Y6hhodEDHUBSBI8cQwauRgwm82dGFBbyIkaoMJafY2qSgX5ELGGVmO4cMDNK5xXh4Q0KhGHQrUwWCAN1gwQXYL1ErBqyCW8CGScIQyyWUegFDjnTaWoUYiChQhgAIQEXACASZMcMInGrhhQqJ+xJrkAhS4YQFvSIqwBChhCLDBqQoQUUQRXJRQAhcEBENhJcdN0EEHJRQhbP8KDpiggppqLjCBxBbQi+QMofyixjweADBBEW2MYOUndXTAgSAPEFGGwmW0IUAEC3RALcee1TCBG/wC/SYoAJwwURUnTHBGGz2o9ccLEwBgCwQjEGHCggQI0QWaQCNpQRGNckwyKBds0EYtd2zAhQlFgJGMByVcZAsBbfwYFRdvHBn2hiUUrSa/gmXCwhhcZJVYD0W4EgYLeVQNHxsjAKhAGll2UKsIu3X8s5oO/BZcJneMUASdglRBxBlgAPDJSO7m8kAZJjwBxwI9fPwzBbXWq0LvalKQ6NmFLPHvIsp2sAdBJ4FwhRVgvPJAB0V0ANIGFQvgaIddFG1BrQ70ADz/ksLDCscASwwA2gwOHK/IE3mwI4YzCozxBgI7tFDHBW1M0K4JXOhAGyJQJv9dK2gacoDPwkaBsgkuW75ggQdyNYg6QIRh4HhHCVxQATqUIA8PEIPpElaCDpRhSEhywwJ9Ayn8uEFNNuCNDT4HuqMNYwMnSMIuWOA2hI3gBGMowf0MYAcugKEFK5oAF0bAjxwIzgFFqEGkLKA5ENWgTeTrjAiw0y8RwYF5bBFDzsAyjDE0rgc1ckIcDGAAHEyACBPL2xva0ESgOet3FmhXrahYKxss4Df6GhkcTMQN77ClZgCQUG2oV4QzAIGNdkjABJrghA70oAUIK0EPhAC08L2h/0Zc8Nt1uvDHaPHLAoga3HZCZ8N3gCA5wqhCEP3nAicgAAfUwwIdEtCDHjCocePrjbMSxoVUromR+ilTC4FmAwgSQgEoiuAdLuADMhAglEXoARE+hgMDVAAITthBB65whWAq6gwlMMHnbOCGjwHPWQes1wyYlBBCiMEDHCCADyyUTZdNYAdsrEAFcMAFJ1yBhtppwxuKwCiO2UAFJjigCNzAhc5EIAABIKQl6iKAFoygCi+qmxLP0M0DHAAHPUiAE8wgOFiBwZ3ayQ4VDxg+FYToDwK4TCZ8AIEHtOEVdyCDaUxgAifgQA9YcEEH4oCADpStXiPIT6J4J7YV+oGKr//SgKagEwm6ICQMHIDAHwAwHl+coAx4+ZjE3vBIOgwNaBTQY5m60BkHqGACXRDeArL5oQDk1BEXeMEYAjQMbbjILQ8AQ4dC6QIg0KEC3nSC+Oi1gD4oMTuP6oIbVPAGuJVgBO0imgNM8ghWjOBUf+BAeTI1CBY84JJ9Kygd2MhGBDBuBN+zQBo+NgJ6OeBjb3jD24j6MSn6wZmQuAAHBvuAEZQBlq0FQQ9M0IQrdNYFawxoJdXauHbtC0mtomUCXDBdJ7jgVQ7giCXusKD+maIQAiACGBIw0DMs1JGQNYBte8CF/sJtAjSkYg+wiwBbIgAMnbHheluwsnap4QkKuMP/HQRQBimAYbYDlcJ0TYAA2gKhl9lM6UE5dp3pZpcOTsjqI5agXkI1mAusI8IagIhWIgDUm1pYgNB6cIbZehMILjiDLVH8Qo61swcdNkACsGMDjTJiCfRU0coO5gQgEIEL/HiDFayAA8iigDcey+YjA0qHOOQ3AckEndAeGYc3vBC5jQjAZwQxHP/BDbsVaEMTEBCHHTjBDl5AQ7/umlLImpS2bATCquzYgQQYAAhFoIAQEPIIOPhhBgO4g52IerBHNiEBOGhCB5rwZY45qwdAEGgIQlCBAzDgAPr1G+h64IRvtkEI+8OEpS+tATNQzwQ9uAIdrJClCbzBnHctQocr/xACBhjgAAV4dZuLDMMNViABZZgfKOBQNBEg4dtIyIGxGwdguH7MBT5mgAxgzYAQeNMFxg0eF67gBStESRQBEJmjQvk2vsXbUTXowBmSbAAGFOAA+U30oor20AmkAQs+oDQoloDCMgUXblxIwBVq8L1poXHgAT2ADArAakMbIA4dCpkI7NqBPowBBjD5jgbU5CxOKxvIEpsYrbMLWYMXQAarbvcBUNyDN5SgDcAeAQfCEQs5O6rBB+twHMDphAQgIL89L4DWt/5zWCPABBPrgBTUoG1xhAEOIborUVNKW4EG1NA+/7nWR07yRKfBDGrwBFBoAAevfYzWPn52CEx6gH8Q0J3rBSBBELrOBChs4FxAKcQNMGCBPBCgCQJt9chlMPLF/5zzMiBBA0hwBD7o4Al3aHHk+7I+FDBhCEPQAh/4cIQj/OADGci97nVwgwYoYQurtwQN1BcaDfzg+BrQQATgEIAl+GULW4h58KdP/epb//rYz772t08V7svCEYEAADs=";
	url = "http://example.com/endpoint"; // Your services endpoint
	var data = {
		"file": data,
		"filename": "fancy.gif"
	};
	drupalServicesPostFile(data, url, postwith);
});
function postwith(data) {
	var date = new Date();
	var timestamp = Math.round(date.getTime() / 1000);
	var node_object = {
		"type": "page", // your content type
		"title": "My title",
		"body": { "und": { 0 : { "value": "Some body text" } } },
		"field_image": { "und": { 0 : { //be sure to change this to your image field name
			"fid": data.fid, //this is the fid we got back from services
			"alt": "Some alt text",
			"title": "Some title text" } } },
		"created": timestamp,
		"status": 1
	};
	url = "http://example.com/endpoint"; // Your services endpoint
	drupalServicesPostNode(node_object, url);
}
What this does is create a file called "fancy.gif" and then execute the success function postwith, which posts a node with the file id received back from Services. Or more specifically: this would create a node with that pretty burning bird I have on my site. The crazy, long string up there is the base64 version of it. So let's take a look at how to do the same with a custom picture. I'll do something like this:
$("#someid").click(function() {
	navigator.camera.getPicture(picsuccess, onfail,  { quality: 50 });
});
function picsuccess(data) {
	var url = "http://example.com/endpoint"; // Your services endpoint
	var file = {
		"file": data, // base64 image data from the camera
		"filename": "test.jpg"
	};
	drupalServicesPostFile(file, url, postwith);
}
This shoots the file all the way to your Drupal site with the function drupalServicesPostFile (declared in drupal.js). Then it executes a success function called postwith, which is the same as above. drupalServicesPostNode does the rest. And then we go to our site and look at our fancy new picture node, created right from our phone's camera.

To check out my small JavaScript file called drupal.js, you can go here
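Since drupal.js itself is not reproduced in this post, here is a rough sketch of what the two helpers could look like. This is only an assumption based on how they are called above and on the standard Services file.json and node.json resources; the real drupal.js may well differ.

// Hypothetical sketch, not the actual drupal.js.
// Posts a base64-encoded file to the Services file resource and hands
// the response (which includes the new fid) to the success callback.
function drupalServicesPostFile(file, url, success) {
	$.ajax({
		url: url + '/file.json',
		type: 'POST',
		data: file, // { "file": "<base64>", "filename": "fancy.gif" }
		dataType: 'json',
		success: success
	});
}

// Same idea for nodes, against the node resource.
function drupalServicesPostNode(node, url) {
	$.ajax({
		url: url + '/node.json',
		type: 'POST',
		data: node,
		dataType: 'json',
		success: function() {
			alert('Node created!');
		}
	});
}

The point is just that both helpers are thin wrappers around $.ajax against the Services endpoint.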

Hope this helps someone researching the same questions I did, and not finding the answers.

And to celebrate another exciting evening of JavaScript, here is an animation:

Jan 21 2012
Jan 21

As I have briefly mentioned in an earlier post, you can easily post to your Drupal site from a PhoneGap app. The reason for this is that the cross-domain browser restriction does not apply to the apps you are building, because the app is basically a local file (the file:// protocol). While this is really swell, you don't actually develop on your phone, so it would be practical to try all of this out in your browser before you deploy to your phone.

If you try it in your browser, it will most likely give you "Origin null is not allowed by Access-Control-Allow-Origin.", or just fail silently if you are using jQuery: for example, your success function never gets called in your $.ajax call. So how do you do cross-domain JavaScript on your computer?
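As a side note, one way to at least see the failure instead of silence is to add an error callback to the $.ajax call. A quick sketch, where the endpoint URL and node_object are just placeholders standing in for whatever you are posting:

// The same kind of POST, but with an error callback so a blocked
// cross-domain request is no longer completely silent.
$.ajax({
  url: "http://example.com/endpoint/node.json", // your services endpoint
  type: "POST",
  data: node_object, // the node you are trying to create
  success: function () {
    alert('Success!');
  },
  error: function (xhr, status, message) {
    // A blocked cross-domain request typically ends up here with xhr.status === 0.
    alert('Request failed: ' + status + ' ' + message);
  }
});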

As for actually getting the request through locally: there might be more methods, but by far the easiest I have found is to start Chrome with the following parameter: "--disable-web-security". On a Mac you pass it through open, for example: open -a "Google Chrome" --args --disable-web-security. If you are using Windows, you can just make a shortcut to Chrome and edit the "target" in the shortcut to include --disable-web-security at the end, for example "C:\Users\myuser\AppData\Local\Google\Chrome\Application\chrome.exe --disable-web-security". Remember to name it differently so you don't accidentally surf the web with cross-domain security disabled. Now that you have this in place, go ahead and make awesome apps to post to your Drupal site. Or just wait around for my boilerplate template coming soon.

Dec 22 2011
Dec 22

Cloning a production site to your local development environment is super easy. Often you have it in Git, or maybe even as a make file. Anyway, you just grab the code and restore the site from a backup_migrate dump. But maybe you have a KING SIZE files directory that you don't want to (or don't have the time to) download. Enter Apache rewrite magic!

You can just redirect your local files directory to the external files directory, so that localhost.dev/sites/default/files points to example.com/sites/default/files. To do this, you need the Apache modules proxy and proxy_http. The rest is just a couple of lines in the virtual host config. Example:


    RewriteEngine On
    # Only rewrite requests for files that don't exist locally
    RewriteCond %{REQUEST_FILENAME} !-f
    # ...and proxy them to the production files directory
    RewriteRule ^(.*)$ http://example.com/sites/default/files/$1 [P]

Voila!

Dec 01 2011
Dec 01

Today I had a bunch of old nodes from a migrated D6 site that did not have comments enabled, although the content type had comments enabled. This could also be the case if you have created a bunch of nodes and all of a sudden change your mind and want to enable comments on them anyway. One could always edit each one of them and turn on the comments, but that just can't be the only way, I thought, and did some research.

At first I didn't find any out-of-the-box way to enable comments for multiple nodes. Programmatically enabling comments for all nodes by looping through them in a simple script could always be done, but my solution was even simpler: I used the module Views Bulk Operations. The easiest option would have been if VBO already shipped with an action for enabling comments, but since that wasn't the case, it is really not a long snippet to execute on each row. Here are the steps I used to achieve what I wanted.

  1. Downloaded and enabled views bulk operations
  2. Created a view displaying all nodes of the content type I wanted to enable comments on
  3. Added a VBO field
  4. Enabled the Execute PHP action for this field, and also the “select all nodes” checkbox
  5. Added a page display
  6. Viewed the view page, selected all nodes and chose the Execute PHP action.
  7. Entered the following snippet as the action and executed it:
$entity->comment = 2; // 2 = COMMENT_NODE_OPEN
node_save($entity);

Done!

Nov 23 2011
Nov 23

So, today I decided to have a go at the Services module, to make an app post nodes to my site. With the Services module enabled you can do a number of things from non-Drupal sites; for example, you can post nodes from your PhoneGap app with JavaScript. And that is exactly what I wanted to do today.

Ok, here is the JavaScript code. The Services part of it is pretty straightforward to set up anyway, but finding out how to create nodes with JSON and JavaScript was not that easy. You can probably figure out how to log in based on this. This snippet posts to a content type with a Geofield, sending the user's location to my server along with some other values.

function getLocationAndPost() {
  'use strict';
  navigator.geolocation.getCurrentPosition(function (position) {
    var url = 'http://example.com/endpoint' + '/node.json'; // endpoint URL
    var lat = position.coords.latitude;
    var lon = position.coords.longitude;
    var date = new Date();
    var timestamp = Math.round(date.getTime() / 1000);
    var node_object = {
      "type": "some_type", //set this to your content type
      "uid": 1,  
      "field_geo": { "und": { 0 : { //my geofield is called field_geo
        "wkt": "POINT (" + lon + " " + lat + ")",
        "geo_type": "point",
        "lat": lon,
        "lon": lat,
        "top": lon,
        "bottom": lon,
        "left": lat,
        "right": lat } }
      },
      "title": "Big brother sees you",
      "body": { "und": { 0 : { "value": "Put some body value here if you want" } } },
      "created": timestamp, 
      "status": 1 // Set to 1 for Published. O for unpublished.
    };
    $.ajax({
      url: url,
      type: "POST",
      data: node_object,
      success: function() {
        alert('Success!');
      }
    });
  });
}
Remember, in PhoneGap this should be fired once PhoneGap has loaded (the deviceready event)! Also, this entire thing is wrapped in the geolocation success function, just to keep the code shorter. You probably want to have this as a named function, as well as an error function, as sketched below.
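Something like this, where the function names are just made up for illustration:

// Named callbacks instead of the inline anonymous success function above.
function onPositionSuccess(position) {
  // ...build node_object from position.coords and POST it, as in the snippet above
}

function onPositionError(error) {
  // error.code and error.message come from the W3C Geolocation API
  alert('Could not get position: ' + error.message);
}

navigator.geolocation.getCurrentPosition(onPositionSuccess, onPositionError);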
