Nov 06 2018
Nov 06

This is part 2 in a series that explores how to use paragraph bundles to store configuration for dynamic content. The example I built in part 1 was a "read next" section, which could then be added as a component within the flow of the page. That strategy makes sense for component-based sites and landing pages, but probably less so for blogs or content-heavy sites, where what we really want is for each article to include the read next section at the end of the page. For that, a view that displays as a block would perfectly suffice. In practice, however, it can be really useful to have a single custom block type, which I often call a "component block", with an entity reference revisions field that we can leverage to create reusable components.

This strategy offers a simple and unified interface for creating reusable components and adding them to sections of the page. Combined with Pattern Lab and the block visibility groups module, we get a pretty powerful tool for page building and theming.

The image below captures the configuration screen for the "Up next" block you can find at the bottom of this page. As you can see, it sets the heading, the primary tag, and the number of items to show. Astute readers might notice, however, that there is a small problem with this implementation. It works if all the articles are about Drupal, but on sites that cover many topics, a reusable component with a hard-coded taxonomy reference makes less sense. Rather, we'd like the related content component to show content that is actually related to the article being read.

For the purpose of this article, let's define the following two requirements: first, if the tagged content component has been added as a paragraph bundle to the page itself, then we will respect the tag supplied in its configuration. If, however, the component is being rendered in the up next block, then we will use the first term the article has been tagged with.

To do that, we need three things: 1) our custom block needs to exist and have a delta (machine ID) we can use, 2) we need a preprocess hook to assign the theme variables, and 3) we need a Twig template to render the component. If you're following along in your own project, go ahead and create the component block now. I'll return momentarily to a discussion about custom blocks and the config system.
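
One detail worth making explicit: for a content block, Drupal's default theme hook suggestions are derived from the placement's plugin ID, so a preprocess function keyed to block__upnextblock implies a custom suggestion based on the block's machine ID. A minimal sketch of how such a suggestion could be added (illustrative only, and assuming the block is placed with the ID "upnextblock"):

/**
 * Implements hook_theme_suggestions_block_alter().
 */
function component_helper_theme_suggestions_block_alter(array &$suggestions, array $variables) {
  // Add a suggestion based on the block placement's machine ID so templates
  // and preprocess functions like block__upnextblock can target it directly.
  if (!empty($variables['elements']['#id'])) {
    $suggestions[] = 'block__' . $variables['elements']['#id'];
  }
}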

Once the up next block exists, we can create the following preprocess function:

/**
 * Implements hook_preprocess_HOOK() for the up next block.
 */
function component_helper_preprocess_block__upnextblock(&$variables) {
  // Only act when we're rendering a node page.
  if ($current_node = \Drupal::request()->attributes->get('node')) {
    // Use the article's first tag (and its nid) as the view arguments.
    $variables['primary_tag'] = $current_node->field_tags->target_id;
    $variables['nid'] = $current_node->id();
    // Pull the display configuration from the referenced paragraph entity.
    $paragraph = $variables['content']['field_component_reference'][0]['#paragraph'];
    $variables['limit'] = $paragraph->field_number_of_items->value;
    $variables['heading'] = $paragraph->field_heading->value;
  }
}

If you remember from the first article, our tagged content paragraph template passed those values along to Pattern Lab for rendering. That strategy won't work this time around, though, because theme variables assigned to a block entity, for example, are not passed down to the content that is being rendered within the block.

You might wonder if it's worth dealing with this complexity, given that we could simply render the view as a block, modify the contextual filter, place it and be done with it. What I like about this approach is the flexibility it gives us to render paragraph components in predictable ways. In many sites, we have 5, 10 or more component types. Not all (or even most) of them are likely to be reused in blocks, but it's a nice feature to have if your content strategy requires it. Ultimately, the only reason we're doing this small backflip is because we want to use the article's primary tag as the argument, rather than what was added to the component itself. In other component blocks (an image we want in the sidebar, for example) we could simply allow the default template to render its content.

In the end, our approach is pretty simple: Our up next block template includes the paragraph template, rather than the standard block {{ content }} rendering. This approach makes the template variables we assigned in the preprocess function available:

{% include "@afro_theme/paragraphs/paragraph--tagged-content.html.twig" %}

A different approach to consider would be adding a checkbox to the tagged content configuration, such as "Use page context instead of a specified tag". That would save us the extra hook and template. Other useful configuration fields we've used for dynamic components include whether the query should require all tags or any tag when multiple are assigned, and whether the related content should exclude duplicates (useful when you have several dynamic components on a page and don't want them to surface the same content).
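
If you went the checkbox route, the logic could live in the paragraph's own preprocess function instead. A rough sketch, assuming a boolean field named field_use_page_context (the field name and hook body here are hypothetical, not code from this site):

/**
 * Implements hook_preprocess_HOOK() for tagged content paragraphs.
 */
function component_helper_preprocess_paragraph__tagged_content(&$variables) {
  $paragraph = $variables['paragraph'];
  // Hypothetical checkbox: when checked, ignore the configured tag and use
  // the current node's first tag instead.
  if ($paragraph->hasField('field_use_page_context')
    && $paragraph->field_use_page_context->value
    && ($node = \Drupal::routeMatch()->getParameter('node'))) {
    $variables['primary_tag'] = $node->field_tags->target_id;
    $variables['nid'] = $node->id();
  }
}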

As we wrap up, a final note about custom blocks and the config system. The approach I've been using for content entities that also become config (which is the case here) is to first create the custom block in my local development environment, then export the config, remove the UUID from the exported config, and copy the plugin UUID. You can then create an update hook that creates the block content before the config gets imported:

/**
 * Adds the "up next" block for posts.
 */
function component_helper_update_8001() {
  $block_storage = \Drupal::entityTypeManager()
    ->getStorage('block_content');

  // The UUID must match the plugin UUID referenced in the exported block
  // config so that the placement finds this entity after config import.
  $block = $block_storage->create([
    'type' => 'component_block',
    'info' => 'Up next block',
    'uuid' => 'b0dd7f75-a7aa-420f-bc86-eb5778dc3a54',
    'label_display' => 0,
  ]);

  // Create the tagged content paragraph the block will reference.
  $paragraph = \Drupal\paragraphs\Entity\Paragraph::create([
    'type' => 'tagged_content',
    'field_heading' => [
      'value' => 'Up next',
    ],
    'field_number_of_items' => [
      'value' => '3',
    ],
    'field_referenced_tags' => [
      'target_id' => 1,
    ],
  ]);

  $paragraph->save();
  $block->field_component_reference->appendItem($paragraph);
  $block->save();
}

Once we deploy and run the update hook, we're able to import the site config and our custom block should be rendering on the page. Please let me know if you have any questions or feedback in the comments below. Happy Drupaling.
 

Oct 29 2018
Oct 29

The last step is to modify the javascript and styles that your theme uses to display the pullquotes that have been added in the editing interface. As you can see from the GitHub repo, there are four files that will need to be updated or added to your theme:

  1. your theme info file
  2. your theme library file
  3. the javascript file that adds the markup
  4. the scss (or css) file

In our case, the javascript finds any and all pullquote spans on the page, then adds them to the DOM as asides, alternating between right and left alignment (for desktop). The scss file then styles them appropriately for small and large breakpoints. Note, too, that the theme css includes specific styles for the editing interface, so that content creators can easily see when a pullquote is being added or modified. To remove a pullquote, the editor simply selects it again (which turns the pullquote pink in our theme) and clicks the CKEditor button.

That wraps up this simple tutorial. You can now rest assured that your readers will never miss an important quote again. The strategy is in no way bulletproof, so your mileage may vary, but if you have questions, feedback, or suggestions on how it can be improved, please add your comment below.

Sep 26 2018
Sep 26

Now we're ready to map our variables in Drupal and send them to be rendered in Pattern Lab. If you're not familiar with it, I suggest you start by learning more about Emulsify, which is Four Kitchens' Pattern Lab-based Drupal 8 theme. Their team is not only super-helpful, they're also very active on the DrupalTwig #pattern-lab channel. In this case, we're going to render the teasers from our view as card molecules that are part of a card grid organism. In order to do that, we can simply pass the view rows to the organism with a newly created view template (views-view--tagged-content.html.twig):

{# Note that we can simply pass along the arguments we sent via twig_tweak  #} 

{% set heading = view.args.3 %}

{% include '@organisms/card-grid/card-grid.twig' with {
  grid_content: rows,
  grid_blockname: 'card',
  grid_label: heading
} %}

Since the view is set to render teasers, the final step is to create a Drupal theme template for node teasers that will be responsible for mapping the field values to the variables that the card template in Pattern Lab expects.  

Generally speaking, for Pattern Lab projects I subscribe to the principle that our Drupal theme templates should act as data mappers, whose responsibility is to take Drupal field values and map them to Pattern Lab Twig variables for rendering. Therefore, we never output HTML in the theme template files. This keeps a clean separation of concerns between Drupal's theme and Pattern Lab, and gives us more predictable markup (at least for the templates we're creating and adding to the theme; otherwise, the standard Drupal render pipeline is in effect). Here is the teaser template we use to map the values and send them for rendering in Pattern Lab (node--article--teaser.html.twig):

{% set img_src = (img) ? img.uri|image_style('teaser') : null %}

{% include "@molecules/card/01-card.twig" with {
  "card_modifiers": 'grid-item',
  "card_img_src": img_src,
  "card_title": label,
  "card_link_url": url,
} %}

If you're wondering about the img object above, that's related to another custom module I wrote several years ago to make working with images from media more user-friendly. It's definitely out of date, so if you're interested in better approaches to responsive images in Drupal and Pattern Lab, have a look at what Mark Conroy has to say on the topic. Now, if we clear the cache and refresh the page, we should see our teasers rendering as cards (see "Up Next" below for a working version).
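
Incidentally, if you don't have a helper module like that, a small node preprocess can expose a similar variable. A minimal sketch, assuming a media reference field named field_image whose media bundle stores its file in field_media_image (all of these names are assumptions, not the actual module's code):

/**
 * Implements hook_preprocess_node().
 */
function mytheme_preprocess_node(&$variables) {
  $node = $variables['node'];
  // Expose a simple img array so the teaser template can call
  // img.uri|image_style('teaser') without knowing about media internals.
  if ($node->hasField('field_image') && !$node->field_image->isEmpty()) {
    $media = $node->field_image->entity;
    if ($media && $media->hasField('field_media_image') && !$media->field_media_image->isEmpty()) {
      $file = $media->field_media_image->entity;
      if ($file) {
        $variables['img'] = ['uri' => $file->getFileUri()];
      }
    }
  }
}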

Congrats! At this point, you've reached the end of this tutorial. Before signing off, I'll just mention other useful "configuration" settings we've used, such as "any" vs. "all" filtering when using multiple tags, styled "variations" that we can leverage as BEM modifiers, and checkboxes that allow a content creator to specify which content types should be included. The degree of flexibility required will depend on the content strategy for the project, but the underlying methodology works similarly in each case. Also, stay tuned, as in the coming weeks I'll show you how we've chosen to extend this implementation in a way that is both predictable and reusable (blocks, anyone?).

Dec 20 2016
Dec 20

A couple of months ago, team ThinkShout quietly introduced a feature to the MailChimp module that some of us have really wanted for a long time—the ability to support multiple MailChimp accounts from a single Drupal installation. This happened, in part, after I reached out to them on behalf of the stakeholders at Cornell University's ILR School, where I work. Email addresses can be coveted resources within organizations, and along with complex governance requirements, it's not uncommon for a single organization to have internal groups who use separate MailChimp accounts. I'm not going to comment on whether this is a good or wise practice, just to acknowledge that it's a reality.

In our case, we currently have three groups within the school using MailChimp extensively to reach out to their constituents. Up until this week, this required a manual export of new subscribers (whom we are tracking with entityforms), some custom code to transform the CSV values into the correct format for MailChimp, and then a manual import to the respective list. As of our most recent deployment, however, we are now able to support all three groups' needs (including one group that is using MailChimp automations). Let's dig into how we're doing it.

The important change that ThinkShout introduced came from this commit, which invokes a new alter hook that allows a developer to modify the key being used for the API object. Though this hook is essential if we want to enable multiple keys, on its own it doesn't accomplish much, given that mailchimp_get_api_object is called in dozens of places throughout the suite of MailChimp modules and it's therefore difficult to know the exact context of the API request. For that reason, we really need a more powerful way to understand the context of a given API call.

To that end, we created a new sandbox module called MailChimp Accounts. This module is responsible for five things:

  1. Allowing developers to register additional MailChimp accounts and account keys
  2. Enabling developers to choose the appropriate account for MailChimp configuration tasks
  3. Switching to the correct account when returning to configuration for an existing field or automation entity
  4. Restarting the form rendering process when a field widget needs to be rendered with a different MailChimp account
  5. Altering the key when the configuration of a MailChimp-related field or entity requires it

If you want to try this for yourself, you'll first need to download the newest release of the MailChimp module, if you're not already running it. You'll also need to download the MailChimp Accounts sandbox module. The core functionality of the MailChimp Accounts module relies on its implementation of a hook called hook_mailchimp_accounts_api_key, which allows a module to register one or more MailChimp accounts.

In order to register a key, you will need to find the account id. Since MailChimp doesn't offer an easy way to discover an account id, we built a simple callback page that allows you to retrieve the account data for a given key. You'll find this in the admin interface at /admin/config/mailchimp/account-info on your site. When you first arrive at that page, you should see the account values for your "default" MailChimp account. In this case, "default" simply means it's the API key registered through the standard MailChimp module interface, which stores the value in the variables table. However, if you input a different key on that page, you can retrieve information about that account from the MailChimp API.

The screenshot above offers an example of some of the data that is returned by the MailChimp API, but you will see additional information including list performance when you query an active account. The only essential piece of information, however, is the account_id, which we will use to register additional accounts that we can then use via hook_mailchimp_accounts_api_key(). Here's an example of how to implement that hook:


/**
 * Register API key(s) for MailChimp Accounts
 *
 * @return array
 *   The keys are the account id and the values are the API keys
 */
function mymodule_mailchimp_accounts_api_key() { 
  $keys = array(
    '2dd44aa1db1c924d42c047c96' => 'xxxxxxxx123434234xxxxxx3243xxxx3-us13',
    '411abe81940121a1e89a02abc' => '123434234xxxxxx23233243xxxxxxx13-us12',
  ); 
  return $keys; 
} 

Once there is more than one API key registered in a Drupal site, you will see a new option to select the current account to use for any MailChimp configuration taking place in the administrative interface. After selecting a different account, you will also see a notice clarifying which API key is currently active in the admin interface. You can see an example in the screenshot below.

After choosing an account to use for configuration, administrative tasks such as configuring MailChimp subscription fields will use that key when making API calls, so the available options for lists, merge fields, and interest groups will correspond to the appropriate account. When the field widget renders the form element, the API key is altered yet again so that the subscriber is added to the appropriate list, interest groups, etc. I can also confirm from my testing that the interface for MailChimp Automations entities uses the correct API key, enabling support for that MailChimp submodule.

This concludes our walkthrough of the new MailChimp Accounts module. Admittedly, it's not the most elegant solution ever created, but it not only satisfies our organization's complex governance requirements, it also allows our stakeholders to begin thinking about new possibilities for improving their marketing and communications efforts.

We're Hiring!

Interested in working with a dynamic team at a leading university? I work at Cornell University's ILR School, and we're currently looking for the right person to help us build a design system and leverage tools like Pattern Lab and React as we architect a more flexible and scalable infrastructure. If you get excited about API-first Drupal, emerging editing interfaces, and the relationship between great content strategy, flexible design systems, and robust front-end and back-end devops, let's talk!  
 

Nov 30 2016
Nov 30

Recent additions to Drupal 7’s MailChimp module and API library offer some powerful new ways for you to integrate Drupal and MailChimp. As of version 7.x-4.7, the Drupal MailChimp module now supports automations, which are incredibly powerful and flexible ways to trigger interactions with your users. Want to reach out to a customer with product recommendations based on their purchase history? Have you ever wished you could automatically send your newsletter to your subscribers when you publish it in Drupal? Or wouldn’t it be nice to be able to send a series of emails to participants leading up to an event without having to think about it? You can easily do all of those things and much more with MailChimp automations.

First of all, big thanks to the team at ThinkShout for all their great feedback and support over the past few months of development. To get started, download the newest release of the MailChimp module and then enable the MailChimp Automations submodule (drush en mailchimp_automations -y). Or, if you’re already using the MailChimp module, update to the latest version, as well as the most recent version of the PHP API library. Also note that although MailChimp offers many powerful features on their free tier, automations are a paid feature with plans starting at $10/month.

Once you follow the readme instructions for the MailChimp module (and have configured your API key), you’re ready to enable your first automation. But before we do that, I’d like to take a step back and talk a bit more conceptually about automations and how we’re going to tie them to Drupal. The MailChimp documentation uses the terms “automations” and “workflows” somewhat interchangeably, which can be a bit confusing at first. Furthermore, any given workflow can have multiple emails associated with it, and you can combine different triggers for different emails in a single workflow. Triggers can be based on a range of criteria, such as activity (clicking on an email), inactivity (failing to open an email), time (a week before a given date), or, as in our case, an integration, which is MailChimp’s term for a trigger fired by an API call.

The API resource we’re using is the automation email queue, which requires a workflow ID, a workflow email ID, and an email address. In Drupal, we can now accomplish this by creating a MailChimp Automation entity, which is simply a new Drupal entity that ties any other Drupal entity—provided it has an email field—to a specific workflow email in MailChimp. Once you have that simple concept down, the sky’s the limit in terms of how you integrate the entities on your site with MailChimp emails. This means, for example, that you could tie an entityform submission to a workflow email in MailChimp, which could, in turn, trigger another time-based email a week later (drip campaign, anyone?).
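
For the curious, here is roughly what that queue call looks like if you hit the MailChimp v3 API directly from Drupal 7. This is just an illustration of the underlying endpoint; the module does the equivalent for you through the MailChimp PHP library, and the key and IDs below are placeholders:

function mymodule_queue_automation_email($workflow_id, $workflow_email_id, $email) {
  // POST /automations/{workflow_id}/emails/{workflow_email_id}/queue
  $api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us13';
  // The data center is the suffix of the API key (e.g. us13).
  list(, $dc) = explode('-', $api_key);
  $url = "https://$dc.api.mailchimp.com/3.0/automations/$workflow_id/emails/$workflow_email_id/queue";
  return drupal_http_request($url, array(
    'method' => 'POST',
    'headers' => array(
      'Authorization' => 'Basic ' . base64_encode('apikey:' . $api_key),
      'Content-Type' => 'application/json',
    ),
    'data' => drupal_json_encode(array('email_address' => $email)),
  ));
}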

This setup becomes even more intriguing when you contrast it with an expensive marketing solution like Pardot. Just because you may be working within tight budget constraints doesn’t mean you can’t have access to powerful, custom engagement tools. Imagine you’re using another one of ThinkShout’s projects, RedHen CRM, to track user engagement on your site, and you’ve assigned scoring point values to various actions a user might take, such as reading an article, sharing it on social media, or submitting a comment. In this scenario, you could track a given user’s score over time, and then trigger an automation when a user crosses a particular scoring threshold, allowing you to further engage your most active users on the site.

I’m only beginning to scratch the surface on the possibilities of the new MailChimp Automations module. If you're interested in learning more, feel free to take a look at our recent Drupal Camp presentation, or find additional inspiration in both the feature documentation and automation resource guide. Stay tuned, as well, for an upcoming post about another powerful new feature recently introduced into the MailChimp module: support for multiple MailChimp accounts from a single Drupal site!

Nov 24 2015
Nov 24

Cache clearing nirvana may be two vsets away

tl;dr If your D7 site uses features or has many entity types, some recent patches to the features module and the entity api module may deliver dramatic performance increases when you clear Drupal's cache. The magic:


    $ drush vset features_rebuild_on_flush FALSE
    $ drush vset entity_rebuild_on_flush FALSE
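
If you'd rather not depend on remembering to run those vsets in every environment, the same variables can be pinned in settings.php using Drupal 7's standard variable overrides:

// In settings.php: hard-code the overrides so a cache clear never triggers
// a full features or entity info rebuild.
$conf['features_rebuild_on_flush'] = FALSE;
$conf['entity_rebuild_on_flush'] = FALSE;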

The Backstory

Given that tedbow is a good friend in our little slice of paradise, aka Ithaca, NY, we decided to embrace the entityform module on the large Drupal migration I was hired to lead. Fifty-eight entityforms and 420 fields later (even with diligent field re-use), we now see how, in some cases, a pseudo-field system has real benefits, even if it's not the most future-proof solution. As our cache clears became slower and slower (at times taking nearly 10 minutes for a teammate with an older computer), I began to suspect that entityform and/or our extensive reliance on the Drupal field system might be a culprit. Two other corroborating data points: feature reverts took a long time when they involved entityforms, and deployments became a hassle because we had to carefully time any that required a cache clear, which would make the site unresponsive for logged-in users and cache-cold pages for 5 minutes or more. Clearly, something needed to be done.

Diagnosing

I'm sure there are better ways to handle performance diagnostics (using xDebug, for example), but given the procedural nature of drupal_flush_all_caches it seemed like the devel module would work just fine. I modified the code in Drupal's common.inc file to include the following:


/**
 * Quick-and-dirty profiler: prints elapsed and total time via dpm().
 */
function time_elapsed($comment, $force = FALSE) {
  static $time_elapsed_last = null;
  static $time_elapsed_start = null;

  $unit = "s";
  $scale = 1000000; // Output in seconds, with microsecond precision.
  $now = microtime(true);
  if ($time_elapsed_last != null) {
    $elapsed = round(($now - $time_elapsed_last) * 1000000) / $scale;
    $total_time = round(($now - $time_elapsed_start) * 1000000) / $scale;
    $msg = "$comment: Time elapsed: $elapsed $unit,";
    $msg .= " total time: $total_time $unit";
    dpm($msg);
  }
  else {
    $time_elapsed_start = $now;
  }
  $time_elapsed_last = $now;
}
/**
 * Flushes all cached data on the site.
 *
 * Empties cache tables, rebuilds the menu cache and theme registries, and
 * invokes a hook so that other modules' cache data can be cleared as well.
 */
function drupal_flush_all_caches(){
  // Change query-strings on css/js files to enforce reload for all users.
  time_elapsed('_drupal_flush_css_js');
  _drupal_flush_css_js();
  time_elapsed('registry_rebuild');
  registry_rebuild();
  time_elapsed('drupal_clear_css_cache');
  drupal_clear_css_cache();
  time_elapsed('drupal_clear_js_cache');
  drupal_clear_js_cache();

  // Rebuild the theme data. Note that the module data is rebuilt above, as
  // part of registry_rebuild().
  time_elapsed('system_rebuild_theme_data');
  system_rebuild_theme_data();
  time_elapsed('drupal_theme_rebuild');
  drupal_theme_rebuild();
  time_elapsed('entity_info_cache_clear');
  entity_info_cache_clear();
  time_elapsed('node_types_rebuild');
  node_types_rebuild();
  // node_menu() defines menu items based on node types so it needs to come
  // after node types are rebuilt.
  time_elapsed('menu_rebuild');
  menu_rebuild();

  time_elapsed('actions_synchronize');
  // Synchronize to catch any actions that were added or removed.
  actions_synchronize();

  // Don't clear cache_form - in-progress form submissions may break.
  // Ordered so clearing the page cache will always be the last action.
  $core = array('cache', 'cache_path', 'cache_filter', 'cache_bootstrap', 'cache_page');
  $cache_tables = array_merge(module_invoke_all('flush_caches'), $core);
  foreach ($cache_tables as $table) {
    time_elapsed("clearing $table");
    cache_clear_all('*', $table, TRUE);
  }

  // Rebuild the bootstrap module list. We do this here so that developers
  // can get new hook_boot() implementations registered without having to
  // write a hook_update_N() function.
  _system_update_bootstrap_status();
}

The next time I cleared cache (using admin_menu, since I wanted the dpm messages available), I saw the following:

registry_rebuild: Time elapsed: 0.003464 s, total time: 0.003464 s

drupal_clear_css_cache: Time elapsed: 3.556191 s, total time: 3.559655 s

drupal_clear_js_cache: Time elapsed: 0.001589 s, total time: 3.561244 s

system_rebuild_theme_data: Time elapsed: 0.003462 s, total time: 3.564706 s

drupal_theme_rebuild: Time elapsed: 0.122944 s, total time: 3.68765 s

entity_info_cache_clear: Time elapsed: 0.001606 s, total time: 3.689256 s

node_types_rebuild: Time elapsed: 0.003054 s, total time: 3.69231 s

menu_rebuild: Time elapsed: 0.052984 s, total time: 3.745294 s

actions_synchronize: Time elapsed: 3.334542 s, total time: 7.079836 s

clearing cache_block: Time elapsed: 31.149723 s, total time: 38.229559 s

clearing cache_ctools_css: Time elapsed: 0.00618 s, total time: 38.235739 s

clearing cache_feeds_http: Time elapsed: 0.003292 s, total time: 38.239031 s

clearing cache_field: Time elapsed: 0.006714 s, total time: 38.245745 s

clearing cache_image: Time elapsed: 0.013317 s, total time: 38.259062 s

clearing cache_libraries: Time elapsed: 0.007708 s, total time: 38.26677 s

clearing cache_token: Time elapsed: 0.007837 s, total time: 38.274607 s

clearing cache_views: Time elapsed: 0.006798 s, total time: 38.281405 s

clearing cache_views_data: Time elapsed: 0.008569 s, total time: 38.289974 s

clearing cache: Time elapsed: 0.006926 s, total time: 38.2969 s

clearing cache_path: Time elapsed: 0.009662 s, total time: 38.306562 s

clearing cache_filter: Time elapsed: 0.007552 s, total time: 38.314114 s

clearing cache_bootstrap: Time elapsed: 0.005526 s, total time: 38.31964 s

clearing cache_page: Time elapsed: 0.009511 s, total time: 38.329151 s

hook_flush_caches: total time: 38.348554 s

Every cache cleared.

My initial response was to wonder how and why clearing cache_block would take so long. Then, however, I noticed the line in drupal_flush_all_caches() above that calls module_invoke_all('flush_caches'), which should have been obvious. Since I was just looking for bottlenecks, I modified module_invoke_all() in module.inc, along with time_elapsed(), to get the following:


function time_elapsed($comment,$force=FALSE) {
  static $time_elapsed_last = null;
  static $time_elapsed_start = null;
  static $last_action = null; // Stores the last action for the elapsed time message

  $unit="s"; $scale=1000000; // output in seconds
  $now = microtime(true);

  if ($time_elapsed_last != null) {
    $elapsed = round(($now - $time_elapsed_last)*1000000)/$scale;
    if ($elapsed > 1 || $force) {
      $total_time = round(($now - $time_elapsed_start)*1000000)/$scale;
      $msg = ($force)
        ? "$comment: "
        : "$last_action: Time elapsed: $elapsed $unit,";
      $msg .= " total time: $total_time $unit";
      dpm($msg);
    }
  } else {
      $time_elapsed_start=$now;
  }
  $time_elapsed_last = $now;
  $last_action = $comment;
}


/** From module.inc */
function module_invoke_all($hook) {
  $args = func_get_args();
  // Remove $hook from the arguments.
  unset($args[0]);
  $return = array();
  foreach (module_implements($hook) as $module) {
    $function = $module . '_' . $hook;
    if (function_exists($function)) {
      if ($hook == 'flush_caches') {
        time_elapsed($function);
      }
      $result = call_user_func_array($function, $args);
      if (isset($result) && is_array($result)) {
        $return = array_merge_recursive($return, $result);
      }
      elseif (isset($result)) {
        $return[] = $result;
      }
    }
  }

  return $return;
}

The results pointed to the expected culprits:

registry_rebuild: Time elapsed: 4.176781 s, total time: 4.182339 s

menu_rebuild: Time elapsed: 3.367128 s, total time: 7.691533 s

entity_flush_caches: Time elapsed: 22.899951 s, total time: 31.068898 s

features_flush_caches: Time elapsed: 7.656231 s, total time: 39.112933 s

hook_flush_caches: total time: 39.248036 s

Every cache cleared.

After a little digging into the features issue queue, I was delighted to find out that patches had already been committed to both modules (though entity api does not have it in the release yet, so you have to use the dev branch). Two module updates and two vsets later, I got the following results:

registry_rebuild: Time elapsed: 3.645328 s, total time: 3.649398 s

menu_rebuild: Time elapsed: 3.543039 s, total time: 7.378718 s

hook_flush_caches: total time: 8.266036 s

Every cache cleared.

Cache clearing nirvana reached!

Nov 03 2014
Nov 03

In part 1 of this tutorial, we covered how to configure and use Ansible for local Drupal development. If you didn't have a chance to read that article, you can download my fork of Jeff Geerling's Drupal Dev VM to see the final, working version from part 1. In this article, we'll be switching things up quite a bit as we take a closer look at the second set of three requirements, namely:

  1. Using the same playbook for both local dev and remote administration (on DigitalOcean)
  2. Including basic server security
  3. Making deployments simple

TL;DR Feel free to download the final, working version of this repo and/or use it to follow along with the article.

Caveat Emptor

Before we dig in, I want to stress that I am not an expert in server administration and am relying heavily on the Ansible roles created by Jeff Geerling. The steps outlined in this article come from my own experience of trying to use Ansible to launch this site, and I can't vouch for how far they stray from best practices. But if you're feeling adventurous, or, like me, foolhardy enough to jump in headfirst and just try to figure it out, then read on.

Sharing Playbooks Between Local and Remote Environments

One of the features that makes Ansible so incredibly powerful is the ability to run a given task or playbook across a range of hosts. For example, when the Drupal Security Team announced the SQL injection bug now known as "Drupalgeddon", Jeff Geerling wrote a great post about using Ansible to deploy a security fix to many sites. Given that any site that was not updated within 12 hours is now considered compromised, you can easily see what an important role Ansible can play. Ansible is able to connect to any host that is defined in the default inventory file at /etc/ansible/hosts. However, you can also create a project-specific inventory file and put it in the git repo, which is what we'll do here.

To start with, we'll add a file called "inventory" and put it in the provisioning folder. Inventories are in ini syntax, and basically allow you to define hosts and groups. For now, simply add the following lines to the inventory:

[dev]
yourdomain.dev

The inventory can define hostnames or IP addresses, so 192.168.88.88 (the IP address from the Vagrantfile) would work fine here as well. Personally, I prefer hostnames because I find them easier to organize and track, and they will also help us avoid issues with Ansible commands against the local VirtualBox VM. With our dev host defined, we are now able to set any required host-specific variables.

Ansible is extremely flexible in how you create and assign variables. For the most part, we'll be using the same variables for all our environments, but a few of them, such as the Drupal domain and ssh port, will differ. Some of these differences are related to the group (such as the ssh port Ansible connects to), while others are host-specific (such as the Drupal domain). Let's start by creating a folder called "host_vars" in the provisioning folder, containing a file named after the hostname of your dev site (a-fro.dev for me). Add the following lines to it:

---
drupal_domain: "yourdomain.dev"

At this point, we're ready to dig into remote server configuration for the first time. Lately, I've been using DigitalOcean to host my virtual servers because they are inexpensive (starting at $5/month) and they have a plethora of good tutorials that helped me work through the manual configuration workflows I was using. I'm sure there are many other good options, but the only requirement is to have a server to which you have root access and have added your public key. I also prefer to have a staging server where I can test things remotely before deploying to production, so for the sake of this tutorial let's create a server that will host stage.yourdomain.com. If you're using a domain for which DNS is not yet configured, you can just add it to your system's hosts file and point to the server's IP address.

Once you've created your server (I chose the most basic plan at DO and added Ubuntu 12.04 x32), you'll want to add it to your inventory like so:

[staging]
stage.yourdomain.com

Assuming that DNS is either already set up, or that you've added the domain to your hosts file, Ansible is now almost ready to talk to the server for the first time. The last thing Ansible needs is some ssh configuration. Adding this to your ~/.ssh/config file would work for now, but we'll see that it imposes some limitations as we move forward, so let's go ahead and add the ssh config to the host file (host_vars/stage.yourdomain.com):

---
drupal_domain: "stage.yourdomain.com"
ansible_ssh_user: root
ansible_ssh_private_key_file: '~/.ssh/id_rsa'

At this point, you should have everything you need to connect to your virtual server and configure it via Ansible. You can test this by heading to the provisioning folder of your repo and typing ansible staging -i inventory -m ping, where "staging" is the group name you defined in your inventory file. You should see something like the following output:

stage.yourdomain.com | success >> {
    "changed": false,
    "ping": "pong"
}

If that's what you see, then congratulations! Ansible has just talked to your server for the first time. If not, you can try running the same command with -vvvv and check the debug messages. We could run the playbook now from part 1 and it should configure the server, but before doing that, let's take a look at the next requirement.

Basic Server Security

Given that the Drupal Dev VM is really set up to support a local environment, it's missing important security features and requirements. Luckily, Jeff comes to the rescue again with a set of additional Ansible roles we can add to the playbook to help fill in the gaps. We'll need the roles installed on our system, which we can do with ansible-galaxy install -r requirements.txt (read more about roles and files). If you already have the roles installed, the easiest way to make sure they're up-to-date is with ansible-galaxy install -r requirements.txt --force (since updating a role is not yet supported by Ansible Galaxy).

In this section, we'll focus on the geerlingguy.firewall and geerlingguy.security roles. Jeff uses the same pattern for all his Ansible roles, so it's easy to find the default vars for a given role by swapping the role name (i.e. ansible-role-rolename) into the URL: https://github.com/geerlingguy/ansible-role-security/blob/master/defaults/main.yml. The two variables we care about here are security_ssh_port and security_sudoers_passwordless. This role is going to help us disable password authentication and root login, change the ssh port, and add a configured user account to the passwordless sudoers group.

You might notice that the role says "configured user accounts", which raises the question: where does the account get configured? This was actually a stumbling block for me for a while, as I had to work through many different issues in my attempts to create and configure the role. The approach we'll take here works, though it may not be the most efficient (or best practice; see Caveat Emptor above). There is another wrinkle as well: the first time we connect to the server it will be over the default ssh port (22), but going forward we want to use a more secure port, and we'll also need to make sure that port gets opened on the firewall.

Ansible's variable precedence is going to help us work through these issues. To start with, let's take a look at the following example vars file:

---
ntp_timezone: America/New_York

firewall_allowed_tcp_ports:
  - "{{ security_ssh_port }}"
  - "80"

# The core version you want to use (e.g. 6.x, 7.x, 8.0.x).
# A-fro note: this is slightly deceptive b/c it's really used to check out the correct branch
drupal_core_version: "master"

# The path where Drupal will be downloaded and installed.
drupal_core_path: "/var/www/{{ drupal_domain }}/docroot"

# Your drupal site's domain name (e.g. 'example.com').
# drupal_domain:  moved to group_vars

# Your Drupal site name.
drupal_site_name: "Aaron Froehlich's Blog"
drupal_admin_name: admin
drupal_admin_password: password

# The webserver you're running (e.g. 'apache2', 'httpd', 'nginx').
drupal_webserver_daemon: apache2

# Drupal MySQL database username and password.
drupal_mysql_user: drupal
drupal_mysql_password: password
drupal_mysql_database: drupal

# The Drupal git url from which Drupal will be cloned.
drupal_repo_url: "git@github.com:a-fro/a-fro.com.git"

# The Drupal install profile to be used
drupal_install_profile: standard

# Security specific
# deploy_user: defined in group_vars for ad-hoc commands
# security_ssh_port: defined in host_vars and group_vars
security_sudoers_passwordless:
  - "{{ deploy_user }}"
security_autoupdate_enabled: true

You'll notice that some of the variables have been moved to host_vars or group_vars files. Our deploy_user, for example, would work just fine for our playbook if we define it here. But since we want to make this user available to Ansible for ad-hoc commands (not in playbooks), it is better to put it in group_vars. This is also why we can't just use our ~/.ssh/config file. With Ansible, any variables added to provisioning/group_vars/all are made available by default to all hosts in the inventory, so create that file and add the following lines to it:

---
deploy_user: deploy

For the security_ssh_port, we'll be connecting to our dev environment over the default port 22, but changing the port on our remote servers. I say servers (plural), because eventually we'll have both staging and production environments. We can modify our inventory file to make this a bit easier:

[dev]
a-fro.dev

[staging]
stage.a-fro.com

[production]
a-fro.com

[droplets:children]
staging
production

This allows us to issue commands to a single host or to all of our droplets. We can then add a file called "droplets" to the group_vars folder and define the group-specific variables there:

---
ansible_ssh_user: "{{ deploy_user }}"

security_ssh_port: 4895 # Or whatever you choose
ansible_ssh_port: "{{ security_ssh_port }}"

ansible_ssh_private_key_file: ~/.ssh/id_rsa # The private key that pairs to the public key on your remote server.
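
With the droplets group and these connection variables in place, ad-hoc commands against both remote environments become one-liners; for example, running ansible droplets -i inventory -m ping from the provisioning folder should return a "pong" from both servers (once the security playbook below has run and the deploy user exists).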

Configuring the Deploy User

There are two additional issues that we need to address if we want our security setup to work. The first is pragmatic: using a plain string in the security_sudoers_passwordless YAML list above works fine, but Ansible throws an error when we try to use a variable there. I have a pull request open against ansible-role-security that resolves this issue, but until that gets accepted, we can't use the role as is. The easy alternative is to download the role to our local system and add its contents to a folder named "roles" in provisioning (i.e. provisioning/roles/security). You can see the change we need to make to the task here. Then we modify the playbook to use our local "security" role rather than geerlingguy.security.
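
For context, the piece of the role that chokes on the variable is the task that writes each entry in security_sudoers_passwordless into /etc/sudoers. The following is a sketch of what such a task typically looks like in Ansible, not the role's actual source:

- name: Add configured users to passwordless sudoers.
  lineinfile: dest=/etc/sudoers
              regexp='^{{ item }} ALL'
              line='{{ item }} ALL=(ALL) NOPASSWD: ALL'
              state=present
              validate='visudo -cf %s'
  with_items: security_sudoers_passwordless

With a literal username in the list, a task like this runs fine; the trouble appears when the item is itself a variable like "{{ deploy_user }}", which is exactly what we want to use.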

The second issue we face is that the first time we connect to our server, we'll do it as root over port 22 so that we can add the deploy_user account and update the security configuration. Initially, I was just modifying the variables depending on whether it was the first time I was running the playbook, but that got old really quickly as I created, configured and destroyed droplets to work through all the issues. And while there may be better ways to do this, what worked for me was to add an additional playbook that handles the initial configuration. So create a provisioning/deploy_config.yml file and add the following lines to it:

---

- hosts: all
  sudo: yes

  vars_files:
    - vars/main.yml
    - vars/deploy_config.yml

  pre_tasks:
    - include: tasks/deploy_user.yml

  roles:
    - security

Here's the task that configures the deploy_user:

---
- name: Ensure admin group exists.
  group: name=admin state=present

- name: Add deployment user
  user: name='{{ deploy_user }}'
        state=present
        groups="sudo,admin"
        shell=/bin/bash

- name: Create .ssh folder with correct permissions.
  file: >
    path="/home/{{ deploy_user }}/.ssh/"
    state=directory
    owner="{{ deploy_user }}"
    group=admin
    mode=700

- name: Add authorized deploy key
  authorized_key: user="{{ deploy_user }}"
                  key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
                  path="/home/{{ deploy_user }}/.ssh/authorized_keys"
                  manage_dir=no
  remote_user: "{{ deploy_user }}"

The private/public key pair you define in the "Add authorized deploy key" task and in your ansible_ssh_private_key_file variable should have access to both your remote server and your GitHub repository. If you've forked or cloned my version, then you will definitely need to modify the keys.

Our final security configuration prep step is to leverage Ansible's variable precedence to override the SSH settings so that this playbook connects as root over the default SSH port. Because vars_files in a playbook take precedence over inventory group_vars and host_vars, the following lines in provisioning/vars/deploy_config.yml do the trick:

---
ansible_ssh_user: root
ansible_ssh_port: 22

We now have everything in place to configure the basic security we're adding to our server. Remembering that one of our requirements is that our playbooks work both locally over Vagrant and remotely, we can first try to run this playbook in our dev environment. I couldn't find a good way to make this seamless with Vagrant, so I've added a conditional statement to the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "ubuntu-precise-64"

  # The url from where the 'config.vm.box' box will be fetched if it
  # doesn't already exist on the user's system.
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network :private_network, ip: "192.168.88.88"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "../a-fro.dev", "/var/www/a-fro.dev", :nfs => true

  # Configure VirtualBox.
  config.vm.provider :virtualbox do |vb|
    # Set the RAM for this VM to 512M.
    vb.customize ["modifyvm", :id, "--memory", "512"]
    vb.customize ["modifyvm", :id, "--name", "a-fro.dev"]
  end

  # Enable provisioning with Ansible.
  config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "provisioning/inventory"
    ansible.sudo = true
    # ansible.raw_arguments = ['-vvvv']
    ansible.limit = 'dev'

    initialized = false

    if initialized
      play = 'playbook'
      ansible.extra_vars = { ansible_ssh_private_key_file: '~/.ssh/ikon' }
    else
      play = 'deploy_config'
      ansible.extra_vars = {
        ansible_ssh_user: 'vagrant',
        ansible_ssh_private_key_file: '~/.vagrant.d/insecure_private_key'
      }
    end
    ansible.playbook = "provisioning/#{play}.yml"
  end
end

The first time we run vagrant up with initialized set to false, Vagrant runs deploy_config.yml. Once the box has been initialized (assuming there were no errors), you can set initialized to true, and from that point on playbook.yml will run whenever we vagrant provision. Assuming everything worked for you, we're ready to configure our remote server with ansible-playbook provisioning/deploy_config.yml -i provisioning/inventory --limit=staging.

Installing Drupal

Whew! Take a deep breath, because we're in the home stretch now. In part 1, we used a modified Drupal task file to install Drupal. Since then, however, Jeff has accepted a couple of pull requests that get us really close to being able to use his Drupal Ansible role straight out of the box. I have another pull request open that gets us 99% of the way there, but since that hasn't been accepted yet, we're going to follow the strategy we used with the security role and add a "drupal" folder to our local roles.

I've uploaded a branch of ansible-role-drupal that includes the modifications we need. They're all in the provisioning/drupal.yml task, and I've outlined the changes and reasons in my pull request. If you're following along, I suggest downloading that branch from GitHub and adding it to a drupal folder in your provisioning/roles. One additional change that I have not created a pull request for relates to the structure I use for Drupal projects. I like to put Drupal in a subfolder of the repository root (typically called docroot). As many readers will realize, this is in large part because we often host on Acquia. And while we're not doing that in this case, I still find it convenient to be able to add other folders (docs, bin scripts, etc.) alongside the Drupal docroot. The final modification we make, then, is to check out the repository to /var/www/{{ drupal_domain }} (rather than to {{ drupal_core_path }}, which points to the docroot folder of the repo).
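
For reference, after these swaps the roles section of playbook.yml points at the two local roles. A minimal sketch (only the roles discussed in this post are shown; the other geerlingguy roles from part 1 stay as they are):

  roles:
    - geerlingguy.firewall
    - security  # local copy in provisioning/roles/security
    - drupal    # local copy in provisioning/roles/drupal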

We now have all our drops in a row and we're ready to run our playbook to do the rest of the server configuration and install Drupal! As I mentioned above, we can modify our Vagrantfile to set initialized to true and run vagrant provision, and our provisioner should run. If you run into issues, you can uncomment the ansible.raw_arguments line and enable verbose output.

One final note before we provision our staging server. While vagrant provision works just fine, I think I've made my preference clear for having consistency between environments. We can do that here by modifying the host_vars for dev:

---
drupal_domain: "a-fro.dev"
security_ssh_port: 22
ansible_ssh_port: "{{ security_ssh_port }}"
ansible_ssh_user: "{{ deploy_user }}"
ansible_ssh_private_key_file: '~/.ssh/id_rsa'

Now, assuming that you already ran vagrant up with initialized set to false, you can run your playbook for dev in the same way you will for your remote servers:

cd provisioning
ansible-playbook playbook.yml -i inventory --limit=dev

If everything runs without a hitch on your vagrant server, then you're ready to run it remotely with ansible-playbook playbook.yml -i inventory --limit=staging. A couple of minutes later, you should see your Drupal site installed on your remote server.

Simple Deployments

I'm probably not the only reader of Jeff's awesome book Ansible for Devops who is looking forward to him completing Chapter 9, Deployments with Ansible. In the meantime, however, we can create a simple deploy playbook with just a few tasks:

---
- hosts: all

  vars_files:
    - vars/main.yml

  tasks:
    - name: Check out the repository.
      git: >
        repo='git@github.com:a-fro/a-fro.com.git'
        version='master'
        accept_hostkey=yes
        dest=/var/www/{{ drupal_domain }}
      sudo: no

    - name: Clear cache on D8
      command:
        chdir={{ drupal_core_path }}
        drush cr
      when: drupal_major_version == 8

    - name: Clear cache on D6/7
      command:
        chdir={{ drupal_core_path }}
        drush cc all
      when: drupal_major_version < 8

Notice that we've added a conditional that checks a variable called drupal_major_version, so you should add that to your provisioning/vars/main.yml file. If I were running a D7 site, I'd probably add tasks to the deploy playbook such as drush fr-all -y, but this suffices for now. Since I'm pretty new to D8, if you have ideas on other tasks that would be helpful (such as a git workflow for CM), then I'm all ears!
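
In case it's useful, here's what those two pieces look like for this site. The variable is a one-liner in provisioning/vars/main.yml:

drupal_major_version: 8

And a deploy is then a single command from the provisioning folder (I'm assuming the playbook above is saved as deploy.yml; the post doesn't name it, so call it whatever you like):

ansible-playbook deploy.yml -i inventory --limit=staging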

Conclusion

I hope you enjoyed this two-part series on Drupal and Ansible. One final note for the diligent reader: because I chose the most basic hosting plan, my server is limited to 512MB of RAM, so I've added an additional task that adds and configures swap space when not running on Vagrant.
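
The actual task in my repo may look a little different, but a minimal sketch of the idea is below. The swap_enabled variable is my own invention for this sketch; you could set it to true in the droplets group_vars and false in the host_vars for dev. For persistence across reboots you would also add the swap file to /etc/fstab.

- name: Create a 1GB swap file.
  command: dd if=/dev/zero of=/swapfile bs=1M count=1024 creates=/swapfile
  when: swap_enabled

- name: Set permissions, format and enable the swap file.
  shell: chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile
  when: swap_enabled and ansible_swaptotal_mb < 1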

Thanks to the many committed open source developers (and Jeff Geerling in particular), devops for Drupal is getting dramatically simpler. As the community still reels from the effects of Drupalgeddon, it's easy to see how incredibly valuable it is to be able to easily run commands across a range of servers and codebases. Please let me know if you have questions, issues, tips or tricks, and as always, thanks for reading.

Oct 27 2014
Oct 27

A couple of months ago, after a harrowing cascade of git merge conflicts involving compiled css, we decided it was time to subscribe to the philosophy that compiled CSS doesn't belong in a git repository. Sure, there are other technical solutions teams are tossing around that try to handle merging more gracefully, but I was more interested in simply keeping the CSS out of the repo in the first place. After removing the CSS from the repo, we suddenly faced two primary technical challenges:

  • During development, switching branches will now need to trigger a recompilation of the stylesheets
  • Without the CSS in the repo, we need another way to get the compiled stylesheets up to Acquia

In this article, I'll describe the solutions we came up with to handle these challenges, and welcome feedback if you have a different solution.

Local Development

If you're new to using tools like Sass, Compass, Guard and LiveReload, I recommend taking a look at a project like Drupal Streamline. For the purpose of this post, I'm going to assume that you're already using Compass in your project. Once the CSS files have been removed, you'll want to run compass compile to trigger an initial compilation of the stylesheets. However, having to remember to compile every time you switch to a new branch is not only an inconvenience, but also a strong possibility for human error.
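
One small housekeeping step: the compiled stylesheets also need to be ignored by git. Assuming the theme path used later in this post, the .gitignore entry would be something like:

docroot/sites/all/themes/theme_name/css/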

Luckily, we can use git hooks to remove this risk and annoyance. In this case, we'll create a post-checkout hook that triggers compiling every time a new branch is checked out:

  1. Create a file called post-checkout in the .git/hooks folder
  2. Add the following lines to that file:
    #! /bin/sh
    # Start from the repository root.
    cd ./$(git rev-parse --show-cdup)
    compass compile
    
  3. From the command line in the repository root, type chmod +x .git/hooks/post-checkout

Assuming you have compass correctly configured, you should see the stylesheets getting re-compiled the next time you git checkout [branch], even if you're not already running Guard and LiveReload.

Deploying to Acquia

Now that CSS is no longer being deployed when we push our repo up to Acquia, we need to figure out how we're going to get it there. It would be possible to force-add the ignored stylesheets before I push the branch up, but I don't really want all those additional commits on my development branches in particular. Luckily, Acquia has a solution that we can hack which will allow us to push the files up to Dev and Stage (note, we'll handle prod differently).

Enter LiveDev

Acquia has a setting that you can toggle on both the dev and test environments that allows you to modify the files on the server. It's called 'livedev', and we're going to exploit its functionality to get our compiled CSS up to those environments. After enabling livedev in the Acquia workflow interface, you are now able to scp files up to the server during deployment. Because I like to reduce the possibility of human error, I prefer to create a deploy script that handles this part for me. It's basically going to do three things:

  1. Compile the css
  2. scp the css files up to Acquia livedev for the correct environment
  3. ssh into Acquia's server and check out the branch that we just pushed up.

Here's the basic deploy script that we can use to accomplish these goals:

#!/bin/bash

REPO_BASE='[project foldername here (the folder above docroot)]'

# check running from the repository base
CURRENT_DIR=${PWD##*/}
if [ ! "$CURRENT_DIR" = $REPO_BASE ]; then
  echo 'Please be sure that you are running this command from the root of the repo.'
  exit 2
fi

# Figure out which environment to deploy to
while getopts "e:" var; do
    case $var in
        e) ENV="${OPTARG}";;
    esac
done

# Set the ENV to dev if 'e' wasn't passed as an argument
if [ "${#ENV}" -eq "0" ]; then
  ENV='dev'
fi

if [ "$ENV" = "dev" ] || [ "$ENV" = "test" ]; then
  # Path to the compiled css within the repo
  CSS_PATH='docroot/sites/all/themes/theme_name/css/'

  # Replace [user@host] with your real Acquia Cloud SSH host,
  # available in the AC interface under the "Users and keys" tab
  ACQUIA_LIVEDEV='[user@host]'

  # Get the branch name
  BRANCH_NAME="$(git symbolic-ref HEAD 2>/dev/null)" ||
  BRANCH_NAME="detached"     # detached HEAD
  BRANCH_NAME=${BRANCH_NAME##refs/heads/}

  echo "Pushing $BRANCH_NAME to acquia cloud $ENV"
  git push -f ac $BRANCH_NAME # This assumes you have a git remote called "ac" that points to Acquia

  echo "Compiling css"
  compass compile

  # Upload to server
  echo "Uploading styles to server"
  scp -r "$CSS_PATH" "$ACQUIA_LIVEDEV:~/$ENV/livedev/$CSS_PATH"

  # Pull the updates from the branch to livedev and clear cache
  echo "Deploying $BRANCH_NAME to livedev on Acquia"
  ssh "$ACQUIA_LIVEDEV" "cd ~/$ENV/livedev; git checkout .; git pull; git checkout $BRANCH_NAME"

  echo "Clearing cache on $ENV"
  cd docroot
  drush @[DRUSH_ALIAS].$ENV cc all -y

  echo "Deployment complete"
  exit
fi

# If not dev or test, throw an error
echo 'Error: the deploy script is for the Acquia dev and test environments'

Now I don't pretend to be a shell scripting expert and I'm sure this script could be improved; however, it might be helpful to explain a few things. To start with, you will need to chmod +x [path/to/file]. I always put scripts like this in a bin folder at the root of the repo. There are a few other variables that you'll need to change if you want to use this script, such as REPO_BASE, CSS_PATH and ACQUIA_LIVEDEV. Also, the script assumes that you have a git remote called "ac", which should point to your Acquia Cloud instance. Finally, the drush cache clear portion assumes that you have a custom drush alias created for your livedev environment for both dev and test; if not, you can remove those lines. To deploy the site to dev, you would run the command bin/deploy, or bin/deploy -e test to deploy to the staging environment.

Deploying to Prod

Wisely, Acquia doesn't offer livedev on the production environment, and this approach is probably more fragile than we'd like anyway. For production, we're going to use an approach that force-adds the stylesheets when necessary.

To do this, we're again going to rely on a git hook to help reduce the possibility of human error. Because our development philosophy relies on a single branch called "production" that we merge into and tag, we can use git's post-merge hook to handle the necessary force-adding of our stylesheet.

#! /bin/sh

BRANCH_NAME="$(git symbolic-ref HEAD 2>/dev/null)" ||
BRANCH_NAME="detached"
BRANCH_NAME=${BRANCH_NAME##refs/heads/}
CSS_PATH="docroot/sites/all/themes/theme_name/css/"

if [ "$BRANCH_NAME" = "production" ]; then
  compass compile
  git add $CSS_PATH -f
  git diff --cached --exit-code > /dev/null
  if [ "$?" -eq 1 ]; then
    git commit -m 'Adding compiled css to production'
  fi
fi

As with the post-checkout hook, you'll need to make sure this file is executable. Note that after the script stages the css files, git can confirm whether the staged files actually differ from what's committed, so the hook only creates a commit when there are changes. After merging a feature branch into the production branch, the post-merge hook gets triggered, and I can then add a git tag, push the code and new tag to the Acquia remote, and then use Acquia's cloud interface to deploy the new tag.

Conclusion

While this may seem like a lot of hoops to jump through to keep compiled CSS out of the repository, the deploy script actually fits very nicely with my development workflow, because it allows me to easily push up the current branch to dev for acceptance testing. In the future, I'd like to rework this process to utilize Acquia's Cloud API, but frankly, my tests with the API thus far have returned unexpected results, and I haven't wanted to submit one of our coveted support tickets to figure out why the API isn't working correctly. If you're reading this and can offer tips for improving what's here, sharing how you accomplish the same thing, or happen to work at Acquia and want to talk about the bugs I'm seeing in the API, please leave a comment. And thanks for reading!

Update

Dave Reid made a comment below about alternatives to LiveDev and the possibility of using tags to accomplish this. As I mentioned above, LiveDev works well for me (on dev and test) because it fits well into my typical deployment workflow. The problem I see with using tags to trigger a hook is that we are in the practice of tagging production releases, but not dev or test releases. Thinking through Dave's suggestion, however, led me to an alternative to LiveDev that still keeps the repo clean, using Git's "pre-push" hook:

#! /bin/sh

PUSH_REMOTE=$1
ACQUIA_REMOTE='ac' #put your Acquia remote name here

if [ "$PUSH_REMOTE" = "$ACQUIA_REMOTE" ]; then
  compass compile
  git add docroot/sites/all/themes/ilr_theme/css/ -f
  git diff --cached --exit-code > /dev/null
  if [ "$?" -eq 1 ]; then
    git commit -m "Adding compiled css"
  fi
fi


The hook receives the remote as the first argument, which allows us to check whether we're pushing to our defined Acquia remote. If we are, the script then checks for CSS changes, and adds the additional commit if necessary. The thing I really like about this approach is that the GitHub repository won't get cluttered with the extra commit, but the CSS files can be deployed to Acquia without livedev.

Oct 22 2014
Oct 22

As I mentioned in my hello world post, I've been learning Ansible via Jeff Geerling's great book Ansible for Devops. When learning new technologies, there is no substitute for diving in and playing with them on a real project. This blog is, in part, the byproduct of my efforts to learn and play with Ansible. Yet embedded within that larger goal were a number of additional technical requirements that were important to me, including:

  1. Setting up a local development environment using Vagrant
  2. Installing Drupal from a github repo
  3. Configuring Vagrant to run said repo over NFS (for ST3, LiveReload, Sass, etc.)
  4. Using the same playbook for both local dev and remote administration (on DigitalOcean)
  5. Including basic server security
  6. Making deployments simple

In this blog entry, we'll look at the first three requirements in greater detail, and save the latter three for another post.

Configuring Local Development with Vagrant

At first glance, requirement #1 seems pretty simple. Ansible plays nicely with Vagrant, so if all you want to do is quickly spin up a Drupal site, download Jeff's Drupal Dev VM and you'll be up and running in a matter of minutes. However, when taken in the context of the 2nd and 3rd requirements, we're going to need to make some modifications to the Drupal Dev VM.

To start with, the Drupal Dev VM uses a drush make file to build the site. Since we want to build the site based on our own git repository, we're going to need to find a different strategy. This is actually a recent modification to the Drupal Dev VM, which previously used an Ansible role called "Drupal". If you look carefully at that GitHub repo, you'll notice that Jeff accepted one of my pull requests to add the functionality we're looking for from this role. The last variable is called drupal_repo_url, which you can use if you want to install Drupal from your own repository rather than Drupal.org. We'll take a closer look at this in a moment.

Installing Drupal with a Custom Git Repo

Heading back to the Drupal Dev VM, you can see that the principal change Jeff made was to remove geerlingguy.drupal from the dependency list and replace it with a new task defined in the drupal.yml file. After cloning the Dev VM onto your system, remove - include: tasks/drupal.yml from the tasks section and add - geerlingguy.drupal to the roles section.
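
Sketched roughly, the relevant part of playbook.yml goes from this:

  tasks:
    - include: tasks/drupal.yml

to this (the rest of the playbook is unchanged):

  roles:
    - geerlingguy.drupal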

After replacing the Drupal task with the Ansible Drupal role, we also need to update the vars file in the local repo with the role-specific vars. There, you can update the drupal_repo_url to point to your GitHub URL rather than the project URL at git.drupal.org.
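
For example, with hypothetical values (point these at your own repository and branch):

drupal_repo_url: "git@github.com:yourname/yoursite.git"
drupal_core_version: "master"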

Configuring Vagrant for NFS

At this point, we would be able to meet the first two requirements with a simple vagrant up, which would provision the site using Ansible (assuming that you've already installed the dependencies). Go ahead and try it if you're following along on your local machine. But there's a problem, because our third requirement is going to complicate this setup. Currently, Drupal gets downloaded and installed on the VM, which makes it harder to edit the files in our IDE of choice and to run the necessary Ruby gems for tools like Sass and LiveReload.

When I was initially working through this process, I spent quite a few hours trying to configure my VM to download the necessary Ruby gems so I could compile my stylesheets with Compass directly on the VM. The biggest drawback for me, however, was that I didn't really want to edit my code using Vim over ssh. What I really needed was to be able to share my local git repo of the site with my Vagrant box via NFS, hence the 3rd requirement.

In order to satisfy this 3rd requirement, I ended up removing my dependency on the Ansible Drupal role and instead focused on modifying the Drupal task to meet my needs. Take a look at this gist to see what I did.

Most of the tasks in that file should be pretty self-explanatory. The only one that might be surprising is the "Copy the css files" task, which is necessary because I like to keep my compiled CSS files out of the repo (more on this coming soon). Here's a gist of an example vars file you could use to support this task.

One other advantage of our modified Drupal task is that we can now specify an install profile to use when installing Drupal. I currently have a pull request that would add this functionality to the Ansible Drupal role, but even if that gets committed, it won't solve our problem here because we're not using that role. We could, however, simply modify the "Install Drupal (standard profile) with drush" task to install a custom profile if that's part of your typical workflow. If I were installing a D7 site here, I would definitely use a custom profile, since that is my standard workflow, but since we're installing D8 and I haven't used D8 profiles yet, I'm leaving it out for now.

The next step we need to take in order to get our site working correctly is to modify the Vagrantfile so that we share our local site. You might have noticed in the vars file that the drupal_css_path variable points to a folder on my system named "a-fro.dev", which is, not surprisingly, the folder we want to load over NFS. This can be accomplished by adding the following line to the Vagrantfile:

config.vm.synced_folder "../a-fro.dev", "/var/www/a-fro.dev", :nfs => true

Note that the folder we point to in /var/www should match the {{ drupal_domain }} variable we previously declared. However, since this now points to a folder on our local system (rather than on the VM), we'll run into a couple of issues when Ansible provisions the VM. Vagrant expects the synced folder to exist and will throw an error if it does not, so you need to make sure you point to an existing folder that includes the path specified in {{ drupal_core_path }}. Alternatively, you could clone the a-fro.com repo into the folder above your drupal-dev-vm folder using the command git clone git@github.com:a-fro/a-fro.com.git a-fro.dev. Additionally, you will probably receive an error when the www.yml task tries to set permissions on the www folder. The final change we need to make, then, is to remove the "Set permissions on /var/www" task from provisioning/tasks/www.yml.

With this final change in place, we should now be able to run vagrant up and the the site should install correctly. If it doesn't work for you, one possible gotcha is with the task that checks if Drupal is already installed. That task looks for the settings.php file, and if it finds it, the Drush site-install task doesn't run. If you're working from a previously installed local site, the settings.php file may already exist.

Conclusion

This completes our first three requirements, and should get you far enough that you could begin working on building your own local site and getting it ready to deploy to your new server. You can find the final working version from this post on GitHub. In the next blog post, we'll look more closely at the last three requirements, which I had to tackle in order to get the site up and running. Thanks for reading.

Oct 22 2014
Oct 22

As I mentioned in my hello world post, I've been learning Ansible via Jeff Geerling's great book Ansible for Devops. When learning new technologies, there is no substitute for diving in and playing with them on a real project. This blog is, in part, the byproduct of my efforts to learn and play with Ansible. Yet embedded within that larger goal were a number of additional technical requirements that were important to me, including:

  1. Setting up a local development environment using Vagrant
  2. Installing Drupal from a github repo
  3. Configuring Vagrant to run said repo over NFS (for ST3, LiveReload, Sass, etc.)
  4. Using the same playbook for both local dev and remote administration (on DigitalOcean)
  5. Including basic server security
  6. Making deployments simple

In this blog entry, we'll look at the first three requirements in greater detail, and save the latter three for another post.

Configuring Local Development with Vagrant

At first glance, requirement #1 seems pretty simple. Ansible plays nicely with Vagrant, so if all you want to do is quickly spin up a Drupal site, download Jeff's Drupal Dev VM and you'll be up and running in a matter of minutes. However, when taken in the context of the 2nd and 3rd requirements, we're going to need to make some modifications to the Drupal Dev VM.

To start with, the Drupal Dev VM uses a drush make file to build the site. Since we want to build the site from our own git repository, we're going to need a different strategy. The make file is actually a recent change to the Drupal Dev VM, which previously used an Ansible role called "Drupal". If you look carefully at that role's GitHub repo, you'll notice that Jeff accepted one of my pull requests adding the functionality we're looking for: the last variable, drupal_repo_url, lets you install Drupal from your own repository rather than from Drupal.org. We'll take a closer look at this in a moment.

Installing Drupal with a Custom Git Repo

Heading back to the Drupal Dev VM, you can see that the principal change Jeff made was to remove geerlingguy.drupal from the dependency list and replace it with a new task defined in the drupal.yml file. After cloning the Dev VM onto your system, remove - include: tasks/drupal.yml from the tasks section and add - geerlingguy.drupal back to the roles section.
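
If it helps to visualize that swap, the relevant bits of the playbook end up looking something like the sketch below (only the two lines discussed here are shown; everything else stays as it came with the Dev VM):

roles:
  # ...the Dev VM's existing roles...
  - geerlingguy.drupal

tasks:
  # This is the include to remove; the role now handles the Drupal install.
  # - include: tasks/drupal.yml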

After replacing the Drupal task with the Ansible Drupal role, we also need to update the vars file in the local repo with the role-specific vars. There, you can update drupal_repo_url to point to your GitHub URL rather than the project URL at git.drupal.org.
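
For reference, a minimal set of overrides in that vars file might look something like this; the values are illustrative only, and the variable names are the ones referenced in this post, so double-check them against the role's README:

# Illustrative values; point these at your own repo and paths.
drupal_repo_url: "git@github.com:a-fro/a-fro.com.git"
drupal_domain: "a-fro.dev"
drupal_core_path: "/var/www/a-fro.dev"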

Configuring Vagrant for NFS

At this point, we would be able to meet the first two requirements with a simple vagrant up, which would provision the site using Ansible (assuming that you've already installed the dependencies). Go ahead and try it if you're following along on your local machine. But there's a problem: our third requirement complicates this setup. Currently, Drupal gets downloaded and installed on the VM, which makes it awkward to edit the files in our IDE of choice and to run the necessary Ruby gems for things like Sass and LiveReload.

When I was initially working through this process, I spent quite a few hours trying to configure my VM to download the necessary Ruby gems so I could compile my stylesheets with Compass directly on the VM. The biggest drawback for me, however, was that I didn't really want to edit my code using Vim over ssh. What I really needed was to be able to share my local git repo of the site with my Vagrant box via NFS, hence the 3rd requirement.

In order to satisfy this 3rd requirement, I ended up removing my dependency on the Ansible Drupal role and instead focused on modifying the Drupal task to meet my needs. Take a look at this gist to see what I did.
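
I won't reproduce the whole gist here, but the heart of the change is swapping the drush make/download step for a git checkout, along these lines (a sketch only, using the variables referenced in this post):

# Sketch: check out the site from the repo defined in the vars file.
- name: Check out the site repository.
  git: "repo={{ drupal_repo_url }} dest={{ drupal_core_path }} version=master"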

Most of the tasks in that file should be pretty self-explanatory. The only one that might be surprising is the "Copy the css files" task, which is necessary because I like to keep my compiled CSS files out of the repo (more on this coming soon). Here's a gist of an example vars file you could use to support this task.

One other advantage of our modified Drupal task is that we can now specify an install profile to use when installing Drupal. I currently have a pull request that would add this functionality to the Ansible Drupal role, but even if it gets committed, it won't solve our problem here because we're not using that role. We could, however, simply modify the "Install Drupal (standard profile) with drush" task to install a custom profile if that's part of your typical workflow. If I were installing a D7 site here, I would definitely use a custom profile, since that is my standard workflow, but since we're installing D8 and I haven't used D8 profiles yet, I'm leaving it out for now.
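
To make that concrete, parameterizing the install task could look roughly like the following; drupal_install_profile is a made-up variable name for illustration, and the account flags are just examples:

# Sketch only: drupal_install_profile is an assumed variable, not one the
# role or the Dev VM defines.
- name: Install Drupal (custom profile) with drush.
  command: >
    drush site-install {{ drupal_install_profile | default('standard') }} -y
    --account-name=admin --account-pass=admin
    chdir={{ drupal_core_path }}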

The next step we need to take in order to get our site working correctly is to modify the Vagrantfile so that we share our local site folder with the VM. You might have noticed in the vars file that the drupal_css_path variable points to a folder on my system named "a-fro.dev", which is, not surprisingly, the folder we want to load over NFS. This can be accomplished by adding the following line to the Vagrantfile:

config.vm.synced_folder "../a-fro.dev", "/var/www/a-fro.dev", :nfs => true

Note that the folder we point to in /var/www should match the {{ drupal_domain }} variable we previously declared. However, since this now points to a folder on our local system (rather than on the VM), we'll run into a couple of issues when Ansible provisions the VM. Vagrant expects the synced folder to exist, and will throw an error if it does not, so you need to make sure you point to an existing folder that includes the path specified in {{ drupal_core_path }}. Alternatively, you could clone the a-fro.com repo into the folder above your drupal-dev-vm folder with the command git clone git@github.com:a-fro/a-fro.com.git a-fro.dev. Additionally, you will probably receive an error when the www.yml task tries to set permissions on the www folder. The final change we need to make, then, is to remove the "Set permissions on /var/www" task from provisioning/tasks/www.yml.

With this final change in place, we should now be able to run vagrant up and the site should install correctly. If it doesn't work for you, one possible gotcha is the task that checks whether Drupal is already installed. That task looks for the settings.php file, and if it finds it, the Drush site-install task doesn't run. If you're working from a previously installed local site, the settings.php file may already exist.
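
For reference, that kind of check is usually implemented with a stat task plus a when condition, roughly like the sketch below (task and register names here are illustrative, so compare them with your copy of the task file):

- name: Check if Drupal is already installed.
  stat: "path={{ drupal_core_path }}/sites/default/settings.php"
  register: drupal_site_settings

- name: Install Drupal (standard profile) with drush.
  command: "drush site-install standard -y chdir={{ drupal_core_path }}"
  when: not drupal_site_settings.stat.exists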

Conclusion

This completes our first three requirements, and should get you far enough that you could begin working on building your own local site and getting it ready to deploy to your new server. You can find the final working version from this post on GitHub. In the next blog post, we'll look more closely at the last three requirements, which I had to tackle in order to get the site up and running. Thanks for reading.

Oct 08 2014
Oct 08

Welcome! This site has been a while in the making, but I'm really excited to share it with you. Back in Austin at DrupalCon, I was inspired by Jeff Geerling's "Devops for Humans" presentation and immediately decided that I needed to start using Ansible. Well, it's been a long road, but the site is now live and I'm really looking forward to sharing the ups and downs of the journey. Oh, and if you don't have it already, Jeff's book Ansible for Devops is well worth it. More soon...

Jul 23 2014
Jul 23

Metal Toad has had the privilege of working with DC Comics over the past two years. What makes this partnership even more exciting is that the main dccomics.com site also includes sites for Vertigo Comics and Mad Magazine. Most recently, Metal Toad was given the task of building the new search feature for all three sites. However, while it's an awesome privilege to work with such a well-known brand as DC, this does not come without a complex set of issues for the three sites when working with Apache Solr search and Drupal.

Problem: Say you have a Drupal installation that presents three separate domains to the public (using the great Domain Access module), and you'd like to set up a search system for these sites. On this project we used Search API (the base search module), along with Search API Solr Search (tying it to the Solr server) and Search API Views (letting you use Views to create your Solr queries) to give us Solr integration. Together, these modules give us Solr access and the ability to create one Solr index for all three domains.

I quickly found that Search API Solr doesn't currently have a way to store each entity's Domain Access information. For example, the Batman character entity has Domain Access assigned to it, which is a list of the domains that entity is available on. You cannot currently query by that information when searching, meaning searches were returning results from all three domains, not just the site the user was currently browsing.

One Solution: Add a field to each of the indexed entities with its Domain Access IDs (keep in mind that an entity can belong to multiple domains) on insertion into the Solr index, and alter the Solr query with a Filter Query (fq) on the current domain.

This thread on drupal.org has some good direction on this, but I needed a slightly more generic solution. So without further discussion, here's the code:

/**
 * Alter Solr documents before they are sent to Solr for indexing.
 *
 * @param array $documents
 *   An array of SearchApiSolrDocument objects ready to be indexed, generated
 *   from $items array.
 * @param SearchApiIndex $index
 *   The search index for which items are being indexed.
 * @param array $items
 *   An array of items being indexed.
 */
function MY_MODULE_search_api_solr_documents_alter(array &$documents, SearchApiIndex $index, array $items) {
  // Adds a domain field to all documents.
  foreach ($documents as $document) {
    // Assumes the nid is an indexed field.
    $nid = $document->getField('is_nid');
    // Grab the node from the db for its Domain Access info.
    $node = node_load($nid['value']);
    // Insert the domains into the document.
    foreach ($node->domains as $key => $value) {
      $document->addField('im_domain_access', $value, null);
    }
  }
}
 
/**
 * Lets modules alter a Solr search request before sending it.
 *
 * Apache_Solr_Service::search() is called afterwards with these parameters.
 * Please see this method for details on what should be altered where and what
 * is set afterwards.
 *
 * @param array $call_args
 *   An associative array containing all three arguments to the
 *   SearchApiSolrConnectionInterface::search() call ("query", "params" and
 *   "method") as references.
 * @param SearchApiQueryInterface $query
 *   The SearchApiQueryInterface object representing the executed search query.
 */
function MY_MODULE_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  $domain = domain_get_domain();
  $id = $domain['domain_id'];
  // Add our domain field to the Solr filter queries.
  $call_args['params']['fq'][] = 'im_domain_access:' . $id;
}

I hope this helps someone else who has the same problem to solve. Feel free to make any suggestions/ask any questions in the comments.

May 02 2014
May 02

Hello, my name is Aaron Winborn, and I was the recipient of the Society for Venturism's charity last year, to receive a future cryonic preservation at the facilities of the Cryonics Institute when the time comes. I'm indebted to many of you for your contributions, and I want to thank you from the bottom of my heart for the peace of mind that this gives me. I still have an admittedly diminished bucket list of things to do, but I don't stay up fretting over the things I'm incapable of accomplishing, due in large part to this assurance. I know the odds are still not in my favor, but at least I have a significantly better chance of revival than if I were buried or cremated.

That said, this past year has been both challenging and a blessing. Challenging because of all the difficulties of adjusting to the continuing degeneration brought on by Amyotrophic Lateral Sclerosis, ALS, better known in the United States as Lou Gehrig's Disease, or Motor Neuron Disease in other parts of the world. Although I am not yet completely paralyzed, or locked in as they say, I am confined to my wheelchair, and cannot move my hands or arms. My breathing capacity is no longer measurable, and I cannot go for more than thirty seconds without mechanical ventilation before I'm in distress. I am not yet on a vent with a tracheostomy, but we are considering that as the next step to prolong my life. It's a difficult decision to make, however, because of the extraordinary amount of care that I would require around the clock, not to mention the possible loss of quality of life. It's no wonder that only about ten percent of patients choose a tracheostomy, and only fifty percent of those go on to survive another year.

If that sounds scary to think about, well yes, it is. I could go on with a report of challenges we face, including the utter loss of the ability to speak or to understand spoken language, to the loss of the ability to eat or drink, to the devastation this awful disease has wreaked on my wife and our two young daughters, but I wouldn't be able to do it justice in a few short paragraphs, especially when I want to make sure that I leave space for the good things in my life. So on with the good.

First, after a year or so of having given up on reading anything not available on the Internet, I have reawakened my love of literature. I've rediscovered the ebook format, and am now devouring about two books a week. Mostly science fiction, but dotted with the occasional contemporary fiction. I'm also still participating in the Drupal community, with a friend who volunteers two hours a week and a tricked-up communication device.

Although I have been largely holed up this winter, I still manage to get out every couple of months to see a movie with some friends, and it's been fun sitting at the picture window and watching the girls play in the snow. Oh, how I look forward to the warmer seasons when I'll be able to "walk" the neighborhood again.

I also have been exploring new ways of communicating with my sweetie. It is certainly challenging, because of my inability to use the verbal bandwidth, and because so much of her time is taken up being both my primary caregiver and almost a single parent. On top of that, my day is so broken up and consumed with my caregiving that I find it difficult even to focus on an email, and I find myself consolidating my efforts and trying to cheat, by counting in my mind a quick CC in an email, or say a mention in a magazine article or a blog post, as a valid form of communication. But I know in my heart that doesn't fully count, so I continue to find new ways to let her know how special she is to me.

I am enjoying the simple things in life. I know that's a cliche, but as with all good cliches, there's an element of truth to it. From when our cat decided that my lap is warm and available for napping, to the spontaneous hugs my youngest daughter gives my leg, to watching my older daughter play computer games, to watching my wife's beautiful smile. These are the things that make up life, and I am so excited to have another day of it each morning I awaken.

Stay strong,
Aaron Winborn

This letter first appeared in the latest issue of Long Life magazine: http://www.cryonics.org/images/uploads/magazines/LLV46_N01.pdf

Jul 09 2013
Jul 09

I have been invited to the wonderful opportunity to come speak as part of the keynote address for Drupal Camp NYC 13 this weekend. You should come too, at least if you're in New York, and you have an interest in Drupal, or just want a chance to watch me make a fool out of myself on stage.

I have been invited, presumably, because of my work with Drupal's Media project. Although I would question their choice, as my work would not have been possible without the giants on whose shoulders I stand, not to mention the countless other developers who have contributed, and continue to contribute, to making the Media project as awesome as it is. I am, as the saying goes, but a drop.

But regardless of my thoughts on the matter, I'm the person they've chosen to come chew your ears for ten minutes on Saturday, July 13, in the evening, following Larry Garfield, one of those giants I mentioned. So please, come to Drupal Camp NYC 13, to experience what it means to be part of the Drupal community, to learn from some of the giants in some of the many offerings available in the sessions over the weekend, to help make history in one of the sprints following the camp, or just to hear an old fool pontificate because the organizers didn't know what they were getting themselves into.

By the way, I've been told that 100% of the proceeds from the camp after expenses will be donated to the Special Needs Trust that has been set up for my family and me, because of my ongoing struggles with Amyotrophic Lateral Sclerosis, otherwise known as ALS or Lou Gehrig's Disease. Just one more reason you need to be there. Plus, you'll get to hear me do my Stephen Hawking impersonation with the speech assist device I'll be using. Now if only they could make a speech-writing assist device...

Jun 17 2013
Jun 17

ONLINE RIGHT NOW ON REDDIT: http://www.reddit.com/r/IAmA/comments/1glbmg/iama_chief_assistive_techno...

On Tuesday, June 18, at one o'clock EDT, I will be on a panel for an Ask Me Anything (AMA) on Reddit - http://www.reddit.com/r/IAmA/ - The topic will be ALS and Assistive Technology.

So why should you attend?

First, it's only an hour or so, and it'll look better if more than one or two people show up. Besides, it'll be a great opportunity to spend your lunch hour with me. Being online simply makes it that much simpler.

Next, if you have any pressing questions, such as how do you manage to write awesome modules for Drupal when your hands are completely useless, then this is your opportunity!

My qualifications: I was diagnosed with Lou Gehrig's Disease about two years ago, just before my newborn's first birthday. At first, my arms and hands were weak, so I purchased a magic touch pad and keyboard for the mac. By September, I needed to supplement this with Dragon Dictate (Naturally Speaking on the PC). This combination served me well until last year, when my hands became too weak to control the touch pad, so I began looking at eye gaze solutions.

The first iteration was a custom eye gaze tracking system built by my father, from an open source concept over at http://www.eyewriter.org/ . It was cumbersome and difficult to calibrate, however, so beyond a couple of proof-of-concept demonstrations, I didn't really use it much.

Then around July I got a piece of head-tracking software for the Mac, which served me well for a few months. However, it was doomed from the start, as my neck strength was already failing.

So in September of last year, I finally got a Tobii PCEye, and used it to control the mouse, while I continued to use Dragon to dictate code and emails to the computer.

Finally, this January, my voice had degraded to such an extent that I gave up struggling to keep training and retraining Dragon, and now use the Tobii, in combination with Dasher, an open source word predictor for use with eye gaze systems, to control all aspects of the computer.

I'm planning to get a stand alone Tobii system next month, which will allow me to speak when I have lost that ability entirely, using my own voice banked with Model Talker, and have also begun a trial using a brain computer interface (BCI) for the possible loss of eye movement in the future.

By far the best thing I have done during the course of this debilitating illness has been to try to stay one step ahead, by training myself to use the next bit of software or hardware before I actually need it. I believe that where medicine has completely failed patients with ALS, technology has taken up the banner, and offers the only hope.

So join me Tuesday at 1:00 for an AMA on Reddit, to have a chance to chat with me live. I'll post the URL here and on Twitter: http://twitter.com/aaronwinborn shortly before the session starts.

Jun 13 2013
Jun 13

First, some background. My name is Aaron Winborn, and I am a developer for Drupal, which is an open source content management system used to make web sites. I am also the father of two young girls, who bring much joy into my life, and am married to a beautiful woman. You may have heard of her; her name is Wonder Woman.

[embedded content]

Just over two years ago, I was diagnosed with ALS, also known as Lou Gehrig's Disease. In short, that means that my mind will increasingly become trapped in my body as the motor neurons continue to die, and the muscles atrophy and waste away, until my diaphragm dies, bringing me with it.

My hands and arms are already completely paralyzed, and I'm confined to a power wheelchair. My diaphragm strength is largely diminished, and I am using breathing assistance 24/7, and I am at imminent risk for respiratory failure.

Even if I am fortunate enough to survive another year, which is only likely if I opt for a tracheostomy, my chances of surviving much longer become increasingly unlikely, as pneumonia becomes a specter haunting the late stages of ALS. There is no cure for this awful disease. My family gets to take care of all my needs and wipe the drool off my face, until I die, and leave them to pick up the pieces.

But yes, there is a silver lining to this all, such as it is. Kim Suozzi made a similar plea to the Internet a year ago today, and came up with the brilliant idea of freezing her body in the hopes of a distant advanced technology being able to revive her someday. Her body now rests at liquid nitrogen temperatures. http://www.reddit.com/r/AskReddit/comments/uvaqe/today_is_my_23rd_birthd...

I approached the organization responsible for raising the funds to help her out, the Society for Venturism, last November, and they agreed to take on my case as well. http://venturist.info/aaron-winborn-charity.html

But I am actually telling you all this in order to come up with a sort of reverse bucket list.

I've had a full life, with no regrets. I've done some travel, and have lived in some cool places, like the Netherlands and London. I've made lots of good friends, and continue to do so. I've paid my debt to society, working hard throughout my life as a teacher, a waiter, and an open source software developer. I've worked with a few interesting characters, like Elisabeth Kubler-Ross, and even lived in a Buddhist monastery before I met the woman of my dreams.

But I'm not ready to hang up my jacket quite yet.

When I was ten, I came up with three things that I wanted to be when I grew up: a teacher, a writer, and an astronaut. I've been two of the things, which is not bad. As an aside, I once told that to some people, and was asked, "Oh, what did you write?" To which I replied, "I didn't say I've written anything."

Joking aside, I'm looking for some grandiose ideas of things to do after I've died, and have hopefully been revived. And by that, I mean the sky's the limit. Don't worry about whether something seems technically feasible. This is your opportunity to think big. Like, go skinny dipping in the methane oceans of Neptune.

I want to do so much more with my life, but it's not in the cards this go around. I've become a spectator in life, living vicariously through my daughters, and relegated to typing with my eyes at fifteen words per minute on a good day.

But I'm not complaining. I awaken each morning as I always have, excited to take on the day. This is just a way to do some more brainstorming, to come up with a list of things to do during the next century, should we be so fortunate.

Stay strong,

Aaron

PS This question was originally banned from the Internet at http://www.reddit.com/r/AskReddit/comments/1g7h69/anything_awesome_i_sho...

May 20 2013
May 20

I want to thank the good folks at ThinkShout and ZivTech for organizing the Drupal DoGooders Happy Hour to benefit my family and me, as well as for giving people attending DrupalCon an opportunity to hang out and have some drinks. Even though I will not be in Portland this week, I plan to be present in spirit, beginning with a virtual appearance there. Join the crew this evening (May 20) at about 4:00 PDT to raise a glass in a toast to doing Drupal Good, and for a quick Q & A with me beginning around 4:30.

What a long strange trip it's been.

From Sunnyvale in 2007 when I conceived the Embedded Media Field module, to Boston DrupalCon in 2008, where I presented my first State of the Media session, to DC in 2009 where we launched the Media sprint supporting the Media suite of modules, to Chicago 2011 and Denver 2012.

These are the fun times that I recall fondly, doing good with my fellow cohorts. And by doing good, I mean really doing good things. Because where else in the business world can you spontaneously form a group of competitors, build something awesome, and give it freely to the rest of the world?

I'm really going to miss that this year. I mean that even though I continue to contribute to Drupal whatever and whenever I can, I am going to miss seeing you guys this year. There is a magic that happens when you get three or more Drupalers together in the same room. But circumstance has had its way with me these past two years and until we have a DrupalCon "Three Mile Island", I will have to be content with a virtual appearance.

So, join me on Monday evening to see my Stephen Hawking impersonation.

Feb 17 2013
Feb 17

Last night, I was sent an email from a friend whose email was hacked. I have seen a lot of that in the past year or 2, so I thought I would share my response to help train folks into better password habits. And seriously, I think it would be a good practice to install the Password Policy module on all your Drupal sites, to help enforce better habits for everyone. That module can be configured to require passwords similar to what I describe here, and much more, such as requiring that passwords be changed periodically.

Image CC-BySA from http://gawdoflolz.deviantart.com/art/My-password-322798011

Dear (Friend),

I got those emails; it does look like it's possible that your email was hacked. You did the right thing by changing your password. However, we need to do a few other things to try to minimize the damage.

1st, it is entirely possible, in fact probable, that they did not actually hack your computer. Identity theft is rampant, and in this interconnected world, does not even require any access to your computer.

That said, it is still possible that your computer has a virus. That would be the 1st thing to check. If you have an antivirus program, you need to ensure that it has been updated. That may require a fee, if you are using a paid antivirus program subscription.

If you do not have an antivirus program, I would highly suggest Avast, which I have been using for years. You can safely use the free version of it, as it is not crippled in any way from the paid version. You can find it at http://avast.com.

After, and only after you have scanned your computer for viruses, then you can get on with the business of securing your accounts against identity theft.

You will need to change your email password yet again, I am sorry to say. Additionally, you will want to change the security questions, which I believe that Yahoo will ask.

Treat the security questions as passwords in themselves, as these are most commonly used to hack in to an email account. That means that you should not use anything resembling what they actually ask for, such as your mother's maiden name or your 1st dog. That can be discovered with Google these days.

Next, a word about passwords. As you may have heard by now, you need to have a password that cannot be guessed. Unfortunately, that is not enough. You also need to have a mix of cases, at least one number, and a special character, such as a punctuation mark. Additionally, you need to have a different password for every account that you have.

I cannot stress that last paragraph enough. It is too easy for a hacker to get into, say an account with a forum, and use that to get into your Wells Fargo account. For instance, to use myself as an example, about 6 years ago, I accidentally broadcasted my password into a chat room, and about 2 weeks later, I got an email from a woman wondering where her Gucci bag was that she had purchased from my eBay account. It turns out that someone in Russia had hacked into my eBay account and listed about 100 fake Gucci bags.

I know that this sounds daunting, but it is necessary. Fortunately, you can use what is called an algorithm to remember your dozens of new passwords that you'll need to create. You can use that to create a new password for any site, and you will always remember it. Additionally, it will be secure for all intents and purposes.

Basically, you will choose a passphrase, modify it, and apply it to any site. For example, and please do not use this example, let's say you choose "apple" as your passphrase. We will modify that to have a punctuation mark and a number, so that it will be "@pp1E". Then you would append that to the 1st 4 characters of whatever site you are creating an account for. For instance, for eBay, your password would be "eBay@pp1E", and your Hotmail account would be "Hotm@pp1E". This will make your passwords immune to so-called dictionary attacks, where they try to figure out your password by entering random words from the dictionary.

Much easier to remember, right? And for your financial accounts, I would suggest creating yet another algorithm, as an extra layer of protection.

You can apply this same idea to those security questions that you see everywhere. Basically, you do not want to actually use a real answer, because it is far too easy for a determined hacker to read about that experience in your 1st car that you posted in Facebook. Instead, treat them with the same respect as your passwords. For instance, you might create an algorithm with your grandmother's cat's name that you apply to a site's question for referring to your own pet.

Once you have done this, you should be fairly safe.

Good luck.

As a postscript, and not to deflect responsibility, it is entirely possible that your email was not the one hacked. It may have been a more intelligent hack, where someone hacked into someone else's Facebook account, for example. From there, they may have grabbed the contacts and spoofed an email from it, sending spam and making it look like it came from yours. This is a more insidious form of identity theft that is becoming more common. Still, the best defense is to secure your passwords.

Feb 12 2013
Feb 12

TLDR: http://venturist.info/aaron-winborn-charity.html

So maybe you've heard about my plight, in which I wrestle Lou Gehrig in this losing battle to stay alive. And I use the phrase "staying alive" loosely, as many would shudder at the thought of becoming locked in with ALS, completely paralyzed, unable to move a muscle other than your eyes.

But that's only half the story. Wait for the punchline.

As if the physical challenges of adapting to new and increasingly debilitating disabilities were not enough, my wife and two young daughters are forced to watch helplessly as the man they knew loses the ability to lift a fork or scratch an itch, who just two years ago was able to lift his infant daughter and run with the 7-year-old. The emotional strain on my family is more than any family should have to bear. Not to mention the financial difficulties, which include big purchases such as a wheelchair van and home modifications, and ultimately round the clock nursing care, all of it exacerbated by the fact that we have had to give up my income both because of the illness and to qualify for disability and Medicaid.

Meet me, Aaron Winborn, software developer and author of Drupal Multimedia, champion of the open source software movement.

Years ago, I worked for the lady of death herself, Elisabeth Kübler-Ross, the author of On Death and Dying. Of course, I knew that one day I would need to confront death, but like most people, I assumed it would be when I was old, not in the prime of my life. Not that I'm complaining; I have lived a full life, from living in a Buddhist monastery to living overseas, from marrying the woman of my dreams to having two wonderful daughters, from teaching in a radical school to building websites for progressive organizations, from running a flight simulator for the US Navy to working as a puppeteer.

I accept the fact of my inevitable death. But accepting death does not, I believe, mean simply rolling over and letting that old dog bite you. Regardless of the prevalent mindset in society that says that people die and so should you, get over it, I believe that the reality we experience of people living only to a few decades is about to be turned upside down.

Ray Kurzweil spells out a coming technological singularity, in which accelerating technologies reach a critical mass and we reach a post-human world. He boldly predicts this will happen by the year 2045. I figured that if I could make it to 2035, my late 60s, that I would be able to take advantage of whatever medical advances were available and ride the wave to a radically extended lifespan.

ALS dictates otherwise. 50% of everyone diagnosed will die within 2 to 3 years of the onset of the disease. 80% will be gone in 5 years. And only 10% go on to survive a decade, most of them locked in, paralyzed completely, similar to Stephen Hawking. Sadly, my scores put me on the fast track of the 50%, and I am coming up quickly on 3 years.

Enter Kim Suozzi.

On June 10 of last year, her birthday, which is coincidentally my own, Kim Suozzi asked a question to the Internet, "Today is my 23rd birthday and probably my last. Anything awesome I should try before I die?" The answer that she received and acted on would probably be surprising to many.

On January 17, 2013, Kim Suozzi died, and as per her dying wish, was cryonically preserved.

She was a brave person, and I hope to meet her someday.

So yes, there we have it. The point that I am making with all this rambling. I hope to freeze my body after I die, in the hope of future medical technologies advancing to the point where they will be able to revive me.

The good news is that in the scheme of things, it is not too terribly expensive to have yourself cryonically preserved. You should look at it yourself; most people will fund it with a $35K-200K life insurance policy.

The bad news is that a life insurance policy is out of the question for me; a terminal illness precludes that as an option. Likewise, due to the financial hardships in store for us, self-funding is also out of the question.

When I learned about Kim Suozzi's plight, I reached out to the organization that set up the charity that ultimately funded her cryopreservation. The Society for Venturism, a non-profit that has raised funds for the eventual cryopreservation of terminally ill patients, agreed to take on my case.

Many of you reading this post have already helped out in so many ways. From volunteering your time and effort to our family, to donating money towards my Special Needs Trust to help provide a cushion for the difficult times ahead.

I am so grateful for all of this. It means so much to me and my family to know that there is such a large and generous community supporting us. I hate to ask for anything more, especially for something that may seem like an extravagance.

But is it really an extravagance?

If I were to ask for $100,000 for an experimental stem cell treatment, I doubt that we would even be having this conversation. No one in their right mind would even consider a potentially life-saving procedure to be an extravagance.

And what is cryonics, but a potentially life-saving procedure?

People choose from among many options for their bodies after death. Some choose to be buried, some choose cremation. Some choose to donate their bodies to science. That last is precisely what happens with cryonics: in addition to helping to answer the obvious question of whether future revival from cold storage will be possible, many developments in cryonics help modern medicine with the development of better preservation for organ transplantation and blood volume expanders.

Yes, I admit that the chances of it working are slim, but have you looked at the state of stem cell research for ALS lately? Consider that the only FDA approved medication to treat ALS, Rilutek, will on average add 3 months to one's lifespan, and you might begin to see my desperation.

But you should be happy with the life you've had. Why do you want to live forever?

The only reasonable response to that is to ask why do you want to die?

I love life. Every morning, even now with my body half paralyzed, I awaken with a new sense of purpose, excited to take on the day. There is so much I have yet to do. There are books to write, games to create, songs to sing. If I can get the use of my arms and hands again, there are gardens to plant, houses to build, space ships to fly. And oh, the people to love.

So please help me to realize this, my dying wish.

http://venturist.info/aaron-winborn-charity.html

"The most beautiful people we have known are those who have known defeat, known suffering, known struggle, known loss, and have found their way out of the depths. These persons have an appreciation, a sensitivity, and an understanding of life that fills them with compassion, gentleness, and a deep loving concern. Beautiful people do not just happen."

- Elisabeth Kübler-Ross

Jul 09 2012
Jul 09

This is a difficult post to write. I had been sitting on it for several months, trying to decide best how to convey the emotions I am feeling.

A few weeks ago, I was asked by a stranger if I had been in a car wreck, as I was wearing my neck brace. I replied that I was in a train wreck, and I was distracted by another person before I could finish my thought. He later asked where the train wreck took place, and I apologized for misleading him, and said I should be so lucky.

When I was first diagnosed with ALS last spring, I was overcome with grief, and spent many hours over the next month sobbing and screaming in rage. Eventually, life took over again, and I set all that aside for a while. After all, I still had two young daughters to help raise.

My Family

Most of my initial fears have come to pass. I remember the initial struggles lifting Sabina, just a few short months after her birth, before we even knew that there was anything close to serious going on. By January of last year, I was not even able to pick her up at all. Fortunately, she began crawling soon after, and had figured out how to crawl into my lap when she wanted me to hold her. I am so happy now when she climbs up on my lap to have me read a story to her or to watch a YouTube video.

And Ashlin has been very understanding and helpful through this whole process. As I have been unable to give hugs for several months, she came up with a method where she pulls my arms behind her back. I just about cried the first time that she did that. The last couple of months have been difficult, as I am no longer able to turn the pages of books when I read to her. At first, she went through a period when she didn’t want me to read to her at all. But then she spoke with Gwen about her feelings about it, and now she turns the pages for me.

Sadly, however, I know that the worst is yet to come. I will be visiting with the ALS clinic soon in order to get a power wheelchair. At the same time, we need to expand our bathroom to accommodate it, and put in a ramp in the back. And yet, that transition does not feel particularly big for me. Much more difficult has been the fact that I need artificial ventilation in order to breathe at night already. Or that I am unable to play frisbee anymore.

Aaron Welch with Advomatic recently set up a Special Needs Trust in my name. This unique trust allows us to use funds contributed by outside sources (we are legally not allowed to contribute to it), while protecting my eligibility to receive care-giving assistance through the state. The funds can be used to support my needs beyond what I will eventually receive from Medicare/Medicaid.

Basically, at some point down the road, I will be on disability and eligible for Medicare. The good news is that this happens almost automatically for a patient with ALS, due to some lobbying in the last couple of decades. As my needs increase, however, I will need more care and will need to become eligible for an attendant care program in PA, for which the income and asset allowance is almost nothing.

If and when I choose artificial, invasive ventilation, I will require 24 hour care. At that point, I would also need to be Medicaid eligible. This is where things get extremely challenging financially. For example, with attendant care / Medicaid, not only do I need to be basically destitute, but the state looks back over 5 years, and if they see that I have given any money to anyone in that time frame, they assume that I am trying to scam them, and dock the time, adding several months or more to the time before I would be eligible.

Based solely on the odds, there is a 50% chance that I will die in 1 to 2 years, although it’s a little more complicated than that. There are two predominant flavors of ALS: bulbar onset (the brainstem) and limb onset. Mine is the limb, which makes it only slightly more likely that I may live for another 3 or 4 years instead.

I want to point out that I am doing everything humanly possible to beat those odds. When they talk about the life expectancy of ALS patients, however, they are really talking about the "survivability", which is the point when a patient would require invasive ventilation to survive. Most of the 10% who will go on to live a decade or more have reached this point and are locked-in as well, trapped in their completely paralyzed bodies.

We have done what we can to protect what assets we do have. We have been consulting with an attorney as the information is very complex and we want to protect the future for our children as best as we can. We know that major purchases are down the road, such as medical and communications equipment, home modifications, a wheelchair, and an accessible van, all of which will be in the tens of thousands of dollars. The cost to support a patient with ALS in the later stages can run easily up to $100,000 a year or more. The funds in the Trust can help support me while I am alive. After I die, the funds remaining in the Trust will go to my wife, Gwen, to help raise our children.

For those who wish to contribute, we have set up a bank account for the Trust, along with a PayPal account as one option for contributing. For a one-time donation, click on the Donate button. To make a recurring monthly donation, you can select the amount you wish and click Subscribe. Alternatively, we can accept checks for the Trust; contact me for more information if you are interested in sending a check.

I am angry at this stupid disease. I know from my time with Elisabeth Kubler-Ross so many years ago that this is a natural part of the grieving process, and that eventually I will enter the acceptance phase. I even look forward to that.

But my daughters, Ashlin (8 years old) and Sabina (2 years old), will not have the benefit of the acceptance of my death when it happens. I am being robbed of my time with them, of watching them grow up. It is hard enough to know that I will most likely not be around to watch Ashlin graduate. To know that Sabina will most likely not have any more than a fleeting image of her father from early memories makes me cry.

To add insult to the injury, I am quickly losing my ability to participate in their lives, even now, becoming simply a spectator. What I would give to be able to pick up a frisbee and toss it to my daughters.

I want to thank everyone for all your ongoing support and care. This slow-moving train wreck is more than any family should have to endure.

May 18 2012
May 18

The following transcript is for the video at http://www.youtube.com/watch?v=XfPKKisE88w :

[embedded content]

Hello, my name is Aaron Winborn. I am a developer for Advomatic, the author of Drupal Multimedia, and a contributor and a co-maintainer of several Drupal modules, including the Media suite of modules.

Today, I will demonstrate a new feature of the Media: YouTube module: browsing and searching videos directly from YouTube, in the media browser itself. So first, let’s set up our environment.

We are assuming that you already know how to install Drupal. If not, you can find information at Drupal.org.

So right now we are at the modules administration page. We are interested in the modules under the Media package. You will need to install and enable the File Entity module (version 7.x-2.x), and the same version of the Media module.

We will not enable the included Media Field module; it is there for legacy purposes, and has been deprecated in favor of core’s File Field.

The Media Internet Sources module, included with the Media module, is a dependency of the Media: YouTube module, so we will enable that.

Next will be the Media: YouTube module, also version 7.x-2.x.

Finally, we will install the WYSIWYG module.

Let’s start by configuring WYSIWYG. We do that by going to Configuration > Content Authoring > WYSIWYG profiles. Note that I have also installed and enabled the Admin Menu module and the Admin Menu Toolbar module, which gives us the fancy drop-down menus for administration that you see here.

Now in order to use WYSIWYG, you need to have also installed a third-party WYSIWYG library, such as CKEditor or TinyMCE. You need to follow the instructions with the WYSIWYG module to install that, although it is quite simple actually. You just download and unpack the file into the sites/all/libraries folder. You can see that I am using CKEditor here.

The WYSIWYG module allows us to set up profiles for the various text formats on our site; in this demo, we will edit the Filtered HTML format.

Open up the buttons and plug-ins field set next. Then check the Media Browser check box. That will add the media browser button to our WYSIWYG editor, which we will see soon.

In order to use that however, we need to configure the filter in question. In fact, I believe that if we do not do this step 1st, we will get an error message, complete with a link to the format configuration page.

On this page, we need to check the box next to “Convert Media tags to markup”. That is the answer to the number 1 support question that we get in the Media queue, which is, “Why is there bracketed goobly gook instead of my images?”

So now, as we will see, everything should be working. Let's test it.

Here on the create article page, we see a fancy button on the body text area! Let’s click it.

And there we go.

These are thumbnails being pulled directly from YouTube. How about that?

And there is even a ghetto pager, or at least previous/next links.

And you can also search YouTube directly from our browser.

So now we will select a video and submit it. Add a title and save the node. And there we go.

And that’s it really. Well, almost.

There are some more settings, specifically here to control which tabs show up for WYSIWYG. Note that at the time of this demonstration, you will not have this functionality unless you install the patch over at node 1434118.

To complete the demo, we will also do the same for fields. Let’s add a field to hold YouTube videos. We will call it Media, and it will be a file field with a Media file selector widget.

Here, let’s reorder it as well for the demo.

We leave everything at their default settings.

Hold on, I forgot that we need to allow the YouTube URI scheme. And the video file type.

So now we will create a new article, and select the media.

And here we have all the tabs available to our browser, including the new and improved YouTube tab.

And also, let us look at another new feature of the media module: My files!

This has been a long-awaited feature for the Media module as well.

Now here comes the 2nd most asked question in the support queue: “How come there is a link to my file, rather than the file itself?”

Let’s just fix that now.

Now we are in the file type administration page, where we can configure the display for each of our file types. Note that we can also add fields to our files, although we are not going to do that in this demo.

We will jump to the video display...

No, we want to make sure that our large formatter is set up properly for YouTube. And it is, so let’s set up that as the formatter for our Media file field.

And there it is, as a generic file, which is simply a link to the file stream itself. We will change that to rendered file. And then we set the view mode to large.

While we are in there, we can do the same for our teasers. We will just set that to the preview view mode, which by default will display a thumbnail.

Whoops, I forgot to save it. Let’s just do that again.

And there is the video.

And there is the thumbnail.

Well done!

Apr 18 2012
Apr 18

I received a most heartwarming comment from a spammer the other day. To understand why it has affected me, you might first need a little background. I have been diagnosed with Stephen Hawking's ALS, or Lou Gehrig's disease. Based on just that fact, and the fact that my lab scores are dead even (and that I do not have access to Mr. Hawking's money or universal health care), I will most likely die sometime in the next year or two.

I announced my condition to the world through my blog last August, and have had many heartfelt wishes and offerings of support.

And lots of spam.

I use Mollom to protect my blog, which is powered by Drupal, and it does a fairly decent job. Still, about 50 instances of spam make it through the filters every month. I have to handpick those out, which is an increasingly difficult task, considering that I have little ability to use my hands anymore.

So this one comment stuck out, beginning as so many do, with concern for my condition, and well-intentioned advice, in this case to seek out Ayurvedic medicine. But then, the poster went on to chastise other spammers on this site, and I quote:

One more thing for some persons who comment here. Don't forget that you are humanbeing. I also come here to place my link here. But when a person is in such a condition and place his feelings here, then how the hell you are posting your links here for business purpose. I am also doing SEO. But not at the cost of humanity. It is the time to help and support him, materially or mentally. If you can't do that please don't at least post here.

It is my sincere request.

This is a good reminder that even as we continue to fight the war against spam, that our combatants are human. Like so many soldiers, they are fighting for money (for “business purpose”). They also create internal justifications for what they do (“I am also doing SEO”).

And that at least one of them has a heart of gold.

Thank you, Dillip.

[embedded content]

(Here's a better version of the song.)

Feb 22 2012
Feb 22

CSS3 animations are finally becoming a useful tool in the front-end developer's kit! Browser support is progressing; however, there is no IE support yet (surprised?), and Opera currently doesn't support animations, though it may in the future.

Despite the plethora of vendor prefixes to keep track of, one can really pull off some interesting animations; transforms and transitions anyone?

Check out the live demo at http://dabblet.com/gist/1867896/

The full demo is available over on dabblet.com if you want to play with the code, or grab a copy from github.

I know... I could have easily used images for the graphical elements and it's really an exorbitant amount of css to accomplish the goal, but where is the fun in that? There is also cpu/gpu use to take into account, but this little example isn't too heavy in that respect.

A couple of neat things about the animation:

  • Everything is html/css. No images. Yep, lots of divs/css, but no images. (Thanks Graeme Blackwood for the original css Druplicon, and Red Team Design for the css HTML5 logo)
  • @keyframes are super fun! Using percentages for steps, one can work through (I would guess) some very complex animations.
  • I did use a little JavaScript library, Lea Verou's -prefix-free to allow me to write plain css, without having to keep track of vendor prefixes. One could also use Prefixr afterwards to fill in the prefixes.

If you're using a modern browser and want to see a CSS3 transition in the wild, resize your browser! Our site uses transition: all 0.5s ease; to dress up the responsive layout transition.

Aug 23 2011
Aug 23

You may know me through my work with the Media suite of modules, and before that for my work with Embedded Media Field and Views Slideshow. You may have read my book, Drupal Multimedia, which I wrote before the birth of my second daughter, or seen me speak at a Drupal Camp or DrupalCon. You might have worked with me at a code sprint. Even if you haven't met me, you might have seen some of my handiwork through one of the many sites I've helped develop over the years with Advomatic. Drupal has been a central part of my life - one of my three loves.

Earlier this year, my family and I were given some devastating news. I was diagnosed with Amyotrophic Lateral Sclerosis (ALS), more commonly known in the US as Lou Gehrig's Disease, which is a motor neuron disease that is slowly killing the motor neurons in my body. This is an incurable, terminal illness: 50% of patients diagnosed are dead within 2-3 years, and a further 20% in five years. Only 10% of patients are still alive after 10 years, and the majority of those are "locked in", like Stephen Hawking, unable to move any part of their body other than their eyes. The senses and cognitive functionality are spared. The disease is a progressively degenerative Motor Neuron Disease (MND) that eventually kills every voluntary muscle in the body, until the diaphragm collapses, which is generally when the patient dies, unless they are given a tracheostomy (and don't succumb to pneumonia).

This rare disease is sporadic, with no known cause. It's considered an "orphan disease", with an incidence rate of about 1 in 100,000. My neurologist is hopeful in my case, as it's limb onset (rather than bulbar, or brain-stem, onset), and I'm on the younger side of the bell-curve. Still, an early prognosis is impossible, as the disease progresses randomly. Right now, he said that based on my EMG, he expects to see clinical signs in my legs by next year, although he can't say if I'll be in a wheelchair by then.

Currently, I am unable to lift more than about 1-2 pounds. It's been hard on my wife, as she's doing everything I'm unable to do (vacuuming, dishes, etc.), on top of raising our children and pursuing her Masters. Our house is the opposite of accessible, with no first floor bathroom and being on a steep hill. Thus, we're in the meantime looking to buy a ranch house that we'll be able to modify.

Although I have some visible atrophy in my shoulders, wrists, and thumbs, I'm feeling on top of the world. I'm trying to front-load my life now, to ensure I'm doing what I'm able to at full capacity while I still can. I'm hopeful; I've been expecting a medical revolution this decade, and if I can hold on with my wits, I might be able to take advantage of that. Even if that doesn’t happen, from reading about and connecting with other PALS (Patients with ALS), I've learned, unsurprisingly, that the longer-lived people are those who manage to maintain a positive attitude in life.

I'm fortunate to be working with Advomatic, both because my job (Drupal!) is something I'm still able (and love) to do, and because I'm surrounded by such a supportive team. I plan to develop for as long as I'm able; I'm looking into voice recognition software for when my hands ultimately go. In a race against time, I'm also 'voice banking', recording my voice so that when I ultimately lose my voice, the computer will sound roughly like me. (Hawking's complaint is that he sounds "like a damn Yankee").

I know that many of you will want to know how to help me and my family. I know from experience that the first response is to want to make food. I appreciate that, as a gift of food directly helps to sustain a person, but we have a large, helpful community through our daughter’s school that is delivering meals once or twice a week. (Although if you’re local to us, we could use some occasional help with the yardwork...)

You can also send donations to the ALS Center at Penn State Milton S. Hershey Medical Center
(see http://pennstatehershey.org/web/neurology/patientcare/specialtyservices/als for more details).

Finally, Advomatic will likely be setting up a fund to help for when I’m unable to work any longer, and to provide a legacy for my family. Please contact Aaron Welch if you are interested in contributing to this.

Thanks,
Aaron Winborn

"It's only when we truly know and understand that we have a limited time on earth - and that we have no way of knowing when our time is up, we will then begin to live each day to the fullest, as if it was the only one we had." - Elisabeth Kübler-Ross

Jul 28 2011
Jul 28

This is a tutorial I wrote for a friend, who asked how to get started in Drupal. I figured I might as well share it; sorry if this is too basic for most of you hacks...

Hands on is the best way to learn Drupal...

There's a handbook at http://drupal.org/documentation -- there are several aspects to knowing Drupal. I would first suggest getting a site up and running so you can learn the basics.

First you'll need to set up a webserver. Unless you have a specific site you're trying to build, I would first install AMP on your desktop (this will be LAMP, MAMP, or WAMP, depending on your OS; you can find out about that with Google). That allows you to run local web sites at http://localhost/ .

Then you'll need to install Drupal. Just download it from http://drupal.org/project/drupal (currently the 7.7 version, the tar.gz or zip file). There are easier ways to do that if you're comfortable with the command line; see http://drupal.org/project/drush if you want to get overwhelmed with details. But depending on your learning style, it's probably best to start basic.

You'll need to download that into the directory designated by your AMP setup (might be /var/www/drupal, ~/Applications/MAMP/htdocs/drupal, or whatever WAMP tells you to do if you win the windows lottery; hold on, I'll look it up just in case... looks like it's c:\wamp\www\drupal ).

If all goes well, you'll be able to go to http://localhost/drupal/ or http://localhost:8888/drupal/ (depending on your AMP installation & OS), where you'll see an Installation page.

At that point, you also need to create a database for your site. The easiest way to do this is to go to mysql from the command prompt and type

create database drupal;

However, as I recall, most AMP servers (other than Linux) don't do a good default setup, and it can be tricky to get mysql working from the terminal. So the next easiest way is to navigate to http://localhost/phpMyAdmin/ or http://localhost:8888/phpMyAdmin/ and find the link to create a new database. You'll need to figure out (or figure out how to change) your default root password. It might be root, depending on your AMP.

Then you'll enter your database credentials from the installation screen, follow the other instructions, and you're off!
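If you're curious where those credentials end up: the installer writes them into sites/default/settings.php. In Drupal 7 the generated block looks roughly like the sketch below (the database name, username, and password are placeholders for whatever you created above; you normally never have to type this by hand).

// sites/default/settings.php (normally written for you by the installer).
// Placeholder values; substitute the database and user you created above.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'drupal',
  'username' => 'root',
  'password' => 'root',
  'host'     => 'localhost',
);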

Did I forget anything crucial?

Jul 08 2011
Jul 08

The Society for Venturism has chosen me as the recipient of its charity for this year, to hopefully offer me cryonic preservation when the time comes. And this month, Longecity, an excellent forum for the discussion of issues related to extending the lifespan of humans, has offered up a matching grant of up to a thousand dollars to help out! So help out! Please.

Mar 11 2011
Mar 11

Announcing The First Annual Drupal Open Source Game Contest!

The BOF today was great! We made a couple of minor changes to the base rules. Also, nosro has volunteered to make a simplified version of the rules, so we can plug that in (and click through to the longer "fine print" explanations for more clarification).

My intent for launching this effort is to encourage us all to finish some games! There are only a very few Drupal games out there, plus a smattering of flash games in nodes, and the potential for using Drupal as a framework for gaming, as everyone in this group knows, is an untapped reservoir of awesomeness.

We're planning to build up & market the site & contest over the next few months, with a date of May 1 to open for submissions of "intent" by game author wannabes. Then we'll use DrupalCon London in August as a final deadline, and open it up for judging.

We'll also solicit prizes, à la the IFComp, so that if, for example, we have 14 prizes and 23 submissions, then the top 14 authors will get to choose from the prizes in order.

Basically, once open for submissions, authors will be able to create a "secret" node on the site, where they announce their intent to create a game. During this phase, things are still hush-hush -- no one's generally allowed to talk about their projects before the August deadline. However, I think it might be useful on several levels to have at the very least a running list of authors. The arbiters will also be available to answer questions during this time, for instance, "I'm making a WarCraft clone in Drupal, will that count?" (Yes, but don't use any copyrighted info. And good luck on that one.) or "I'm going to create a flash game and stick it in a node. And I don't want to release the source. Does that count?" (No, wtf get out of here.)

At London, we'll open the contest for judging by the public. Although we're still working out the details, I think that IFComp has the best idea there: a simple 1-10 rating by people. People will be able to base their ratings on whatever they want, and are encouraged (required?) to write their reasons in the comments ("Excellent game play, great graphics," "Tight integration with core Drupal functionality," "Breakout wtf?"). Authors will need to release both a playable version of the game at this time, and a recipe of the modules & custom code. Judges are not required to examine the code if they don't want to (or don't know how to). But they are more than welcome to base some (or all) of their rating on the source.

None of this is written in stone, but I feel that this is a solid start, and imagine that anything further will just be tweaks.

So a reminder, if you're planning to write a game for this contest, make sure your public release coincides with the August deadline -- you'll be disqualified if you make the site generally available before then. (Although, please, please recruit a few beta-testers when you're getting closer. They won't be able to vote on your game, but they will do much towards making your game sweet.)

Also, discussed at the BOF, if your game is to be commercial, all parts of it MUST be freely available to judges during the judging period. (If you want to charge for level ups or whatever, that's your business, but that cannot be part of the game for the intents & purposes of this contest.) Also, you'll need to plan to write up a recipe of how you built it, which must include 100% GPL released code. (You don't need to release data, themes, or images, with a few caveats discussed in the rules.)

So w00t! Let's make some games! And make Morbus Iff proud...

(Crossposted at g.d.o.)

Jan 21 2011
Jan 21

This past month I've been busy getting the Styles module ready for release. This module does the heavy lifting for display of Media objects. For those that don't know yet, Media, the File Browser to the Internet, is the future of media handling for Drupal 7. It exposes the underlying streams API of Drupal core, allowing for fieldable media entities (fields on files), mixing up images and audio, local files and YouTube.

File Styles for Drupal

Basically, the Styles module allows you to select a style for display with a field (or in a View or with WYSIWYG), and the field will be displayed according to that style's predefined criteria. For instance, as shown in the above diagram, you might set the display as 'Medium', and the actual displayed file or remote stream will be selected according to the file's mime type (as an image, a video, in an mp3 player, etc).
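To make that idea concrete, here is a minimal, purely illustrative sketch of the underlying concept (this is not the Styles module's actual API, and the preset names are made up): one named style resolves to a different preset depending on the file's mime type.

// Illustrative only, not the Styles API: a single "style" name maps to
// different presets depending on the file's mime type.
function example_styles_resolve_preset($style, $mime_type) {
  // Hypothetical mapping; the real module manages these per container in the UI.
  $map = array(
    'medium' => array(
      'image' => 'image_medium_linked',
      'video' => 'video_player_480',
      'audio' => 'audio_mp3_player',
    ),
  );
  list($type) = explode('/', $mime_type);
  return isset($map[$style][$type]) ? $map[$style][$type] : 'generic_file_link';
}

// So an image/jpeg file styled as 'medium' resolves to the image preset,
// while a video stream with the same style resolves to the video player.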

The UI

The tricky part has been creating the UI for this. We're just about there -- you can finally create and modify styles with the supplied Styles UI module.

File Styles menu item

You can add new styles, and modify the provided styles.

File Styles listing

When editing the style, you'll note all the 'containers' to the left (Image, Audio, YouTube, etc.), with a preview of its display. Just select the preset for that container/style combo, and you're set.

Edit File Styles

However, we still need to add a way to create and edit the presets themselves. As an example, you might want to create a preset for images that will display a Medium image style linked to the original media, or a YouTube thumbnail that will popup the video in a ColorBox when clicked.

Technically, this isn't too difficult (and is on the way). However, from the standpoint of the administrator, this has been a bit of a stumbling block, though I'm leaning towards doing some in-place editing of presets within this same screen. (For that matter, we could even go so far as moving this entire screen, perhaps as a dialog, within the media field display screen.) At the same time, I'm afraid of clutter; the concept is hard enough to explain as it is. I'd love to hear from anyone with more of an eye towards UX for the administrator.

Thoughts? Comments?

Dec 03 2010
Dec 03

While working on two sites with huge lists of fields in views, I threw together the views_ui_tamer module to make life just a bit easier.

The module will probably not make it to drupal.org, but there is a patch here: http://drupal.org/node/981094

This should work for views 2 and 3.

views_ui_tamer adds two ease-of-use features to the Views UI.

1. The jQuery quicksearch plugin is used to filter checkbox options in the add_item_form. This lets you quickly find a field, argument, filter, etc. to add.

2. Large numbers of fields in a view can make the UI unwieldy. These columns are now collapsed to a minimum height, and can be expanded and toggled by clicking the section's title. (A minimal sketch of how a module can wire up features like these follows below.)
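For anyone wondering how a small helper module attaches this kind of behavior, here is a minimal sketch (the form ID and file names are assumptions for illustration, not taken from the actual patch):

// Minimal sketch: attach the quicksearch plugin and a small helper behavior
// to the Views UI "add item" form. Form ID and file names are assumptions.
function views_ui_tamer_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'views_ui_add_item_form') {
    $path = drupal_get_path('module', 'views_ui_tamer');
    // quicksearch filters the checkbox list as you type; the second file
    // collapses the long field columns until their section title is clicked.
    drupal_add_js($path . '/jquery.quicksearch.js');
    drupal_add_js($path . '/views_ui_tamer.js');
  }
}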

Please feel free to add to this by contacting me for access to the git repo or in the issue mentioned above.

Get the module here: git://github.com/awolfey/views_ui_tamer.git

This is kind of a test repo for me on git. I'm just getting started with git, so I would appreciate any feedback if there is a better way to do this. Thanks!

Oct 04 2010
Oct 04

Concept

The Open Guilds website at http://openguilds.org/ (and its cousin http://drupalguilds.org/ for Drupal) is a central location for grassroots certifications for practitioners of various Open Source software. The idea is that rather than a top-down certification process where people pay money to an organization to take a test, people instead prove to their peers their ability to craft in their field.

The organization of Open Guilds is set up to foster a peer review system. Anyone may create a guild within the organization, and set up their own certification procedures at various levels within the guild. Each guild will stand on its own merit, and gain the benefit of camaraderie from other related guilds.

Each guild within Open Guilds will have its own charter, which will specify, among other things, its system of governance and procedure; and a scope of certification. A charter may, for instance, specify that it is to be run by parliamentary procedure with an elected council, or that it is an open democracy.

The Open Guilds organization itself is run by a council of Vested Members, who are individuals actively involved over a period of time. The meetings themselves are conducted on-line in a visible format, with all matters decided by the majority vote of Vested Members.

Other than the right to vote on procedural matters, membership to Open Guilds is free to all individuals, who may join as Journey Members. Journey Members may join any guild, according to its charter (which may, for instance, require an invitation or approval vote by its council, specific certification within another guild, or other requirements).

A guild may specify many certifications, each of which may have requisite certifications. Although a guild’s charter may specify otherwise, in most cases, a certification will require a test or demonstration of ability to be evaluated by one or more certified members of the guild. The reputation of an individual guild will depend on the continuing diligent and truthful evaluation by the guild members.

Bylaws (version 0.9)

Definitions

  • Open Guilds: Open Guilds refers to the organization these Bylaws apply to. Its website is at http://openguilds.org/.
  • Drupal Guilds: Drupal Guilds refers to a subset of the Open Guilds, and is fully subject to the rules laid out in these Bylaws. Its website is at http://drupalguilds.org/.
  • Individual: An Individual must be a human being. Corporations, Organizations, or any other collection of individuals may not be considered to be an individual for purposes of these Bylaws.
  1. Purpose of the Open Guilds

    Open Guilds acts as an organization for the training, certification and professionalism of practitioners of various Open Source crafts. The website at http://openguilds.org/ serves to facilitate the purposes of the organization.

  2. Membership

    Three types of membership are available within Open Guilds: Individual Members, Vested Members, and Organizational Members. All Individual and Vested Members are also collectively referred to as Open Guilds Members.

    1. Individual Membership

      Any individual may become an Individual Member by registering with the site and agreeing to uphold and be bound by the Open Guilds Bylaws and any Terms of Service. Unless otherwise provided by the Bylaws, there are no fees or dues required of Individual Members. Individual Members may join any Guild that is open for Membership, and may participate fully according to that Guild’s charter, as permitted per the Bylaws.

      1. Participation in General Business

        Individual Members may participate in any general business or discussion as permitted per the Bylaws, although they may not vote on items or elections unless they are also Vested Members.

    2. Vested Membership

      An Individual Member who has been a member for at least a month shall also be considered a Vested Member, so long as they have also logged in to the site at least once during the past month.

      1. Dues and Fees

        At this time, there are no dues or fees to be a Vested Member. However, should the Bylaws in the future assign dues or fees for Vested Membership, then after the probationary period, such fees are non-refundable.

      2. Probationary Period

        There is a thirty day probationary period during which the Individual Member is considered to be a Probationary Member. Once the probationary period has passed, then the Individual Member shall also be considered to be a Vested Member.

      3. Lapse of Vested Membership

        If a Vested Member has not logged onto the site for at least thirty days, they shall revert to Individual Membership, losing all privileges accorded to a Vested Member until such time as they again log on. If more than 90 days have passed since the last login, the individual must begin a new Probationary Period upon logging on again.

      4. Voting Rights in General Business

        Once an Individual Member has passed the probationary period, they receive full voting privileges as provided per the Bylaws.

    3. Open Guilds Membership

      Each Individual and Vested Member is also considered an Open Guilds Member. Open Guilds Members may participate fully in the discussions of any business, except as may be restricted per the Bylaws.

    4. Organizational Membership

      Corporations, Organizations, and other collectives of individuals may apply to join Open Guilds with various levels of sponsorship, pending approval by the Open Guilds Meeting. So long as any required fees and dues are fully paid, they may be active and receive full organizational benefits as allowed per the Bylaws. Under no circumstance shall any Organizational Member receive any voting privileges. However, individuals within an organization that is an Organizational Member are not otherwise precluded from participating with Open Guilds in all respects as allowed per the Bylaws, as each individual is deemed a free agent who represents herself or himself.

    5. Revocation of Membership

      At any time, an individual or organization may choose to revoke their own Membership. Additionally, there may be other provisions in the Bylaws providing for a revocation of a Membership. In no case, unless otherwise provided per the Bylaws, shall any previously paid dues or fees be refunded to an individual or organization having their Membership revoked. If a Membership has been revoked, then so shall all resultant privileges provided per the Bylaws.

  3. Open Guilds Meeting

    All business of the Open Guilds is conducted in an ongoing online agenda referred to as the Open Guilds Meeting, unless otherwise specified per the Bylaws. All Open Guilds Members may fully participate in any discussion pertaining to an item on the rolling agenda, but only Vested Members may add a motion to the agenda or vote on the resolution of any such item, except as otherwise specified per the Bylaws.

    1. Agenda

      At any time, any Vested Member may add a motion to the agenda. The agenda shall be updated immediately, and the item and any resultant discussion shall be publicly visible.

      1. Regular Business

        Any motion other than a Bylaws change requires a simple majority of the voting quorum to pass.

      2. Bylaws Changes

        Any motion regarding changes to the Bylaws will require a two-thirds majority vote of the voting quorum to pass.

    2. Voting

      A motion requires a vote to determine its resolution, and all voting members shall have a period of time in which to record their vote on the matter. A vote may be changed by that member at any time during the allowed period, and will be recorded in an open manner unless otherwise specified per the Bylaws.

      1. Voting Period

        Unless otherwise determined by the Bylaws, a motion will be open for voting for two weeks, or until the number of ‘Aye’ votes exceeds a majority (or, for motions changing the Bylaws, two-thirds) of active Vested Members, whichever comes first.

      2. Voting Quorum

        The votes from a quorum of three Voting Members, or one-third of all active Vested Members, whichever is greater, are required for the successful resolution of any motion.

      3. Voting Resolution

        For normal business items, a motion passes if a number of ‘Aye’ votes greater than the majority of the voting quorum is reached. For items requiring changes to the Bylaws, then a two-thirds majority of the voting quorum is instead required. Unless otherwise specified by the Bylaws, all votes shall be recorded publicly, and may be changed at any time before resolution of the motion.

  4. Chartered Guilds

    Any Vested Member may petition to the Open Guilds Meeting to create a new Guild. This petition consists of a title, description, and charter. Once created, the petition will require a number of signatures of other Individual Members. After those signatures have been acquired, the new charter will be added to the Open Guilds Meeting for consideration as a motion. A majority vote as outlined in the Open Guilds Meeting section of the Bylaws will result in the Guild being accessible and displayed to the general public.

    1. Guild Charters

      A Guild Charter shall indicate the format of a Guild’s membership and governance. When creating a petition to form a Guild, various options shall be available to the petitioning member, which will collectively form the charter.

      1. Petition to Form a Guild

        Once a Petition to Form a Guild has been created, it will be published and available from a page of pending petitions. Any Open Guilds Member may comment on and/or sign the petition. Once at least three (3) Individual Members have signed the petition, it will be submitted for approval as a motion to the Open Guilds Meeting as a Regular Business agenda item, and removed from the page of pending petitions. An Open Guilds Member may only create up to one Petition to Form a Guild per thirty days.

      2. Petition Options

        A Petition to Form a Guild will include at least the desired Title, Description, and Purpose of the Guild. It will also list the form of Governance from available options, currently 'Democratic' or 'Parliamentary'.

      3. Initial Membership

        The Petition must also name at least one initial Journey Member. If the charter for the guild is created, then all such named members shall be made Journey Members for that guild. Newly created Parliamentary Guilds shall be considered to be in Special Session as per the Bylaws, until one or more Guild Masters have been elected from their members.

    2. Guild Governance

      An individual Guild will be governed either as a democracy or a parliament of elected masters, as specified by its charter.

      1. Guild Business

        Except as further specified per the Bylaws, Business and Voting of an individual guild shall be conducted as outlined for General Business by the Open Guilds. Motions to their agenda shall be posted, and shall be open for voting amongst eligible members for up to two weeks, or until half of the guild’s quorum have voted Aye, whichever comes first, except as otherwise noted. However, an Individual Guild may not consider any changes to the Bylaws, nor changes to their charter; such items may only be considered by the General Meeting of the Open Guilds.

      2. Voting Quorum

        For purposes of determining the outcome of a vote, the voting quorum for an individual guild will consist of at least one eligible member, or one third of all eligible members, whichever is greater. However, if this quorum is less than three, then a motion must remain posted for at least two weeks before resolution.

    3. Democratic Guild

      Any Journey Member may vote on any item posted to a Democratic guild’s business. The quorum consists of all Journey Members who have logged on at least once in the past thirty days.

    4. Parliamentary Guild

      Only Masters of an individual Guild may vote on motions posted to a Parliamentary Guild’s business. The quorum consists of all Guild Masters who have logged on at least once in the past thirty days.

      1. Reversion of Governance

        In the case that there are no Guild Masters who have logged on at least once in the past thirty days, then that Parliamentary Guild shall be held in Special Session until such time as at least one Guild Master has again logged on.

      2. Special Session

        If a Parliamentary Guild has no Guild Masters who have logged onto the site within the past thirty days, then any Journey Members of that guild may conduct business, voting as though the Guild were a Democratic Guild. Any motions passed during Special Session will be binding. However, if any Guild Master of that guild logs on during a Special Session, then the governance reverts to Parliamentary; any votes by Journey Members on pending motions shall be rendered invalid; and the clock for consideration of a motion shall be reset, so that a motion shall be open for another two weeks from that time.

      3. Veto by Grand Master

        In the case of a Parliamentary Guild, the Grand Master of that Guild shall have the power of veto. As such, the Grand Master may choose to veto any motion, causing it to fail, unless two-thirds of the voting membership vote Aye. If the Grand Master vetoes a motion, the motion will remain open for further consideration until either two weeks from the original posting or one week from the veto, whichever is later, or until two thirds of eligible Masters have voted Aye.

    5. Apprenticeships

      Any Individual Member of the Open Guilds may join any Chartered Guild as an Apprentice.

      1. Discussion Participation

        Apprentices of a Chartered Guild may participate fully in any business or discussions within that guild, but may not vote in their meetings.

      2. Mentors to Apprentices

        Apprentices may be assigned volunteer Mentors from other Apprentices or Journey Members of that guild, who agree to help that apprentice advance through the guild. Additionally, any member within the guild may offer help or advice beyond the formal structure of mentorship. Mentorships are detailed elsewhere in the Bylaws.

      3. Certifications

        A guild may offer one or more certifications. An Apprentice may at any time actively work towards certification, and may petition that guild for certification as outlined elsewhere in the Bylaws.

    6. Journey Members

      An Apprentice of a Chartered Guild may petition that guild to become a Journey Member. Such a petition will be considered as a Regular Business item on that guild’s agenda.

      1. Failed Petition

        If a motion to become a Journey Member within a guild fails, then the Apprentice may not again petition that guild to become a Journey Member for at least thirty days.

      2. Voting Privileges

        Journey Members who are part of a Democratic Guild shall receive voting privileges within that guild. They do not receive voting rights within a Parliamentary Guild, nor may they add a motion to the business of that guild, except as otherwise specified within the Bylaws. However, they may otherwise participate freely in that guild’s business.

    7. Guild Masters

      A Journey Member may petition that guild to become a Guild Master. Such a petition will be considered as a Regular Business item on that guild’s agenda.

      1. Probationary Period as Journey Member

        A Journey Member must have held the position of Journey Member for at least thirty days within a guild before being allowed to petition to become a Guild Master.

      2. Failed Petition

        If a motion to become a Guild Master within a guild fails, then the Journey Member may not again petition that guild to become a Guild Master for at least thirty days.

      3. Voting Privileges

        Guild Masters who are part of either a Democratic or Parliamentary Guild shall receive voting privileges within that guild. They may also add motions to that guild’s business.

    8. Grand Masters

      Parliamentary Guilds may choose to elect a Grand Master from among their Guild Masters. In such a case, the Grand Master shall hold the position for at least a year from the date of election, or until another election is held, whichever comes first.

      1. Elections

        Any Guild Master may petition to hold an election for Grand Master of that guild, as a regular agenda item, so long as the post is vacant, or there has not been an election within the past year. If the motion passes, and at least one Guild Master has declared an intention to run for the post of Grand Master as a nominee, within thirty days of the motion’s passage, then an election shall be held. The election shall begin thirty days from the motion’s passage, and voting and discussion may occur for at least two weeks. An election shall be considered special business, and the outcome shall be determined secretly, so that the individual votes of members are not disclosed. The results of the election shall also not be disclosed until two weeks have passed. Members may change their vote at any time during the election. Elections are not subject to veto by the standing Grand Master. All Journey Members who are part of the guild may vote in the election, and the quorum is determined from active Journey Members of that guild. Journey Members may vote for one or more nominees. The nominee of an election who receives the most votes wins. In the case of a tie of two or more nominees, one or more run-off elections of two weeks each shall be conducted from the tied nominees, until a clear winner is declared or until the other nominee(s) concedes the election.

      2. Veto Powers

        The Grand Master of a Parliamentary Guild receives the power to veto a regular motion of that guild as detailed in the Bylaws.

      3. Motion to Impeach

        A Guild Master may enter a motion to impeach the Grand Master. If the motion passes, then a new motion will be created for special session by all Journey Members of that guild to consider. A majority of Journey Members must vote Aye to impeach, and all such votes shall be held secret. If the motion in the special session passes, then the Grand Master shall be stripped of the title and powers, and the post shall be vacant until another election is held.

  5. Certifications

    Individual Guilds may choose to offer certifications to their members. Any achieved certifications by a member will be listed on their profile page.

    1. Motion to Create a Certification

      Any Journey Member or Master eligible to do so may add a motion to create a certification, which must contain the certification’s Title and Description. Upon adoption of the motion, the Certification will be available for achievement by any member of the Guild, as per the Bylaws.

    2. Motion to Bestow/Achieve Certification

      Any eligible Journey Member or Master may add a motion to achieve a certification from their guild. Likewise, an eligible member may add a motion to bestow a certification on any member or apprentice of their guild. If the motion passes, then the certification shall be bestowed on that member; otherwise, a similar motion for that member may not be made for at least thirty days.

    3. Listing of Certifications

      A list of all achieved/bestowed certifications shall be maintained for the guild. Also, all such certifications shall be listed on that member’s profile on the site.

  6. Mentorships

    One or more Mentors may be assigned to individual Apprentices or Journey Members. Mentors are responsible for helping to guide and advocate for that member. Such mentorships are voluntary, and have no binding power, other than that such mentorships shall be publicly listed on the guild and the relevant members’ profile pages. Mentorships may be assigned only by agreement of both parties, and may be revoked at any time by either party.

(These Bylaws are also cross-posted at the Drupal Guild Group.)

Sep 29 2010
Sep 29

Here is an idealized transcript of the recent Drupal Dojo session I did regarding the proposed Drupal Guilds, which you can watch here if you didn't catch it earlier:

Also, here are the slides I used during the presentation, which the headers generally refer to, if you want to follow along the text:

Medieval Guilds

My name is Aaron Winborn, and through the Drupal Dojo, I will now present some ideas I’ve had brewing around a concept for a Drupal Guild system for peer-review certifications. You might know me as a developer with Advomatic and contributor to Drupal for nearly five years. You probably don’t realize that I’ve also been involved with the Sudbury model of education for about twelve years, and am currently on the Board of Trustees for my daughter’s school.

First, let’s take a trip back through history.

Feudal Europe 2
Medieval Feudal Europe was not a fun place to live. Despite the image of knights and ladies held in the collective subconscious, everyone was a slave, or serf. All of Europe was parceled up into fiefdoms, where everyone worked to their death on the land. However, by the end of the early middle ages, a few barons came up with the idea of freeing their serfs and charging rent. When others realized they were making easily four times as much by doing this, within a century nearly all serfs had been freed.

The church moved from the center of town to the outskirts, to be replaced by the marketplace, where a new mercantile class sold their wares. Wanting a better life, a large number of them began to educate themselves.

Medieval Guilds 2
Medieval Guilds, deriving from an earlier system of pooling of gold and resources by craftspeople united by craft, quickly rose in prominence throughout Europe. They fostered professionalism with their system of apprenticeship, and the post of Guild Journeyman became the goal for nearly all freemen. This system even survived and thrived in early America, well into the nineteenth century.

Medieval Universities
In the early 11th century, a new type of guild arose, a guild of students, or ‘universitas’. These people met in their homes and churches, pooling together their resources to hire teachers to provide themselves with the best possible education. This system of education became so popular that it attracted the attention of the church and state, who formed competing guilds of teachers, who worked hard to attract paying students.

Modern Universities
Of course, we all know which system survived: by the twelfth century, there were over 100 established universities in Europe. Sanctioned by the state, the university system became a gatekeeper for the more lucrative professions, such as law and medicine.

Modern Guilds 2
Some guilds have managed to exist into the twenty-first century, particularly in the creative arts, such as the Screen Actors Guild and the Writers Guild. Several other systems and organizations resemble modern guilds, such as the Bar Association and many unions.

Modern Guilds 3
I read this morning a paper by a professor at MIT that advocates a return to the medieval guild system, arguing that the twentieth century system of working for the company, with pensions and whatnot, is obsolete. In fact, the modern consultant in many ways resembles the old guild journey member, traveling between clients, working for multiple companies, and sharing their expertise and knowledge with other crafts people.

Professional Certifications
Many developers seek professional certifications, which fall into three categories: Corporate, Proprietary, and Professional. Corporate certifications exist within a single corporation, and are generally non-transferable, but might look good on a resume. Proprietary certifications, or product-specific certifications, are good for a specific software or hardware product, but are also not relevant outside that product. Professional certifications are more general, serving to increase the level of practice, and are generally industry-wide, such as the IEEE Certified Software Development Professional certification. Finally, there are some government mandated and overseen certifications, known as licensures.

Professional Certifications 2
There are hundreds of available software certifications, of dubious quality, most of which are given by the software manufacturer to anyone able to pay a buck and fool the test.

Professional Peer Review
Professional Peer Review is used in place of or to augment the value of testing. It is used in many professions, such as in Health Care, Accounting, Law, Engineering, Aviation, Forest Fire Management, and even Software Development. It has roots in Scholarly Peer Review, used in academia to determine whether an article is worthy of publication. Some criticisms of Peer Review are that it's subject to gatekeeping and elitism, that it's not designed to easily detect fraud, and that it can be a lengthy and expensive process.

Sudbury Model Schools
Based on the original Sudbury Valley School, Sudbury model schools are democratic, age-mixed, non-coercive environments for children. Part of the model involves certification; students are free to structure their days as they wish, but if they want to use certain equipment (such as a sewing machine, computers, or a dark lab), they must demonstrate proficiency and receive certification. In many of these schools, graduation is also a reflection of this process: students wishing to graduate will create a committee of peers and advisers, who will help guide the student through a portfolio creation, culminating in a defense of their thesis to the entire school body, who will vote on whether to award a diploma.

Free as in Drupal!

Open Guilds
My initial idea for creating the Drupal Guilds (as a subset of Open Guilds) came about during the development of the latest incarnation of DrupalDojo.com. Part of the initial discussions for that site included “learning tracks”, where users could flag their favorite lessons and sessions, forming “playlists” to be shared with others. I realized along the way that this could serve as an excellent form of certification.

For instance, a developer interested in learning how to present multimedia in Drupal could work through all the lessons in a specific track and come out the other end able to put their new-found knowledge to work. It would simply require people putting in the time to oversee their education and award a certification. Considering that thousands of people already donate hundreds of thousands of hours to development and documentation, it simply requires a framework to funnel some of this expertise into an Open Source University.

Rather than a corporation coming along and offering a $500 certification test, we can create this system in a grassroots fashion, bootstrapping and certifying ourselves. Certifications would be free, with reputations as strong as the developers’ due diligence.

Open Guilds 2
The structure I propose involves allowing any person to join the Open Guilds as an Apprentice. Anyone may also join any individual Guild. Each Guild itself offers its own certifications, which are overseen by Journey Members and Masters, who, after presentations by the Apprentice and discussions, vote on whether to award a certification. The Masters of a Guild are likewise elected within that Guild.

Open Guilds 3
Finally, the creation of new guilds itself follows similar tenets: anyone may propose a new guild charter, which is determined by a majority of Vested Members of the Open Guilds. The proposed charter would state the title and purpose of the guild, as well as (perhaps) its form of governance, such as by democratic vote of all members, or the representative vote of its council of Masters.

Open Guilds 4
Vested Members would be members of the entire organization who have a vested stake (most likely determined by paying dues, and/or by the length of their membership and the frequency of their involvement). However determined, Vested Members would oversee the General Business of the Drupal Guilds and Open Guilds.

(Cross-posted at groups.drupal.org/guilds.)

Sep 22 2010
Sep 22

We provide on-site (New York City area) or remote Drupal development, specializing in advanced configurations and customization.

  • new development
  • module development
  • custom theming
  • site maintenance
  • site upgrades
  • admin training

If you would like to begin a discussion about your project, please contact us.

Sep 10 2010
Sep 10

Drupal Guilds

This is probably late notice for many of you, but I'm going to give a presentation about Drupal Guilds today (Friday, September 10) at 5 GDT (12 EDT), about an hour and a half from this posting. The presentation will be recorded, along with a slide show, and I'll make sure to post about it again after that if you miss the meeting.

Background: Drupal Guilds, a subset of Open Guilds, is a planned organization to foster a grassroots peer review certification system for crafters of Drupal. With its roots in early medieval guilds, particularly in the universitas, it seeks to help create and foster masters of Open Sourcery.

I've been working on and off on this concept over the past year; it's a culmination stemming largely from both my work with Open Source Software and my involvement with Sudbury model schools over the past twelve years.

Please see my post at Drupal Guilds & Open Guilds for more information. Hope to see you soon!

Edit: Here's an embedded slide show of the presentation; video soon!

Aug 25 2010
Aug 25

As KarenS predicted, there have been many Drupal-free nights from me this summer, while my household adjusts to its newest member. Though thanks to Advomatic, my days have continued to be filled with Drupal.

Sabina Rose

For instance, I've been hard at work on Node Reference / Embed Media Browser (nrembrowser), the perhaps unfortunately named, but descriptive, module that does just what it says. I did a session for it in an earlier incarnation at Drupal Dojo, and have banged at it incessantly, until it's now just about ready for prime time! As I said in the session, that project is, I believe, a reasonable alternative for folks who want Media-like functionality for Drupal 6.

What makes it viable is its integration with WYSIWYG and Styles. Styles is a brainstorm I had during the development of Media, in which I tackled how to display wildly differing data (such as photos and videos) with a single field formatter. I've been refining the model in version 2 of the Drupal 6 version, where I've moved it into a class structure and made lots of improvements. Coming soon to Drupal 7...

And then there's Views Slideshow: Galleria, which makes it easy to create a Galleria slideshow in Drupal. And some refactoring of Embedded Media Field, version 3, in which we begin the migration to Drupal 7's Media, and bring some of its magic (of a unified storage system) back to 6...

But what I've been dying to tell you all is about my few, but precious, Drupal nights. Though it's gone from 12 hours or more a week to three, in the wee hours of the morning at that, I've still been banging away. However, I've changed directions entirely, following another passion of mine: games! And no, not playing them, making them... I've said too much already, as it's not quite ready for release. But suffice to say, I think it'll be fun, and of course, it'll be available through the good ol' GPL...
