Mar 26 2015

We’ve already discussed why accessibility matters and whetted your appetite by providing a few simple things you can do to make your site more accessible. Now it’s time to look at accessibility in forms, a common component of just about every site. Inaccessible forms can affect users with a wide array of disabilities - screen reader users, keyboard-only users, people with cognitive disabilities, and the mobility impaired - and create unnecessary barriers for what could be loyal website visitors and customers. Here are five commandments to get you started:

1. Thou shalt use the <label> tag.

The label tag associates a single label with a single form element (text fields, radio buttons, checkboxes, etc.) and gets read by the screen reader when focus is placed on the form element, so that a user knows what type of input is expected. The “for” attribute associates the label with the form element by matching the value of the “for” attribute to the value of the element’s “id” attribute. This association should be unique: one label per form element. Using the id attribute fosters this, since ids should be unique on a page as well. It also has the side effect that clicking the label selects an option or puts focus on the element, which is great for someone who has difficulty being accurate with a mouse. And remember, the placeholder attribute should not be used as a replacement for a label.

Example:

<label for="myinput">Label text</label>
<input id="myinput" name="textfield" type="text">

2. Thou shalt use fieldsets and legends for grouping.

Use fieldset and legend tags for grouping related input elements, particularly with sets of radio buttons and checkboxes. Use the legend tag with the fieldset to associate a description with the group of elements. The text of the legend element will be read by the screen reader when a person tabs to an input element in the group, whereas text placed outside the legend would be skipped entirely by screen readers when tabbing from form element to form element.

Example:

<fieldset>
    <legend>Choose your favorite sport:</legend>
    <input id="soccer" type="checkbox" name="sports" value="soccer">
    <label for="soccer">Soccer</label><br>
    <input id="basketball" type="checkbox" name="sports" value="basketball">
    <label for="basketball">Basketball</label><br>
    <input id="quidditch" type="checkbox" name="sports" value="quidditch">
    <label for="quidditch">Quidditch</label><br>
</fieldset>

3. Thou shalt clearly indicate required fields as part of the label text.  

Optimally, required fields would have “(required)” in the label text, as not all screen readers read an asterisk by default. Alternatively, if most fields are required, non-required fields could have “(optional)” as part of the label instead of polluting every label with “(required)”. Either way, required fields should be clearly indicated.

Example:

<label for="myinput">Label text (required)</label>
<input id="myinput" name="textfield" type="text">

4. Thou shalt keep all form elements accessible by a keyboard and in logical tab order.  

Not everyone uses a mouse. Some users navigate a form by tabbing from element to element. By default, form elements can receive keyboard focus, but it is possible to break this behavior with JavaScript, so be sure it stays intact. In addition, the tab order of the form elements should be logical. Bouncing from First Name to Address to Last Name is likely to make a non-sighted user question whether they’re missing something. Generally, staying away from tables for form layout and letting forms keep their linear display is all that’s required, since tab order follows the order of the form elements in the source code. But there are times when tables are needed for forms and the tab order needs to be addressed with the tabindex attribute. Use the tabindex attribute with caution, however.

5. Thou shalt clearly provide feedback to the user.

Whether it’s a validation error or a successful form submission, users should be clearly notified. Validation errors should include which field was rejected and why. The W3C provides an excellent overview of common approaches for the placement of these success and error notifications.

For basic forms, following these commandments will create a form usable by a wider audience.  However, forms can get vastly more complicated.  Sometimes there IS a need to associate one label with multiple form elements.  Sometimes a label needs to be hidden.  Sometimes we need to use custom controls like star ratings and share buttons.  A couple great resources for more in-depth form accessibility are:

Go forth and accessify!

Additional Resources

5 Simple Things You Can Do To Make Your Site More Accessible | Mediacurrent Blog Post
Why Accessibility Matters | Mediacurrent Blog Post
Web Accessibility Terminology | Mediacurrent Blog Post

Mar 26 2015

By default, Search API (Drupal 7) reindexes a node when the node gets updated. But what if you want to reindex a node / an entity on demand, or via some other hook, i.e. outside of the update cycle? It turns out to be quite a simple exercise. You just need to execute this function call whenever you want to reindex a node / an entity:

search_api_track_item_change('node', array($nid));

See this snippet at dropbucket: http://dropbucket.org/node/1600. search_api_track_item_change() marks the items with the specified IDs as "dirty", i.e. as needing to be reindexed. You need to supply this function with two arguments: the entity type ('node' in our example) and an array of entity ids you want reindexed. Once you've done this, Search API will take care of the rest as if you'd just updated your node / entity. Additional tip: in some cases it's worth clearing the field cache for an entity before sending it to reindex:

// Clear field cache for the node.
cache_clear_all('field:node:' . $nid, 'cache_field');
// Reindex the node.
search_api_track_item_change('node', array($nid));

This is the case when you manually save / update entity values via SQL queries and then want to reindex the result (for example, the Radioactivity module doesn't save / update a node, it directly manipulates data in SQL tables). That way you ensure that Search API reindexes the fresh node / entity and not a cached one.
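
For example, in the radioactivity-style case above, where values are written with direct queries, a small helper can keep the cache clear and the reindex call together (a sketch; the table, column and function name are made up):

/**
 * Writes a value directly to a custom table and asks Search API to reindex.
 *
 * Hypothetical helper: 'mymodule_counters' and the 'count' column are examples.
 */
function MYMODULE_update_counter_and_reindex($nid, $count) {
  // Update the custom table directly, bypassing node_save().
  db_update('mymodule_counters')
    ->fields(array('count' => $count))
    ->condition('nid', $nid)
    ->execute();
  // Clear the field cache so Search API indexes fresh data, not a cached copy.
  cache_clear_all('field:node:' . $nid, 'cache_field');
  // Mark the node as needing reindexing; Search API handles the rest.
  search_api_track_item_change('node', array($nid));
}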

Mar 26 2015

I had a case recently where I needed to add custom data to the node display and wanted this data to behave like a field, even though the data itself didn't belong to a field. By "behaving like a field" I mean you can see that field in the node display settings and control its visibility, label and weight by dragging and dropping it. So, as you may have understood, the hook_preprocess_node / hook_node_view_alter approach alone wasn't enough. But we do Drupal, right? Then there should be a clever way to do what we want, and here it is: hook_field_extra_fields() comes to help! hook_field_extra_fields() (docs: https://api.drupal.org/api/drupal/modules!field!field.api.php/function/hook_field_extra_fields/7) exposes "pseudo-field" components on fieldable entities. Neat! Here's how it works. Let's say we want to expose a welcoming text message as a field for a node; here's how we do that:

/**
 * Implements hook_field_extra_fields().
 */
function MODULE_NAME_field_extra_fields() {
  $extra['node']['article']['display']['welcome_message'] = array(
    'label' => t('Welcome message'),
    'description' => t('A welcome message'),
    'weight' => 0,
  );
  return $extra;
}

As you see in the example above, we used hook_field_extra_fields() to define an extra field for the 'node' entity type and the 'article' bundle (content type). You can actually choose any other entity type that's available on your system (think user, taxonomy_term, profile2, etc.). Now if you clear your cache and go to the display settings for Node -> Article, you should see the 'Welcome message' field available. OK, the last bit is to actually force our "extra" field to output some data; we do this in hook_node_view():

/**
 * Implements hook_node_view().
 */
function MODULE_NAME_node_view($node, $view_mode, $langcode) {
  // Only show the field for nodes of the article type.
  if ($node->type == 'article') {
    $node->content['welcome_message'] = array(
      '#markup' => 'Hello and welcome to our Drupal site!',
    );
  }
}

That should be all. Now you should see a welcome message on your node page. Please note: if you're adding an extra field to another entity type (like taxonomy_term, for example), you should do the last bit in that entity's _view() hook.
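
For example, if the same pseudo-field were defined for taxonomy terms, the render step would move into hook_taxonomy_term_view(); a rough sketch (the 'tags' vocabulary check is just an example):

/**
 * Implements hook_taxonomy_term_view().
 */
function MODULE_NAME_taxonomy_term_view($term, $view_mode, $langcode) {
  // Only show the pseudo-field for terms of a hypothetical 'tags' vocabulary.
  if ($term->vocabulary_machine_name == 'tags') {
    $term->content['welcome_message'] = array(
      '#markup' => t('Hello and welcome to our tag pages!'),
    );
  }
}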

UPDATE: I put code snippets for this tutorial at dropbucket.org here: http://dropbucket.org/node/1398

Mar 26 2015

I'm a big fan of fighting Drupal's inefficiencies and bottlenecks. Most of these come from contrib modules. Every time we install a contrib module we should be ready for the surprises that come on board with it.

One of the latest examples is Menu item visibility (https://drupal.org/project/menu_item_visibility), which turned out to be a big troublemaker on one of my client's sites. Menu item visibility is a simple module that lets you define link visibility based on a user's role. Simple and innocent... until you look under the hood.

The thing is, Menu item visibility stores its data in the database and runs a query for every menu item on the page. In my case it produced around 30 queries per page and 600 queries on menu/cache rebuild (which normally equals the number of menu items you have in your system).

The functionality this module gives to an end user is good and useful (according to drupal.org, 6,181 sites currently report using it), but as you see, storing these settings in the db can become a huge bottleneck for your site. I looked at the Menu item visibility source and came up with an "in code" solution that fully replicates the module's functionality but stores the data in code.

Step 1.

Create a custom module and call it something like Better menu item visibility, machine name: better_menu_item_visibility.

Step 2.

Let's add the first function that holds our menu link item id (mlid) and role id (rid) data:

/**
 * Returns a list of mlids with the roles that have access to those link items.
 *
 * You can change the list to add new menu items and/or roles.
 * The list is presented in the format:
 * 'mlid' => array('role_id', 'role_id'),
 */
function better_menu_item_visibility_menu_item_visibility_role_data() {
  return array(
    '15' => array('1', '2'),
    '321' => array('1'),
    '593' => array('3'),
    // Add as many combinations as you want.
  );
}

This function returns an array of menu link item ids and the roles that can access each item. If you already have Menu item visibility installed, you can easily port the data from the db table {menu_links_visibility_role} into this function.
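
If you already have data in that table, a quick throwaway snippet along these lines (run via drush php-eval, for example) can dump it in the array format used above; the variable names are just for illustration:

// Build the mlid => roles array from the Menu item visibility table.
$data = array();
$result = db_query('SELECT mlid, rid FROM {menu_links_visibility_role}');
foreach ($result as $row) {
  $data[$row->mlid][] = (string) $row->rid;
}
// Prints PHP you can paste into better_menu_item_visibility_menu_item_visibility_role_data().
var_export($data);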

Step 3.

And now let's do the dirty job and process the menu items:

/**
 * Implements hook_translated_menu_link_alter().
 */
function better_menu_item_visibility_translated_menu_link_alter(&$item, $map) {
  if (!empty($item['access'])) {
    global $user;
    // Menu administrators can see all links.
    if ($user->uid == '1' || (strpos(current_path(), 'admin/structure/menu/manage/' . $item['menu_name']) === 0 && user_access('administer menu'))) {
      return;
    }
    $visibility_items_for_roles = better_menu_item_visibility_menu_item_visibility_role_data();
    if (!empty($visibility_items_for_roles[$item['mlid']]) && !array_intersect($visibility_items_for_roles[$item['mlid']], array_keys($user->roles))) {
      $item['access'] = FALSE;
    }
  }
}

In short, this function skips the access check for user 1 and for users with the 'administer menu' permission, and checks access for the menu link items listed in better_menu_item_visibility_menu_item_visibility_role_data(). As you see, instead of calling the database it gets its data from code, which is really fast. Let me know what you think and share your own ways of fighting Drupal's inefficiencies.

Mar 26 2015

Drupal Views offers us a cool feature: ajaxified pagers. When you click on a pager, it changes the page without reloading the main page itself and then scrolls to the top of the view. It works great, but sometimes you may encounter a problem: if you have a fixed header on your page (one that stays on top as you scroll), it will overlap the top of your view container, so the scroll to top won't land quite where it should and the header will cover the top part of your view.

I've just encountered that problem and am making a note here for future me and, probably, for you, about how I solved it. If you look into the Views internals, you'll see it uses an internal Drupal JS framework command called viewsScrollTop that's responsible for scrolling to the top of the container. What we need here is to override this command to add some offset to the top of our view.

1. Overriding JS Command

Thankfully, Views is flexible enough and provides hook_views_ajax_data_alter(), so we can alter the JS data and commands before they are sent to the browser. Let's overwrite the viewsScrollTop command with our own. In your custom module put something like this:

/**
 * Implements hook_views_ajax_data_alter().
 *
 * Allows altering the commands which are used on a Views ajax request.
 *
 * @param $commands
 *   An array of ajax commands.
 * @param $view
 *   The view which is requested.
 */
function MODULE_NAME_views_ajax_data_alter(&$commands, $view) {
  // Replace Views' method for scrolling to the top of the element with your
  // custom scrolling method.
  foreach ($commands as &$command) {
    if ($command['command'] == 'viewsScrollTop') {
      $command['command'] = 'customViewsScrollTop';
    }
  }
}

Now, every time Views emits a viewsScrollTop command, we replace it with our own custom one: customViewsScrollTop.

2. Creating custom JS command

OK, a custom command is just a JS function attached to the global Drupal object. Let's create a JS file and put this into it:

(function ($) {
  Drupal.ajax.prototype.commands.customViewsScrollTop = function (ajax, response, status) {
    // Scroll to the top of the view. This will allow users
    // to browse newly loaded content after e.g. clicking a pager
    // link.
    var offset = $(response.selector).offset();
    // We can't guarantee that the scrollable object should be
    // the body, as the view could be embedded in something
    // more complex such as a modal popup. Recurse up the DOM
    // and scroll the first element that has a non-zero top.
    var scrollTarget = response.selector;
    while ($(scrollTarget).scrollTop() == 0 && $(scrollTarget).parent()) {
      scrollTarget = $(scrollTarget).parent();
    }
    var header_height = 90;
    // Only scroll upward.
    if (offset.top - header_height < $(scrollTarget).scrollTop()) {
      $(scrollTarget).animate({scrollTop: (offset.top - header_height)}, 500);
    }
  };
})(jQuery);

As you may see, I just copied the standard Drupal.ajax.prototype.commands.viewsScrollTop function and added a header_height variable that equals the height of the fixed header. You may play with this value and set it to your own taste. Note the name of the function, Drupal.ajax.prototype.commands.customViewsScrollTop: the last part should match your custom command name. Save the file in your custom module dir; in my case it's custom_views_scroll.js.

3. Attaching JS to the view

There are multiple ways to do it; let's go with the simplest one: in your custom_module.info file add scripts[] = js/custom_views_scroll.js and clear caches. That will make the file load automatically on every page. That's all: from now on, your Views ajax page scrolls are powered by your customViewsScrollTop instead of the stock viewsScrollTop. See the difference?
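
If you'd rather not declare the script in the .info file, a hedged alternative is to add it from code, for example in hook_init() (the module name and path below are assumptions matching the file name used above):

/**
 * Implements hook_init().
 *
 * A minimal sketch: adds the custom scroll command handler on every page so it
 * is already available when a Views ajax response arrives.
 */
function custom_views_scroll_init() {
  drupal_add_js(drupal_get_path('module', 'custom_views_scroll') . '/js/custom_views_scroll.js');
}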

Mar 26 2015

If you have a fieldgroup in a node, you may want to hide it under some conditions. Here's how to do that programmatically. First, we need to preprocess our node like this:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
}

The tricky part starts here: if you google for "hide a fieldgroup" you'll get lots of results referencing the use of field_group_hide_field_groups(), like this snippet: http://dropbucket.org/node/130.

While this function works perfectly on forms, it is useless if you apply it in hook_preprocess_node() (at least I couldn't make it work). The problem is that fieldgroup uses a 'field_group_build_pre_render' function that gets called at the end of the preprocessing call and populates $variables['content'] with the field group and its children, so you can't alter this in hook_preprocess_node(). But as always in Drupal, there's a workaround. First, let's define some simple logic in our preprocess_node() to determine whether we want to hide a field group:

/**
 * Implements hook_preprocess_HOOK().
 */
function MODULE_NAME_preprocess_node(&$variables) {
  if ($variables['uid'] != 1) {
    // You can call this variable any way you want, just put it into $variables['element'] and set it to TRUE.
    $variables['element']['hide_admin_field_group'] = TRUE;
  }
}

OK, so if the user's id is not 1 we want to hide some fantasy 'admin_field_group'. We define the logic here and pass the result into the element array to be used later. As I noted previously, field group uses 'field_group_build_pre_render' to combine fields into a group, so we just need to alter that call in our module:

/**
 * Implements hook_field_group_build_pre_render_alter().
 *
 * Hide the admin field group on a node display.
 */
function MODULE_NAME_field_group_build_pre_render_alter(&$element) {
  if (!empty($element['hide_admin_field_group']) && isset($element['admin_field_group'])) {
    $element['admin_field_group']['#access'] = FALSE;
  }
}

We check for our condition and, if it is met, set the field group's access to FALSE, which means: hide the field group. So now you should have a field group hidden on your node display. Of course, this is the simplest case; you may add dependencies on the node view mode, content type and other conditions, so the sky is the limit here. You can find and copy this snippet at dropbucket: http://dropbucket.org/node/927. I wonder, do you have another way of doing this?
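
For instance, to hide the group only on teasers of a particular content type, the preprocess check could be extended along these lines (still a sketch; the bundle and group names are hypothetical):

/**
 * Implements hook_preprocess_HOOK().
 *
 * Hide the admin group on article teasers for everyone except user 1.
 */
function MODULE_NAME_preprocess_node(&$variables) {
  if ($variables['uid'] != 1 && $variables['type'] == 'article' && $variables['view_mode'] == 'teaser') {
    $variables['element']['hide_admin_field_group'] = TRUE;
  }
}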

Mar 26 2015

Submitted by ansondparker on Thu, 03/26/2015 - 12:09

I recently had to add a new content type to an existing tabled view... the thing was that one content type had an extra field to define a link... fine.. I figured I could go in and use Views PHP.... it's a hacky solution, but a few lines and I'd be out the door... curiosity got the better of me... I know Views PHP is a bit déclassé, so for good measure I googled for a solution, and pretty quickly stumbled on Views Conditional... the documentation is clear, but I figure a few screen shots won't hurt, and may encourage the next nerd looking for a better way to set either-or's in their views...

Steps to get some variance in your views

  • Add the conditional view field after adding the fields you want to show.
  • The UI for the conditional view field is really straightforward - tokens from previous rows are available... just exclude them from the view and use them here... it's absolutely obvious :)
  • Conditional field outputs... Ya! And that's about it... sure - we could've done that a bunch of ways... but with a clean UI like that there's no reason to... and if you really want to, you can have conditions based on the conditions... bonus.
Mar 26 2015

Drupal 8 is coming (cue suspenseful music)! One of the HUGE things with Drupal 8 is this term called "headless Drupal". Don't worry, the Drupal drop isn't in any danger of losing precious head real estate; rather, headless Drupal refers to using a different client-side (front-end) framework than the back-end Drupal framework. From what I have read, the goal is to have Drupal become the preferred back-end content management system.

Our curiosity has been piqued, and we have started looking into the various JavaScript frameworks that are available. As of now, the one piquing our interest is Angular.js. If you are experimenting, what frameworks do you like? If you don't know where to start, check out the infographic below. Special thanks to Anna Mininkova from Web Design Degree Center for sharing this with us.

Choosing a Javascript Framework. Source: WebDesignDegreeCenter.org

Mar 26 2015

Drupal has all the elements to build a custom content model that can mirror a DITA specialisation. The really exciting thing is that this data model can be built from the UI, without a single line of code. While there are considerable drawbacks to the usability of the resulting interface, the fact that this is a free and open source implementation means that those who have more time than money could use this implementation as a starting point to build a DITA CCMS that accommodates an arbitrary specialisation.

I am really excited about this because for the first time it is now feasible to build a completely free and open source CCMS for DITA, or for any other XML language for that matter. Such a starting point would enable a community of DIY technical writers to bootstrap a full featured open source CCMS. Through small incremental improvements the community could further improve the implementation so that it becomes useful for a broader audience.

Drupal modules

To achieve this I’ve used the following modules:

  • Paragraphs: normally meant to enable rich structured web pages that don’t rely on a WYSIWYG editor. The Paragraph field lets you define a set of predefined field bundles that can be added at a location in the page. Because you can add a paragraph field to a paragraph bundle, this allows you to nest bundles as deep as you need them to go (this is hard to explain, just watch the screencast if this wasn’t clear).

  • Asset: the Asset module was originally conceived to enable reuse of media assets. But because it is possible to create custom assets with any fields you want, this lets you define a series of elements that can be added into a CKEditor field, with an arbitrary content architecture. Currently assets display as block-level elements, but I think it should be possible to define inline elements. As a bonus, the Asset module includes assets by reference, which means you can use this to create arbitrary reusable elements. So Drupal can also do transclusion...

This is fairly complex to express in words, so I’ve included a screencast that shows you how the modules work and what you can do with them.

Video demo

[embedded content]

The video shows:

The paragraphs module and how you can use it to:

  • add a paragraph
  • define what bundles to use
  • create a paragraph bundle
  • add fields to a paragraph bundle

The asset module and how you can use it to:

  • define what assets can be used by a filter type
  • create a new asset
  • add fields to an asset

A prototype for the DITA task content model and how you can use it for:

  • obligatory elements
  • optional elements
  • grouping elements under a top level group
  • choosing between 2 optional elements
  • adding any number of optional elements
  • infinite nesting

Output

The content model, while closely resembling the DITA model, will have some redundancy: sometimes you need an additional level in the form to achieve the proper behavior (e.g. when you can choose between 2 optional elements). Also, in some cases it might not be possible to name the label in conformance with DITA standards because of the expected label format in Drupal.

To get to valid output it would be enough to build an output function that generates an XML file in which each field is encapsulated in a tag pair named after the field, and then run an XSLT transformation on that file to remove redundant elements and to replace incorrectly tagged fields with their proper element tags.

Since we don’t need to move elements around, a simple replacement feature should be enough to generate valid DITA XML. It is easy to imagine a visual interface that would enable someone who is not a developer to build this transformation, through a series of replacement pairs, so that you wouldn’t need a developer to build the content model and to generate the proper output.

Future

So what is next? Should we build such a CCMS? Maybe we could make an example implementation for Lightweight DITA? The only part that is currently missing is the output function and its interface. If there is sufficient interest I will consider asking one of our developers to build the module, and we could build the Lightweight DITA implementation at a code sprint at one of the coming DITA conferences.

Interested in an open source DITA CCMS?

If you would like to use and/or contribute to a free and open source DITA CCMS, fill in our form to help us decide how much time to invest in this project:

Mar 25 2015

Next week, an international conglomeration of awesome folks will convene in Portland, Oregon for a D8 Accelerate-funded sprint on DrupalCI: Modernizing Testbot Initiative.

The main aim of DrupalCI is to rebuild Drupal's current testbot infrastructure (which is currently powered by an aging Drupal 6 site) on industry-standard components such as Jenkins, Docker, and so on, architected to be generally useful outside of Drupal.org. See Nick Schuch's Architecting DrupalCI at DrupalCon Amsterdam blog post for more details.

The goal is to end the sprint with an "MVP" product on production hardware, with integration to Drupal.org, which can be used to demonstrate a full end-to-end D8 core ‘simpletest’ test run request from Drupal.org through to results appearing on the results server.

You can read and subscribe to the sprint hit-list issue to get an idea of who's going to be working on what, and the places where you too can jump in (see the much longer version for more details).

This is a particularly important initiative to help with, since it not only unblocks Drupal 8 from shipping, it also makes available awesome new testing tools for all Drupal.org projects!

Go Team! :)

Mar 25 2015


“Your journey to heaven or hell or oblivion or reincarnation or whatever it is that death holds. … This is the Antechamber of the Mystery…”
Brent Weeks, The Way of Shadows

I suppose this will be the last entry in Aaron Winborn’s blog. This is his dad, Victor Winborn, filling in for him. Aaron entered his final sleep of this life on March 24, 2015. His transition was peaceful, and he was at peace with himself.
Aaron was the oldest of six natural siblings and two step-brothers. He was convinced, of course, that I was doing the father thing all wrong and that he could do far better on his own. He finally got to try to prove it when he gained two daughters of his own. Unfortunately, he was diagnosed with ALS shortly after the youngest was born. We will never know how successful he would have been with teenage daughters, but if the first few years of life with the girls are any indication, he would have been an outstanding father.

His mom, Lynn, no doubt carries the most memories of his infancy and childhood, though there is the interesting story or two that Aaron carried with him to the end. Such as the time his friend, Buddy (remember the My Buddy dolls?) became so dirty that his mom insisted Buddy needed a bath. Aaron was having a panic attack over Buddy drowning in the washer, so his mom invited him to sit in front of our front loader and watch his playmate float past the window in the door of the machine. Things were going great and Aaron was pretty sure his friend must be enjoying the swim, when Buddy suddenly exploded and his innards filled the machine with cotton-like blood. This was such a vivid evisceration that Aaron carried that vision, burned into his retina, to the end.

At twelve, we let him fly to Houston to live for a few months with his aunt and uncle, Diana and Les. He learned lots of useful things there, such as that you can’t just get your hair wet to prove to your more than astute aunt that you washed it. You actually have to put a little of the shampoo on your head so she can smell the strawberry scented perfume in it. His Uncle Les was one of his favorite people in the world. Les is the most captivating professor that either of us ever took a class from. In college, Les’ classes on Vietnam history, Film history, and Black history gave Aaron a yearning to teach.

Aaron had innumerable interests and was talented in most all of them. He left home shortly after high school in order to spend time with Paul Solomon (Fellowship of the Inner Light) at Paul’s retreat, Hearthfire. A year or so later, we learned that he had become personal assistant for Elizabeth Kubler Ross (On Death and Dying). Then we learned that he had moved to a commune in England, Mickleton House in Mickleton Village, Gloucestershire, UK. While there he worked in construction and kissed a girl for the first time.
When he finally returned home, Aaron and I were able to work together for several years. As we drove from job to job, we talked about books, poetry, religion, history, philosophy, technology, and science. We frequently didn’t agree on many topics, but we had eerily similar interests. That made it both fun and frustrating to carry on extended conversations. Of all my kids, Aaron was at the same time, both the most like me and the least like me. I often wondered how that worked. While he may have inherited a lot from his old dad, he always had a mind of his own.

One of those extended conversations had to do with both the purpose and destiny of the soul. I had homed in on the concept that a person was a soul, consisting of physical body and immaterial spirit. At the time, Aaron had no problem with that concept. Where we departed ways was in the purpose. He felt that we were in this physical realm in order to overcome physicality and to develop the spirit – at all odds with the physical pressures against the concept. My belief was that the spirit came to this world in order to learn to master the physical. The final take of our argument was that since he was the younger and had more years ahead of him than I did, I hoped that he would somehow find the real answer and share it with me. Now that Aaron has passed into or through the “Antechamber of the Mystery”, if he has any consciousness at this point, he knows the answer. Unfortunately, the veil separating us will prevent him from giving me the answer. I’ll have to go find it for myself.

The yearning to teach that his Uncle Les had instilled in him was finally fulfilled when he was offered the opportunity to teach at School in the Community in Carrboro, North Carolina. This is where he learned of his love for Web development. During this period, he attended a retreat at the Mid-Atlantic Buddhist Association outside St. Louis, and managed to fulfill a 30-day vow of silence. Any of you who knew Aaron while he had a voice, know that the experience must have been torture for him. Shortly after that, he met the love of his life, Gwen Pfeifer. They then moved to Willimantic, Connecticut. He taught at Sudbury School about 25 minutes away in Hampton. Hampton has the honor of being the only town in Connecticut without a stoplight. Their oldest daughter, Ashlin, was born while he worked there.

A few short years later, he was able to fulfill his next big dream of professional Web development. He became the first employee of Advomatic and ultimately also became an expert in the Drupal content management platform. After they moved to Harrisburg, Pennsylvania he wrote the book, Drupal Multimedia. In Harrisburg, Ashlin (and now Sabina) started attending The Circle School.

The Circle School community has been the core of the most amazing support group that I have ever heard of, much less experienced. For almost four years, they have offered food, help, work, laughs, and tears to the entire family. The co-captains in the Gwen & Aaron’s Helping Hands group have been relentless in their efforts to keep the little community vibrant, healthy, and ever-busy. Every time I have made the trip up to PA to visit, there have been multiple people dropping by to provide meals, take Ashlin to local events, help Aaron with his computer work, or just to offer hugs and hold hands with him.

I think the most important non-family member in their lives has been Michelle, who started out as a Mother’s Helper, and ended up being, besides Gwen, Aaron’s primary caregiver. She has become such an important part of the family, that little Sabina actually believes she is a family member. Michelle just finished her degree in psychology, though; and she and her husband will be moving closer to his work in Baltimore. She will always be a part of Ashlin’s and Sabina’s hearts, and I suspect will stay close to the girls for many years.

As many of you know, Aaron chose to take the path of coming back to this life in the same body that he inhabited here as Jeffrey Aaron Winborn. In other words, he chose to have his body frozen (cryonics) and hopes to have it restored when science has the knowhow to bring it back from the dead and to heal the affliction that ALS laid upon him.

While he was still at home, Aaron and I went through a period of reading many of Robert Heinlein’s books. One especially notable character of his was a curmudgeon named Lazarus Long. Lazarus was the main character of a number of books. I guess I didn’t realize at the time how much an impression this character made on Aaron. He ended up naming his first dog Lazarus Long Winborn. I just called him Grandpuppy. Aaron has been interested in cryonics for a long time, and had been planning on taking advantage of it even before he learned he had ALS. I have always wondered how much influence the Heinlein books had on this decision.

He leaves behind, his beautiful and loving partner, Gwen, and his two wonderful daughters, Ashlin and Sabina. Besides his core family, he leaves behind parents, siblings, aunts & uncles, cousins, a large local community of friends and caregivers, an extended family of coworkers, professional friends, activist friends, fellow ALS patients and survivors, and many more than I could ever know. There must be hundreds of people who have learned to love and honor him. And now comes the hardest moment a father can ever have when he has to say…

Goodbye, my Son. My friend.

Love,
Dad

Mar 25 2015

The NYC Drupal Users Group is holding a Drupal Dev Day conference (#D3NYC15) on April 19th at John Jay College. People are pretty excited about this, and we're looking forward to some of the many ways we'll be able to give back to the community in doing so.

With D3NYC15, we're focusing on getting back to creating a grassroots community-building event right in our hometown. First and foremost, the event is in the 'unconference' format, which is an informal way of polling the attendees and creating meaningful sessions that are relevant to those who have shown up. We anticipate expert-led sessions as well as BoFs.

As part of growing our community, we're planning a Drupal training session in the morning for those new community members. Alex Ross (bleen18) has graciously consented to volunteer his considerable talents to this effort.

In addition to this, we will also have a Drupal mentoring room ('drupal ladders') where a number of Drupalists in NYC have offered to help coach people on contrib work they are doing. Lastly, we're looking forward to our Drupal 8 sprint effort.

While the event is only a day long, it is our hope that D3NYC15 will serve as a launching point for both new and experienced users within the NYC community to help with the D8 issue queue. Even if our community just tackles one item off the queue that day, it will be a big win -- every little bit helps get Drupal 8 to its final release.

If you're in the New York City area, or want to visit, join us for a day of coding and fun. Registration can be done at http://www.eventbrite.com/e/drupal-dev-day-nyc-2015-registration-16240282121, and more info about the camp can be found at www.drupalcamp.nyc. We are very much looking for sponsors to help defray the costs of the camp. The sponsorship packages are modestly priced at $500 and $1000, and in addition to great benefits you get the continued admiration of the NYC Drupal community as well as the knowledge that you are investing both in the NYC Drupal talent pool as well as the advancement of Drupal itself.

Mar 25 2015

DrupalCon Los Angeles is on its way! We’ve got a big week of announcements, and are thrilled to share all the big news with you.

We recently posted news about session and training selections that have just been published. Another part of the preparation process that we particularly enjoy is seeing all the great scholarship and grant applications come in through the door. If we had our way, we’d get every single applicant to DrupalCon — but unfortunately, we do have to pick. We’re proud to announce our lucky grant and scholarship recipients are as follows:

Grants

Emma Karayiannis (emma.maria)

Jeremy Thorson (jthorson)

Bojhan Somers (Bojhan)

Michael Anello (ultimike)

Pere Orga (Pere Orga)

Joël Pittet (joelpittet)

Daniel Wehner (dawehner)

Lauri Eskola (lauriii)

Amit Goyal (amitgoyal)

Frédéric Marand (fgm)

Scholarships

Rachit Gupta (rachit_gupta)

Pedro Cambra (pcambra)

PingYee Yau (cloudbull)

Chakrapani Reddivari (chakrapani)

Heissen López (heilop)

Luis Eduardo Telaya Escobedo (edutrul)

Mar 25 2015

DrupalCamp London wrapped up on March 1st, making it the third large Drupal event in London since DrupalCon London in 2011. Over the past three years, the local events team has learned a great deal about planning a successful DrupalCamp in one of the largest and most diverse cities in the world.

“The tricky part,” said Ben Wilding (kazillian), one of the lead DrupalCamp London organizers, “is getting the core group who will commit to the camp. There are always lots of people who say they'll volunteer, but ultimately, you need a core group of people who will make it happen."

When planning a DrupalCamp, Wilding says there are seven main points to keep in mind.

1. Find your core supporters

“In the years we’ve been planning DrupalCamp London, we've encountered two challenges: informing the local community enough, versus not giving them too much info. Things went slowly in year one since there was a revolving door problem as different people showed up to the planning meetings each month. Ultimately, it boiled down to the same five or six people who turned up at all the meetings, and they became the decision maker group.

“When it comes to planning a camp with your local community, open it up. Get as many local people involved as you can, but don’t be surprised if those numbers dwindle quite rapidly down to a core few. Then, closer to time, engage with the actual community who has volunteered to help out on the weekend-of. We actually had that as a ticket type — you could buy a free volunteer ticket. We probably had about thirty or so, but we capped it. There are always dropouts at the last minute, so we always let a few new people come on board — but we always cap the volunteer tickets.”

2. Define the most important jobs

Wilding says that, when it comes to planning a DrupalCamp, having clearly defined jobs is critical to success.

“We had one person in charge of supporting the volunteers on site,” Wilding says. “That person is responsible for updating everyone on what’s going on and managing the volunteers on the day-of. Our volunteer coordinator does a walk around with the volunteers on the first day at the venue. Another helpful tool he uses is a spreadsheet where he maps volunteers to specific rooms and tasks.

“Especially with a job like managing volunteers, or handling the website, we’ve found that it’s best to make sure that one person owns it, and it’s not too difficult for them to manage."

3. Plan ahead

“Give yourself more time than you think you’ll need — like months and months more time,” advises Wilding. For DrupalCamp London, the team starts more than ten months out on certain elements, such as the website, approaching the venue, and coordinating the team.

“Start anything you can as early as you can,” says Wilding. “The first thing, the hardest thing to get sorted out, is the venue — getting a space to commit for free. Once you have that space, a lot of other things can fall into place if you’re being sensible about it and starting early."

“We get really involved with the whole team about four months out,” Wilding continues. “Our camp is always the last weekend in February, so it’s awkwardly placed where Christmas sits. We do things in October and November, the ball gets rolling, and then Christmas happens and we lose 5 weeks. When planning your camp, look out for anything holiday related in your calendar, like major public holidays. Be wary that you don’t sit back. Plan around it."

4. Use personal networks

One way to get great things at low to no cost is through utilizing your community members’ personal networks.

“The more people you reach out to, the better, and the more people you’ve got looking through their personal contacts, the better,” says Wilding. "Managing speakers and sponsorships takes a lot of time, so it’s best, if possible, to get someone to really own that. It’s something that’s easy to pass around a group of volunteers — you have to have one key person to own it."

Personal networks have helped the London team get keynote speakers, their venue, even their sponsor.

“We’ve been very fortunate that we get the venue in central London for three days straight. It usually costs hundreds of thousands of pounds, but we got it donated.

“It goes back with personal relationships that one of the organizers had with the university — he had done some free lectures promoting Drupal. It’s also about finding the right venue who benefits from the event, too. The computing department at the university we host our DrupalCamp at is very keen to find industry connections for students, so we always donate lots of tickets to their students so they can promote it internally.

“The way it stands now, the university students get the opportunity to meet the best of the best of the Drupal world, and the community benefits from the introduction of new users as well. Housing DrupalCamp London within the city university means they get the benefit of real world enterprise business and technology paired up with their students, which is ideal for them."

5. Be Prepared for Complications

Wilding notes that organizing DrupalCamps can be complicated in unexpected places.

“This year, we had the BBC as a diamond sponsor,” said Wilding. “We had a contact on their end who was great. However, we had to put everything through their legal department and their marketing department. It was a good lesson for us in that, when you get to a certain level of sponsorship, the complications and the amount of work you have to put in to manage those relationships is incredible."

However, there are ways to address normal pain areas and cut down on the headaches for everyone.

“Registration can be pretty difficult,” said Wilding. “Printing names out on lanyards takes ages, and sometimes someone can’t find someone’s badge because God knows where it went, and there are long queues. We managed to make it much easier this year — we had ten volunteers by the doors in the morning for the first two days, and with 500 people coming in, we knew it could take a long time.

“So instead of printing badges, we just let people sign them on their own. For the actual registration process, we used an app called check me in. People RSVP’d for the camp through EventBrite, and so when they walked in, they could check in with the door volunteers, all of whom had the app on their phone. After that, attendees could grab their lanyard, T-shirt, and tote bag. It was so much better than having 150 disgruntled people in a queue at the start of the event."

6. Enjoy your successes

Every DrupalCamp has great moments, and DrupalCamp London was no exception.

“We had Dr. Sue Black as the keynote for DrupalCamp on Saturday, and it was a bit of a different angle than what we usually have,” Wilding said. “We wanted to go beyond the normal Drupal talk for the keynote, so we had Dr. Black come speak about her experiences of promoting women’s engagement with technology — and also saving Bletchley park, too.

"One of Dr. Black's favorite projects that she does and does a lot of fundraising for is called Techmums (http://techmums.co/). She promotes technology amongst mothers in poorer, lower income households, and trains them to use it. One success story was how they trained a mum on how to put an attachment on an email. It was totally new to this woman who was running her own business. Before that, she was sending samples across town… she was sending her son at the end of the day across town, on a bus, with these samples for clients to look at. And this totally changed her life.

“After the talk, Dr. Black had a huge queue of people who wanted to help with her various projects. It was so great that we were able to help our community share knowledge, and hopefully we benefitted the world a bit."

7. Thank your supporters

“I’d like to issue another huge thanks to the volunteers and sponsors again because DrupalCamp wouldn’t happen without them,” said Wilding in conclusion.

“We’ve had a few people step back from organizing this year so we’re looking for new people to get involved and get engaged next year. If you’re interested in helping out, keep an eye on the Twitter account (@drupalcampldn and #dclondon). We’re taking a break for a few months — but keep an eye on Twitter, and the Drupal London User Group. We’ll make some noise in a couple months time, and get people together who want to chat about stepping in and helping out. We’re always looking for new people, and are happy to answer any questions people have."

Mar 25 2015
131 The Job Market in Drupal with Mike Anello - Modules Unraveled Podcast

The current job market in Drupal

  • What does the job market look like in the Drupal space?
  • Let’s talk about the pipeline idea.
    • Experienced developers vs. Junior developers

What to do to start your career

  • If you’re completely new what do you do?
  • If you’re a hobbyist, what do you do?
  • How do you get experience?

From the employer's perspective

  • What should job descriptions look like?

Questions from Twitter

  • Paul Booker
    How should a small business, non-profit, charity, .. find a drupal developer when they need something doing?
  • Damien McKenna
    For someone who knows nothing about web development, what's their best path forward?
  • Chris Hall
    When do you know to ask for/be a Drupal Dev vs a PHP Dev with Drupal experience? Should/is there be a difference?
Mar 25 2015

Wow. That was the first thing I said when I found out that I had been elected to the Drupal Association Board. The next thought was how much trust the Drupal community has put in me. I'm honored to be elected. Thank you. I also want to thank my fellow nominees, who all stepped forward with passion and great ideas. It was a joy to be on the "meet the candidate" discussions with them, and I would have been thrilled had one of them been elected in my place. I'm excited to get to work with the amazing Drupal Association team. I'm not going to lie; one of the reasons I applied for this position was to be able to work with these great people. It is an amazing opportunity for me, and I hope to add my part to pushing the Drupal project and community into the future. Woohoo!

I'm very aware that as one of the two At-large Directors on the board, I've been chosen to represent your voice; the voice of the Drupal community. Please feel free to reach out to me, here on my blog, through Twitter (add1sun), or my Drupal.org profile, to share your thoughts, ideas, and concerns.

Mar 25 2015

Just a couple months ago the title of this post would have sounded crazy to me.

For the last several months I've been working on the CINC module as a way to make my work (and yours) with Drupal configuration better: faster, more predictable, more flexible, etc. Views is one of the most complex types of Drupal configuration, and I've been doing a great job of delaying what I saw as a daunting challenge of figuring out how Views should work with CINC. Even a simple Views export makes it clear that as you click around the Views UI, a lot is happening in the code, and it can be difficult to understand what exactly is going on. Here's an example Views export:

$view = new view();
$view->name = 'random_instructor';
$view->description = '';
$view->tag = 'default';
$view->base_table = 'node';
$view->human_name = 'Random Instructor';
$view->core = 7;
$view->api_version = '3.0';
$view->disabled = FALSE; /* Edit this to true to make a default view disabled initially */
 
/* Display: Master */
$handler = $view->new_display('default', 'Master', 'default');
$handler->display->display_options['use_more_always'] = FALSE;
$handler->display->display_options['access']['type'] = 'none';
$handler->display->display_options['cache']['type'] = 'none';
$handler->display->display_options['query']['type'] = 'views_query';
$handler->display->display_options['exposed_form']['type'] = 'basic';
$handler->display->display_options['pager']['type'] = 'some';
$handler->display->display_options['pager']['options']['items_per_page'] = '1';
$handler->display->display_options['pager']['options']['offset'] = '0';
$handler->display->display_options['style_plugin'] = 'default';
$handler->display->display_options['row_plugin'] = 'node';
/* Sort criterion: Global: Random */
$handler->display->display_options['sorts']['random']['id'] = 'random';
$handler->display->display_options['sorts']['random']['table'] = 'views';
$handler->display->display_options['sorts']['random']['field'] = 'random';
/* Filter criterion: Content: Published */
$handler->display->display_options['filters']['status']['id'] = 'status';
$handler->display->display_options['filters']['status']['table'] = 'node';
$handler->display->display_options['filters']['status']['field'] = 'status';
$handler->display->display_options['filters']['status']['value'] = 1;
$handler->display->display_options['filters']['status']['group'] = 1;
/* Filter criterion: Content: Type */
$handler->display->display_options['filters']['type']['id'] = 'type';
$handler->display->display_options['filters']['type']['table'] = 'node';
$handler->display->display_options['filters']['type']['field'] = 'type';
$handler->display->display_options['filters']['type']['value'] = array(
  'instructor' => 'instructor',
);
 
/* Display: Block */
$handler = $view->new_display('block', 'Block', 'block');

That view shows one random instructor node in a block. And if you read the code for a few minutes, you can probably figure that out. But I don't know anyone who would attempt to write that code from scratch. So the only way to create a view like that is to click around in the Views UI. As impressive as the Views UI is, clicking a dozen times and finding the few options I care about among the dozens I don't is still an incredibly tedious way to say "show one random instructor node in a block." Ideally I would say that with code that looks more like show_one_random_instructor_node_in_a_block(); I should be able to say what I want and move on to thinking about more interesting problems, without slowing down on implementation details for simple needs.

As of the beta 2 release, CINC has Views integration, and it's wonderful. It's not quite as simple as show_one_random_instructor_node_in_a_block(), but here's how I'm currently creating that Views block:

CINC::init('View')->machine_name('random_instructor')
  ->set('human_name', 'Random Instructor')
  ->set_row_style('node')
  ->set_view_mode('teaser')
  ->add_filter('published')
  ->add_node_type_filter('instructor')
  ->limit_items(1)
  ->add_sort('random')
  ->add_block_display()
  ->create();

So far that's only touching a small subset of what Views can do. I'll continue refining CINC Views toward this "say what you want and move on" ideal. But now that it's good enough that I'm personally no longer using Views UI at all, I wanted to invite you to the party.

Writing Views configuration from scratch is no longer daunting, and it will only get easier from here. So try it out and submit your own requests to make it even better.

Mar 25 2015

This year we have a variety of presentations for you at DrupalCon LA. These all come out of the hard work we're doing all year round on projects such as Tesla, Syfy, SNL, NBC, Bravo (to name a few), and also within the Drupal community.

Blake Hall
So, what's the deal with all of this javascript? Why would you want to use any of these libraries? How are they different and, maybe more importantly, how are they alike?

Sally Young, Carwin Young, & Wes Ruvalcaba
Front-end web development is evolving fast and selecting the right tools to use and when to use them is key to building successful solutions. Knowing why you might incorporate new techniques and what's a good fit for your needs can be challenging with so many choice available, whilst balancing client needs, team efficiency and code quality.

Greg Dunlap
It's the very first meeting with your shiny new client. A blank slate, the opening steps of a potentially year-plus long project. This is where it all begins: discovery. [...] A lot of devs get thrown into this process with no framework or roadmap of how to manage it. This talk will give them one[...]

Dave Reid
The Drupal 8 Contrib Media Team is making good progress on our master plan for media handling in Drupal 8. We'd like to share what we've done so far, what is left to do, and how everything fits together into the vision we have for D8 Media.

Amber Himes Matz, Joe Shindelar, & Greg Dunlap
Over the years the documentation available on drupal.org has grown, and expanded, and then grown some more. But our tools, policies, and processes for maintaining it haven't always kept up with that unchecked growth. So how can we as a community update the way we do documentation to make it as well structured, thought out, and maintained as the code base it seeks to document?

Marissa Epstein
[...] discuss fundamental principles of cognitive and behavioral psychology, and how they apply to creating smooth user experiences. See the mistakes even intelligent people can make, and how you should handle them. Human-proof your designs by feeding the lizard brain, and help website visitors skip complicated thinking with simple interactions.

Joe Shindelar
The Drupal 8 plugin system provides a set of guidelines and reusable code components that allow developers to expose pluggable functionality within their code and (as needed) support managing these components through the user interface. Understanding the ins and outs of the plugin system will be critical for anyone developing modules for Drupal 8.

Jared Ponchot
Open Source software has been conceived of, created and maintained by distributed teams, and several companies like the one I work for (Lullabot) have formed that leverage this same distributed model for a business. It's a model that seems natural for software development. But what about for other disciplines like design?

Jeff Eaton
Better HTML-focused WYSIWYG tools aren't enough, adding more and more fields to the mix only complicate editors' lives, and the principles of semantic HTML don't solve the deeper problem. The work of content modeling must extend inside the body field, not just wrap around it, and that requires a more holistic approach to the design and architecture of a Drupal site.

Mar 25 2015

Everyone on the staff and Board of the Drupal Association would like to congratulate our newest board member:

Addison Berry.

In addition to congratulating Addison, please join me in thanking the 23 other candidates who put themselves out there in service of Drupal and stood for election. 

This was the fourth election we've held for At-Large board seats at the Drupal Association. This year we had two specific goals for the elections:

  • Increase the diversity of the candidates - Although we only had one female candidate, we saw great success by other measures of diversity. 24 candidates came from 14 different countries - including South American and Asian countries. 
  • Increase voter turnout - We fielded 1,432 votes in this election. Our pool of eligible voters was 159,758, so that means our voter turnout was .89%. This is still low, but a vast improvement over the last election, which saw a .36% turnout. 

Our next steps will be to reach out to the candidates for their evaluation of the elections experience. We also want to hear from you. Please tell us about your experience with the elections process in the comments below so that we can include them in our planning for the 2016 elections.

Flickr photo: Kodak Views

Mar 25 2015
Mar 25

This blog describes how to solve PDOException - SQLSTATE[22003] - Numeric value out of range: 1264 Out of range. You will get this error when you try to store a large integer value in an 'integer' field, because the value exceeds the maximum that the 'integer' field type can hold.

For example, suppose you want to store a phone number in a content type, so you create a new field of type 'integer'. When you store a 10-digit phone number in this field, the "PDOException" error will be shown. A signed MySQL integer column only allows values from -2147483648 to 2147483647, so most phone numbers simply don't fit in an 'integer' field. MySQL allocates 4 bytes for the 'integer' field type, i.e.:

4 bytes = 4 x 8 = 32 bits
2^32 = 4294967296 possible values
Unsigned: 0 to 4294967295 (that is, 0 to 2^32 - 1)
Signed: -2^(32 - 1) to 2^(32 - 1) - 1, that is, -2147483648 to 2147483647

If you want to store a larger integer value, you need to use either a 'bigint' or a 'text' field. If you choose 'bigint', there is no need to add custom validation for the phone number field. On the other hand, if you choose a 'text' field, you need to add custom validation using either hook_form_alter() or hook_form_FORM_ID_alter().

<?php
/**
 * Implements hook_form_alter().
 */
function kf_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'your_form_id') {
    // Add a custom validation handler.
    $form['#validate'][] = 'kf_test_form_validate';
  }
}

/**
 * Custom validation handler: make sure the phone number is numeric.
 */
function kf_test_form_validate($form, &$form_state) {
  if (isset($form_state['values']['field_phone_number'][LANGUAGE_NONE][0]['value'])) {
    $phone = $form_state['values']['field_phone_number'][LANGUAGE_NONE][0]['value'];
    if (!ctype_digit($phone)) {
      form_set_error('field_phone_number', t('Please enter a valid phone number.'));
    }
  }
}
?>

ctype_digit() checks whether all of the characters in a given string are digits, so it stops users from storing non-numeric values as the phone number.
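
As a quick illustration (plain PHP, nothing Drupal-specific assumed here), ctype_digit() behaves like this:

<?php
// ctype_digit() only returns TRUE when every character is a decimal digit.
var_dump(ctype_digit('0123456789'));   // bool(true)
var_dump(ctype_digit('012-345-6789')); // bool(false) - dashes are not digits
var_dump(ctype_digit(''));             // bool(false) - an empty string fails too
?>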

Mar 25 2015
Mar 25

If you know the file id, it is really simple.
Code:

$file = file_load($fid);
file_move($file, 'public://new_file_name');

How it works:

We need a source file object in order to move the file to the new location and update the file's database entry. Moving a file is performed by copying the file to the new location and then deleting the original. To get the source file object, we can use the file_load() function.
file_load($fid) - loads a single file object from the database
$fid - file id
Returns an object representing the file, or FALSE if the file was not found.

After that, we can use the file_move() function to move the file to the new location and delete the original file.

file_move($file, 'public://new_file_name')

Parameter 1 - the source file object.
Parameter 2 - a string containing the destination that $source should be moved to. This must be a stream wrapper URI.
Parameter 3 - the replace behaviour (default value: FILE_EXISTS_RENAME).

For more details, check here.
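
Putting it all together, a slightly fuller sketch might look like the following - the FILE_EXISTS_REPLACE behaviour and the messages are illustrative additions, not part of the original snippet:

<?php
$file = file_load($fid);
if ($file) {
  // Copy the file to the new URI, update the database record and delete the
  // original. FILE_EXISTS_RENAME is the default behaviour; FILE_EXISTS_REPLACE
  // is used here just as an example.
  $moved = file_move($file, 'public://new_file_name', FILE_EXISTS_REPLACE);
  if ($moved) {
    drupal_set_message(t('File moved to @uri.', array('@uri' => $moved->uri)));
  }
  else {
    drupal_set_message(t('The file could not be moved.'), 'error');
  }
}
?>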
 

Mar 25 2015
Mar 25

So I just came back from European Drupal Days in Milan. I had great fun at the event, it was well organized and filled with interesting talks. I'll be sure to attend it next year too!

Image by Josef Dabernig

Below are the slides for my talk about Drupalgeddon and Drupal security in general. I got a lot of interesting questions afterwards, so thanks everybody for attending!

Mar 25 2015
Mar 25

Originally published on February 23rd, 2015 on NTEN.org. Republished with permission.

For the last 15 years or so, we’ve seen consistent growth in nonprofits’ appreciation for how open source tools can support their goals for online engagement. Rarely do we run across an RFP for a nonprofit website redesign that doesn’t specify either Drupal or WordPress as the preferred CMS platform. The immediate benefits of implementing an open source solution are pretty clear:

  • With open source tools, organizations avoid costly licensing fees.

  • Open source tools are generally easier to customize.

  • Open source tools often have stronger and more diverse vendor/support options.

  • Open source platforms are often better suited for integration with other tools and services.

The list goes on… And without going down a rabbit hole, I’ll simply throw out that the benefits of open source go well beyond content management use cases these days.

But the benefits of nonprofits supporting and contributing to these open source projects and communities are a little less obvious, and sometimes less immediate. While our customers generally appreciate the contributions we make to the larger community in solving their specific problems, we still often get asked the following in the sales cycle:

"So let me get this straight: First you want me to pay you to build my organization a website. Then you want me to pay you to give away everything you built for us to other organizations, many of whom we compete with for eyeballs and donations?"

This is a legitimate question! One of the additional benefits of using an open source solution is that you get a lot of functionality "for free." You can save budget over building entirely custom solutions with open source because they offer so much functionality out of the box. So, presumably, some of that saving could be lost if additional budget is spent on releasing code to the larger community.

There are many other arguments against open sourcing. Some organizations think that exposing the tools that underpin their website is a security risk. Others worry that if they open source their solutions, the larger community will change the direction of projects they support and rely upon. But most of the time, it comes down to that first argument:

"We know our organization benefits from open source, but we’re not in a position to give back financially or in terms of our time."

Again, this is an understandable concern, but one that can be mitigated pretty easily with proper planning, good project management, and sound and sustainable engineering practices.

Debunking the Myths of Contributing to Open Source

Myth #1: "Open sourcing components of our website is a security risk."

Not really true. Presumably the concern here is that if a would-be hacker were to see the code that underlies parts of your website, they could exploit security holes in that code. While yes, that could happen, the chances are that working with a software developer who has a strong reputation for contributing to an open source project is pretty safe. More importantly, most strong open source communities, such as the Drupal community, have dedicated security teams and thousands of developers who actively review and report issues that could compromise the security of these contributions. In our experience, unreviewed code and code developed by engineers working in isolation are much more likely to present security risks. And on the off chance that someone in the community does report a security issue, more often than not, the reporter will work with you, for free, to come up with a security patch that fixes the issue.

Myth #2: "If we give away our code, we are giving away our organization’s competitive advantage."

As a software vendor that’s given away code that powers over 45,000 Drupal websites, we can say with confidence: there is no secret sauce. Trust me, all of our competitors use Drupal modules that we’ve released - and vice versa.

By leveraging open source tools, your organization can take advantage of being part of a larger community of practice. And frankly, if your organization is trying to do something new, something that’s not supported by such a community, giving away tools is a great way to build a community around your ideas.

We’ve seen many examples of this. Four years ago, we helped a local nonprofit implement a robust mobile mapping solution on top of the Leaflet Javascript library. At the time, there wasn’t an integration for this library and Drupal. So, as part of this project we asked the client to invest 20 hours or so for us to release the barebones scaffolding of their mapping tool as a contributed Drupal module.

At first, this contributed module was simply a developer tool. It didn’t have an interface allowing Drupal site builders to use it. It just provided an easier starting point for custom map development. However, this 20 hour starting point lowered the cost for us to build mapping solutions for other clients, who also pitched in a little extra development time here and there to the open source project. Within a few months, the Leaflet module gained enough momentum that other developers from other shops started giving back. Now the module is leveraged on over 5,700 websites and has been supported by code contributions from 37 Drupal developers.

What did that first nonprofit and the other handful of early adopters get for supporting the initial release? Within less than a year of initially contributing to this Drupal module, they opened the door to many tens of thousands of dollars worth of free enhancements to their website and mapping tools.

Did they lose their competitive advantage or the uniqueness of their implementation of these online maps? I think you know what I’m gonna say: No! In fact, the usefulness of their mapping interfaces improved dramatically as those of us with an interest in these tools collaborated and iterated on each other’s ideas and design patterns.

Myth #3: "Contributing to an open source project will take time and money away from solving our organization’s specific problems."

This perception may or may not be true, depending on some of the specifics of the problems your organization is trying to solve. More importantly, this depends upon the approach you use to contribute to an open source project. We’ve definitely seen organizations get buried in the weeds of trying to do things in an open source way. We’ve seen organizations contribute financially to open source projects on spec (on speculation that the project will succeed). This can present challenges. We’ve also seen vendors try to abstract too much of what they’re building for clients up front, and that can lead to problems as well.

Generally, our preferred approach is to solve our clients’ immediate problems first, and then abstract useful bits that can be reused by the community towards the end of the project. There are situations when the abstraction, or the open source contribution, needs to come first. But for the most part, we encourage our clients to solve their own problems first, and in so doing provide real-life use cases for the solutions that they open source. Then, abstraction can happen later as a way of future-proofing their investment.

Myth #4: "If we open source our tools, we’ll lose control over the direction of the technologies in which we’ve invested."

Don’t worry, this isn’t true! In fact:

Contributing to an open source project is positively selfish.

By this I mean that by contributing to an open source project, your organization actually gets to have a stronger say in the direction of that project. Most open source communities are guided by those that just get up and do, rather than by committee or council.

Our team loves the fact that so many organizations leverage our Drupal modules to meet their own needs. It’s great showing up at nonprofit technology conferences and having folks come up to us to thank us for our contributions. But what’s even better is knowing that these projects have been guided by the direct business needs of our nonprofit clients.

How to Go About Contributing to Open Source

There are a number of ways that your nonprofit organization can contribute to open source. In most of the examples above, we speak to financial contributions towards the release of open source code. Those are obviously great, but meaningful community contributions can start much smaller:

  • Participate in an open source community event. By engaging with other organizations with similar needs, you can help guide the conversation regarding how a platform like Drupal can support your organization’s needs. Events like Drupal Day at the NTC are a great place to start.

  • Host a code sprint or hackathon. Sometimes developers just need a space to hack on stuff. You’d be surprised at the meaningful connections and support that can come from just coordinating a local hackathon. One of our clients, Feeding Texas, recently took this idea further and hosted a dedicated sprint on a hunger mapping project called SNAPshot Texas. As part of this sprint, four developers volunteered a weekend to help Feeding Texas build a data visualization of Food Stamp data across the state. This effort built upon the work of Feeding America volunteers across the country and became a cornerstone of our redesign of FeedingTexas.org. Feeding Texas believes so strongly in the benefits they received from this work that they felt comfortable open sourcing their entire website on GitHub.

Of course, if your organization is considering a more direct contribution to an open source project, for example, by releasing a module as part of a website redesign, we have some advice for you as well:

  • First and foremost, solve your organization’s immediate problems first. As mentioned earlier in the article, the failure of many open source projects is that their sponsors have to handle too many use cases all at once. Rest assured that if you solve your organization’s problems, you’re likely to create something that’s useful to others. Not every contribution needs to solve every problem.

  • Know when to start with abstraction vs. when to end with abstraction. We have been involved in client-driven open source projects, such as the release of RedHen Raiser, a peer-to-peer fundraising platform, for which the open source contribution needed to be made first, before addressing our client’s specific requirements. In the case of RedHen Raiser, the Capital Area Food Bank of Washington, DC came to us with a need for a Drupal-based peer-to-peer fundraising solution. Learning that nothing like that existed, they were excited to help us get something started that they could then leverage. In this case, starting with abstraction made the most sense, given the technical complexities of releasing such a tool on Drupal. However, for the most part, the majority of open source contributions come from easy wins that are abstracted after the fact. Of course, there’s no hard and fast rule about this - it’s just something that you need to consider.

  • Celebrate your contributions and the development team! It might sound silly, but many software nerds take great pride in just knowing that the stuff they build is going to be seen by their peers. By offering to open source even just small components of your project, you are more likely to motivate your development partners. They will generally work harder and do better work, which again adds immediate value to your project.

In conclusion, I hope that this article helps you better understand that there’s a lot of value in contributing to open source. It doesn’t have to be that daunting of an effort and it doesn’t have to take you off task.

Mar 24 2015
Mar 24

Before recently settling into the path of using Vagrant + Ansible for site development (speaking of Ansible: I absolutely love it and need to blog about some of my fun with that), I had been using Acquia Dev Desktop. Even now, I'll use it from time to time since it is easy to work with.

However, I don't like using the version of drush it comes packaged with and prefer using my own version of drush via composer. I have a slew of reasons for this, but ultimately I am able to easily switch versions of drush via composer, so I can work with older sites or use drush HEAD to work on drush issues. In either case, trying to run your own version of drush with Dev Desktop (like running drush sql-sync) may lead to the following or similar errors:


Error: no database record could be found for target @self

And if you look inside the settings.php file, you will find something like the following:


//<DDSETTINGS>
// Please don't edit anything between <DDSETTINGS> tags.
// This section is autogenerated by Acquia Dev Desktop.
if (isset($_SERVER['DEVDESKTOP_DRUPAL_SETTINGS_DIR']) && file_exists($_SERVER['DEVDESKTOP_DRUPAL_SETTINGS_DIR'] . '/loc_MYSITE_dd.inc')) {
  require($_SERVER['DEVDESKTOP_DRUPAL_SETTINGS_DIR'] . '/loc_MYSITE_dd.inc');
}
//</DDSETTINGS>

Basically, Dev Desktop now moves database settings (plus anything else Acquia-specific) to this include file, and it gets included as long as the settings directory is present in the server variable. From further investigation, I found that (at least on OS X) the directory can be found at $HOME/.acquia/DevDesktop/DrupalSettings. Dev Desktop's Apache adds this setting, as does Dev Desktop's Drush, but your custom version of drush will not. To resolve the issue, we define DEVDESKTOP_DRUPAL_SETTINGS_DIR as an environment variable. Drush will pick up environment variables and add them to $_SERVER, which is exactly what the settings.php snippet above relies on. Add the following to your ~/.bash_profile file (or to any of the other files specified in this drush documentation):


export DEVDESKTOP_DRUPAL_SETTINGS_DIR="$HOME/.acquia/DevDesktop/DrupalSettings"

And voila! Your favourite version of Drush will now work with Dev Desktop.

Mar 24 2015
Mar 24

Out of the box, Drupal offers only a single type of validation for fields; required or not required. For most use cases this is fine, however, it can be a little difficult to define your own custom validation logic. What if you need to validate a field value and make sure it's unique?

The Field validation module allows you to define custom validation rules using Drupal's administration interface. The module ships with a lot of its own validators: "Plain text", "Specific value(s)" and much more.

If none of the validators meets your requirements, you can write your own by implementing a validator plugin. Because validators are Ctools plugins, they are easy to maintain, as each one gets its own file.

In this tutorial, we'll use Field validation to validate a field value and make sure it's unique.

Getting Started

Before we begin, download Field validation and Ctools, then enable "Field Validation UI".

If you use Drush, run the following command:

drush dl field_validation ctools
drush en field_validation_ui

Unique Field Value

The module comes with one useful validator which checks if a field value is unique. Of course, you could implement this type of validation with custom code, but Field validation can handle it with just a few easy clicks.
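
For comparison, here is roughly what the custom-code route might look like in Drupal 7. This is only a sketch - the module name, field name and form ID below are hypothetical - but it shows how much boilerplate the module saves you:

<?php
/**
 * Implements hook_form_FORM_ID_alter() for article_node_form.
 */
function mymodule_form_article_node_form_alter(&$form, &$form_state) {
  $form['#validate'][] = 'mymodule_unique_id_number_validate';
}

/**
 * Validation handler: make sure field_id_number is unique across articles.
 */
function mymodule_unique_id_number_validate($form, &$form_state) {
  if (empty($form_state['values']['field_id_number'][LANGUAGE_NONE][0]['value'])) {
    return;
  }
  $value = $form_state['values']['field_id_number'][LANGUAGE_NONE][0]['value'];
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'article')
    ->fieldCondition('field_id_number', 'value', $value);
  // When editing an existing node, don't count the node itself.
  if (!empty($form_state['node']->nid)) {
    $query->propertyCondition('nid', $form_state['node']->nid, '<>');
  }
  if ($query->count()->execute() > 0) {
    form_set_error('field_id_number', t('ID number must be unique.'));
  }
}
?>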

I've created a field called "ID number" which I've attached to the Article content type. Let's now configure a validator which will be used to make sure the field value is unique.

Fig 1.0

1. Click on Structure from the toolbar, click on "Field Validation" and "Add".

2. In the Name field, enter "id_number_validation"; in the "Rule name" field, enter "ID number validation".

Fig 1.5

3. From "Entity type" select Node, from "Bundle" select Article and from "Field name" select "ID number" (the field we want to validate.)

Because we added the field to the Article content type, we selected Node from the "Entity type" and Article from the Bundle. If this field was on the user entity then you would select User as the "Entity type" and Bundle.

4. Finally, select value from the Column drop-down.

Fig 1.6

At this point, we've given the validation rule a name and selected which field we want to validate. Let's now configure the validator.

5. Click on the Validator drop-down and select "Unique values".

Fig 1.1

You can further configure the validator from the field-set, below the drop-down. For now leave it as is.

6. The final thing we need to do is create a validation message. Enter in "[field-name] must be unique." into the "Custom error message" field. You may have noticed that I used "[field-name]" in the message. You can use certain tokens in your messages. Click on "Replacement patterns" to see which ones are available.

Fig 1.2

Now if you try and create two articles with the same "ID number" value you should see "ID number must be unique.".

Fig 1.3

Features Integration

You may be thinking to yourself, this is all good but can I export the configuration into code? Yes you can. Field validation has complete Features integration.

Just select your custom validation from the "Field validation" component and add it to your feature.

Fig 1.4

Summary

I've only shown you one small bit of what Field validation can really do. But trust me, spend some time playing with it and you'll be amazed how powerful it is.

I have to mention that there's a sub-module called "Field validation extras". This sub-module ships with more specific validators, such as "Postal code", "Color (HTML5)" and many more. Make sure you review this sub-module when searching for all available validators.

Mar 24 2015
Mar 24

Submitted by ansondparker on Tue, 03/24/2015 - 15:31

Tableau is a helluva tool.  For data visualization it is at the top of its game... we recently had a 3 day seminar at UVa regarding Tableau, and I was stoked to see that they use Drupal heavily for their portal design.  

While I don't have the $$'s to run Tableau Server right now, it's a cool tool and something to consider...  Anyhow - here's a quick look at Tableau + Drupal + media tableau.  The upshot is that by integrating at the database level you can use views to display all of your workbooks... if all you want is to embed some visualizations in your content this isn't necessary as tableau provides iFrames for workbooks too... to use that just install media tableau, integrate with your wysiwyg editor of choice and move on... I wanted to test it all, so here's the kitchen sink :)

Here are the steps I've taken for the complete integration with views etc... if you just want to embed some tableau views in drupal skip down a bit :)

  • A windows box to run the tableau server - not having one handy I used virtual box to get started...
  • Download and install Tableau Server - then follow http://onlinehelp.tableau.com/current/server/en-us/adminview_postgres_ac... to allow 3rd party access to the database 
  • Since I wanted to test it more easily I also installed WampServer and added PostgreSQL integration (remember to copy libpq.dll from wamp\bin\php\phpX.X.X to wamp\bin\apache\Apache2.2*\bin) and built Drupal on the same virtual box. I probably could've taken the time to learn more about bridging over VMs... whatever - the goal was just getting Postgres + WampServer happy together.
  • Apache and Tableau are both going to be asking for port 80 - change one or both... Tableau has its config manager, Apache has its httpd.conf... 
  • You'll need to add the new Tableau Postgres database to your settings.php file... this is pretty straightforward (see the sketch after this list for what that entry might look like).
  • Once you've configured Tableau Server, go to the admin panel and point to your server. NOTE!!!! this is only important if you want to use Views to display content... actually about half of these steps only matter if you want to use Views or have direct access to the tables in Postgres.... if you don't need that, you can just embed workbooks through the media embed tool....
  • Now you should see several views available - http://yoursite.com/tableau should show all of the workbooks and views available in Tableau in a Drupal view :) There are a bunch of relationships available in the view, and I imagine this can be extended pretty easily... working with EVA or some such should be pretty straightforward. I also really like how the tags are ingested.... Haven't tested with the Search API yet... any examples out there?
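
Here's a rough sketch of what that extra database entry in settings.php might look like on Drupal 7. The connection key, database name, credentials, host and port below are assumptions for illustration only - use whatever your Tableau Server's repository database is actually configured with:

// Added to sites/default/settings.php, alongside the default MySQL entry.
$databases['tableau']['default'] = array(
  'driver' => 'pgsql',
  'database' => 'workgroup',
  'username' => 'readonly',
  'password' => 'your-password',
  'host' => '127.0.0.1',
  'port' => '8060',
  'prefix' => '',
);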

Working with Tableau Media Embed...  (note: this is hella simple)

  • Install the tableau and media tableau modules. In my use case I also found the WYSIWYG MediaEmbed module to be necessary... this is an easy install, just remember to install the plugin so that /sites/all/modules/wysiwyg_mediaembed/plugin/mediaembed/plugin.js exists.... I'm using the CKEditor library with the WYSIWYG editor also, fwiw. Then enable the media embed button in your WYSIWYG profile.
  • Once media embed is installed, go to the Tableau Server side and grab a share for a view - this gives you the embed code to use in Drupal.
  • Then it's just back over to Drupal to paste the embed.

That's it!  Note to those who may follow - I had no luck getting the media field in a content type to accept Tableau links... there are notes in there that point to it working... but I never was successful... so it works fine in the WYSIWYG, but as an actual media field - no bueno.  If you have any questions please feel free to ask away...

Mar 24 2015
Mar 24

Supporting the release of Drupal 8 through funding

Have you heard about the Drupal 8 Accelerate fund? The Drupal Association is collaborating with Drupal 8 branch maintainers to provide grants for those actively working on Drupal 8, with the goal of accelerating its release.

Here at Lullabot, we can’t imagine a more worthy effort, and we’ve been chipping in to get D8 shipped for quite some time now. Along with community contributions from bots like Dave Reid, Matthew Tift, and Marc Drummond, we’ve also been investing quite a bit to spread the D8 message through Drupalize.Me’s Drupal 8 video tutorials. We even highlighted the D8 Accelerate fund on the Drupalize.Me podcast last December.

Still, we didn’t feel this was enough, so Lullabot, along with Drupalize.Me, just contributed $5,000 to Drupal 8 Accelerate as an anchor donor! We couldn’t be more excited. This money will directly fund grants to the community and help directly pay those who are working to get Drupal 8 out the door.

These grants have already started being awarded, and we’re eager to see continued impact. At Lullabot, we’re thankful for everyone who’s contributing to Drupal 8, and we’d like to encourage people to donate to Drupal 8 Accelerate however they can here.


Mar 24 2015
Mar 24

I recently had the opportunity to explore popular methods of adding captions to images inside the WYSIWYG editor using the setup of WYSIWYG module with CKEditor library. Our two main criteria were Media module integration and a styled caption in the WYSIWYG editor. As I discovered, we couldn’t have both without custom coding which the budget didn’t allow for. Media module integration won out and the File Entity with View Modes method was chosen. 

The modules/methods I reviewed were:

  • CKEditor’s Image2 plugin
  • Caption Filter module
  • JCaption module
  • Image Caption module
  • File entity and view modes method

These are the results of the module and method comparisons. A matrix with various properties of each is at the end of the article.

CKEditor Image2 Plugin:

Enhanced Image (Image2) is a CKEditor plugin meant to replace the ordinary image plugin. It gives captioning capabilities in addition to center alignment and click and drag resize. The easiest way to install it is to use Quiron’s sandbox project and follow the instructions given. The only deviation I needed to make was to rename the module’s directory from ckeditor_image_captions to ckeditor_image2. 

Sample output of a captioned image:

<img alt="This is alt text" height="150" src="http://placehold.it/350x150" width="350">This is my awesome image2 caption

CKEditor’s Image2 plugin modal
Image2’s replacement of default Image modal

Image2 Plugin nicely formats the caption with the image inside the WYSIWYG
What image caption looks like in WYSIWYG with Image2 plugin

Image2 plugin is not formatted in node view.  This must be added.
What image caption with Image2 looks like in node view by default. It could use some styling.

Caption_filter:

Caption_filter is a token replacement method of adding a caption. Captioning is achieved by adding the caption text inside of [caption][/caption] tokens in source view. For example:

[caption align=center]<img alt="This is alt text" height="150" src="http://placehold.it/350x150" width="350" />[/caption]

Sample output of a captioned image:

<div class="caption caption-center">
 <div class="caption-inner" style="width: 350px;">
   <img alt="This is alt text" height="150" src="http://placehold.it/350x150" width="350">This is my awesome caption filter caption
 </div>
</div>

Image shows switching to source view to use caption filter
Using caption filter requires working in Source view or disabling the WYSIWYG editor.

Image shows the image and caption in a wysiwyg.  The raw tokens are displayed in the WYSIWYG.
What the caption looks like in WYSIWYG. Tokens are not replaced in the editor. Caption Filter is probably best used without the use of a WYSIWYG.

Image and caption are nicely styled by default on node view
Default styling of Caption Filter’s captions.

JCaption:

This image caption module uses jQuery to transform the title or alt attribute of an image to an image caption. See the JCaption module page for another comparison of the JCaption, Image Caption and Caption Filter modules.

Sample output of a captioned image:

<span class="caption none" style="width: 350px;">
 <img alt="This is my awesome alt/caption text" src="http://placehold.it/350x150">
 <p>This is my awesome alt/caption text</p>
</span>

Image shows the CKEditor default plugin modal - how a JCaption is added
Adding an image with alt text via the CkEditor default image plugin.

Applying a caption with JCaption - the caption is not visible in the WYSIWYG.
A view of the image in WYSIWYG editor. The caption is not visible in the editor.

Default styling of the image in node view is a simple display of the image followed by the caption underneath
Default styling of the image in node view

Image Caption

With the Image Caption module, an image is added with the usual CKEditor Image plugin, and the caption is added by setting a previously defined class in the Font Style’s dropdown of the WYSIWYG editor. JQuery is used to wrap the image in an html container div and then puts the image title text in a child div underneath the image.

Sample output of a captioned image:

<span class="image-caption-container image-caption-container-none" style="display: inline-block; width: 350px;">
 <img alt="" class="caption caption-processed" src="http://placehold.it/350x150" title="This is the awesome Image Caption" style="width: 350px; height: 150px;">
 <span style="display:block;" class="image-caption">This is the awesome Image Caption</span>
</span>

Image displays without caption in the WYSIWYG when using Image Caption module

Image inserted into the WYSIWYG. Image displays without caption.

Image caption and image as shown in node view with a simple format - unstyled image with caption underneath
Image as seen with Image Caption caption in node view.

 

File Entity and View modes:

This method involves the most amount of setup but integrates well with the Media module. It involves adding a caption field to the image file entity, an image style, and a view mode, and configuring the image file display. A step-by-step account of this method can be found at 58 Bits.

Sample output of a captioned image:

<div class="media media-element-container media-caption"><div id="file-8" class="file file-image file-image-gif contextual-links-region">
  <h2 class="element-invisible"><a href="http://www.mediacurrent.com/file/8">500x400.gif</a></h2>
  <div class="contextual-links-wrapper contextual-links-processed">
    <a class="contextual-links-trigger" href="http://www.mediacurrent.com/">Configure</a>
    <ul class="contextual-links">
      <li class="file-edit first"><a href="http://www.mediacurrent.com/file/8/edit?destination=node/4">Edit</a></li>
      <li class="file-delete last"><a href="http://www.mediacurrent.com/file/8/delete?destination=node/4">Delete</a></li>
    </ul>
  </div>
  <div class="content">
    <img height="176" width="220" class="media-element file-caption" typeof="foaf:Image" src="http://captions.dev/sites/default/files/styles/medium/public/500x400_5.gif?itok=IsKmYwfN" alt="">
    <div class="field field-name-field-caption field-type-text field-label-hidden">
      <div class="field-items">
        <div class="field-item even">This is the awesome File Entity caption</div>
      </div>
    </div>
  </div>
</div>

The image is added to the WYSIWYG via the media browser by choosing the caption view mode created during setup
Adding the image and caption via the media browser. Choose the view mode in the “Display as” field and set the Caption field.

The WYSIWYG with a captioned image - the caption doesn’t display
Image in WYWIWYG - no caption visible.

Image in node view contains no styling - shows just an image with a caption displayed underneath
Image as seen in node view with no styling.

A Comparison of Captioning Modules/Methods.

Comparison matrix of captioning modules/methods:

  • Integrates with Media: Caption Filter; File Entity & View Mode.
  • Caption visible in WYSIWYG: Image2; Caption Filter (although without token replacement).
  • Each instance of the image can have a different caption: Image2; Caption Filter; JCaption; Image Caption.
  • Works with the CKEditor Image plugin: Caption Filter; JCaption; Image Caption (Image2 replaces the Image plugin).
  • Easy to set up: Image2; Caption Filter; JCaption; Image Caption.
  • Source view required: Caption Filter.
  • Additional notes: Image2 also gains image centering capabilities; Caption Filter is nicely styled in node view out of the box without additional CSS; JCaption has many config options, including whether to use alt text or title text for the caption; File Entity & View Mode has the most extensive setup of all the options; Image Caption requires adding a class to the WYSIWYG’s Font Style dropdown.

 

Summary: Which Module Do I Use?!?

Which module is used is really dependent on the requirements of the project. If you require Media module functionality (maintaining a library of images that can be searched and re-used), the only two options are the File Entity - View Mode method and the Caption Filter module. If your editors are savvy and understand html basics, Caption Filter will give you more flexibility around captions, allowing you to reuse images with different captions tailored to the context. If you need to keep it as simple as possible, doing the extra work of setting up the File Entity and View Modes will make it pretty simple for your editors to add pictures, have them automatically resized, and aligned without any additional work.

If the Media module is not needed, Image2, JCaption, and Image Caption are all pretty comparable, although I would lean towards the Image2 plugin simply because it gives the ability to center the image and the caption is visible in the WYSIWYG editor. The one thing I would caution against is in the JCaption module. It gives the option of using either alternative text* or title text as the caption. It’s generally not good practice to simply duplicate the alt text (you are adding alt text to your images, right?) for a caption. Both the caption and the alt text will be read by screen readers and will be redundant for the user. Using title text as caption text may be less problematic since most screen readers do not read title text by default.

*Alternative text (or alt text) is text that describes an image. It gives people who are blind and use screen readers and people who have images turned off due to low bandwidth a description of the image that they can’t see. It is good practice to include alt text for images.

Additional Resources

Guide to Drupal Terminology | Mediacurrent Blog post
Conditionally Open a Link in a New Tab in Views Without the PHP Filter | Mediacurrent Blog Post

Mar 24 2015
Mar 24

A few years ago, I was looking for a quick and easy way to run OpenID on a small web server.

A range of solutions were available but some appeared to be slightly more demanding than what I would like. For example, one solution required a servlet container such as Tomcat and another one required some manual configuration of Python with Apache.

I came across the SimpleID project. As the name implies, it is simple. It is written in PHP and works with the Apache/PHP environment on just about any Linux web server. It allows you to write your own plugin for a user/password database or just use flat files to get up and running quickly with no database at all.

This seemed like the level of simplicity I was hoping for so I created the Debian package of SimpleID. SimpleID is also available in Ubuntu.

Help needed

Thanks to a contribution from Jean-Michel Nirgal Vourgère, I've just whipped up a 0.8.1-14 package that should fix Apache 2.4 support in jessie. I also cleaned up a documentation bug and the control file URLs.

Nonetheless, it may be helpful to get feedback from other members of the community about the future of this package:

  • Is it considered secure enough?
  • Are there other packages that now offer such a simple way to get OpenID for a vanilla Apache/PHP environment?
  • Would anybody else be interested in helping to maintain this package?
  • Would anybody like to see this packaged in other distributions such as Fedora?

Works with HOTP one-time-passwords and LDAP servers

One reason I chose SimpleID is because of dynalogin, the two-factor authentication framework. I wanted a quick and easy way to use OTP with OpenID so I created the SimpleID plugin for dynalogin, also available as a package.

I also created the LDAP backend for SimpleID, that is available as a package too.

xjm
Mar 24 2015
Mar 24

The next beta for Drupal 8 will be beta 8! Here is the schedule for the beta release.

Wednesday, March 25, 2015: Drupal 8.0.0-beta8 released. Emergency commits only.
Mar 24 2015
Mar 24

Write more complete Behat test scenarios for both Drupal 7 and Drupal 8.

One of the main goals of BDD (Behaviour Driven Development) is to be able to describe a system's behavior using a single notation that is directly accessible to product owners and developers, and testable using automatic conversion tools.

In the PHP world, Behat is the tool of choice. Behat allows you to write test scenarios using Gherkin step definitions, and it generates the corresponding PHP code to actually run and test the defined scenarios.

Thanks to the excellent Behat Drupal Extension, Drupal developers have been able to enjoy the benefits of Behaviour Driven Development for quite some time.

Essentially the project provides an integration between Drupal and Behat allowing the usage of Drupal-specific Gherkin step definitions. For example, writing a scenario that tests node authorship would look like:

Scenario: Create nodes with specific authorship
  Given users:
  | name     | mail            | status |
  | Joe User | [email protected] | 1      |
  And "article" content:
  | title          | author   | body             |
  | Article by Joe | Joe User | PLACEHOLDER BODY |
  When I am logged in as a user with the "administrator" role
  And I am on the homepage
  And I follow "Article by Joe"
  Then I should see the link "Joe User"

Dealing with complex content types

The Gherkin scenario above is pretty straightforward and it gets the job done for simple cases. In a real-life situation, though, it's very common to have content types with a high number of fields, often of different types and, possibly, referencing other entities.

The following scenario might be a much more common situation for a Drupal developer:

Scenario: Reference site pages from within a "Post" node
  Given "page" content:
    | title      |
    | Page one   |
    | Page two   |
    | Page three |
  When I am viewing a "post" content:
    | title                | Post title         |
    | body                 | PLACEHOLDER BODY   |
    | field_post_reference | Page one, Page two |
  Then I should see "Page one"
  And I should see "Page two"

While it is always possible to implement project-specific step definitions, as shown in this Gist dealing with field collections and entity references, having to do that for every specific content type might be an unnecessary burden.

Introducing field-handling for the Behat Drupal Extension

Nuvole recently contributed a field-handling system that allows the scenario above to be run out of the box, without having to implement any custom step definitions, and it works in both Drupal 7 and Drupal 8. The idea behind it is to let a Drupal developer work with fields when writing Behat test scenarios, regardless of the entity type or of any field-specific implementation.

The code is currently available on the master branches of both the Behat Drupal Extension and the Drupal Driver projects. If you want to try it out, follow the instructions at "Stand-alone installation" and make sure to grab the right code by specifying the right package versions in your composer.json file:

{
  "require": {
    "drupal/drupal-extension": "3.0.*@dev",
    "drupal/drupal-driver": "1.1.*@dev"
  }
}

The field-handling system provides integration with several commonly used field types, including:

Date fields

Date field values can be included in a test scenario by using the following notation:

  • Single date field value can be expressed as 2015-02-08 17:45:00
  • Start and end date are separated by a dash -, for ex. 2015-02-08 17:45:00 - 2015-02-08 19:45:00.
  • Multiple date field values are separated by a comma ,

For example, the following Gherkin notation will create a node with 3 date fields:

When I am viewing a "post" content:
  | title       | Post title                                |
  | field_date1 | 2015-02-08 17:45:00                       |
  | field_date2 | 2015-02-08 17:45:00, 2015-02-09 17:45:00  |
  | field_date3 | 2015-02-08 17:45:00 - 2015-02-08 19:45:00 |

Entity reference fields

Entity reference field values can be expressed by simply specifying the referenced entity's label field (e.g. the node's title or the term's name). Such an approach keeps with BDD's promise: describing the system's behavior while abstracting away, as much as possible, any internal implementation.

For example, to reference a content item with the title "Page one" we can simply write:

When I am viewing a "post" content:
  | title           | Post title |
  | field_reference | Page one   |

Or, in the case of multiple values, titles are separated by a comma:

When I am viewing a "post" content:
  | title           | Post title         |
  | field_reference | Page one, Page two |

Link fields

A Link field in Drupal offers quite a wide range of options, such as an optional link title or internal/external URLs. We can use the following notation to work with links in our test scenarios:

When I am viewing a "post" content:
  | title       | Post title                                              |
  | field_link1 | http://nuvole.org                                       |
  | field_link2 | Link 1 - http://nuvole.org                              |
  | field_link3 | Link 1 - http://nuvole.org, Link 2 - http://example.com |

As you can see, we always use the same pattern: a dash - to separate parts of the same field value and a comma , to separate multiple field values.

Text fields with "Select" widget

We can also refer to a select list value by simply referring to its label, which makes for much more readable test scenarios. For example, given the following allowed values for a select field:

option1|Single room
option2|Twin room
option3|Double room

In our test scenario, we can simply write:

When I am viewing a "post" content:
  | title      | Post title               |
  | field_room | Single room, Double room |

Working with other entity types

Field-handling works with other entity types too, such as users and taxonomy terms. We can easily have a scenario that creates a bunch of users along with their field values by writing:

Given users:
  | name    | mail                | language | field_name | field_surname | field_country  |
  | antonio | [email protected] | it       | Antonio    | De Marco      | Belgium        |
  | andrea  | [email protected]  | it       | Andrea     | Pescetti      | Italy          |
  | fabian  | [email protected]  | de       | Fabian     | Bircher       | Czech Republic |

Contributing to the project

At the moment field-handling is still a work in progress and, while it does support both Drupal 7 and Drupal 8, it covers only a limited set of field types, such as:

  • Simple text fields
  • Date fields
  • Entity reference fields
  • Link fields
  • List text fields
  • Taxonomy term reference fields

If you want to contribute to the project by providing additional field type handlers, you will need to implement this very simple, core-agnostic interface:

<?php

namespace Drupal\Driver\Fields;

/**
 * Interface FieldHandlerInterface
 *
 * @package Drupal\Driver\Fields
 */
interface FieldHandlerInterface {

  /**
   * Expand raw field values in a format compatible with entity_save().
   *
   * @param $values
   *    Raw field values array.
   * @return array
   *    Expanded field values array.
   */
  public function expand($values);

}
?>

If you need some inspiration, check the current handler implementations by inspecting the classes namespaced under \Drupal\Driver\Fields\Drupal7 and \Drupal\Driver\Fields\Drupal8.
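
As a purely illustrative sketch, a handler could look something like the class below. The class name and the expansion logic are hypothetical, and the real handlers in the project may receive extra context (for example, the field definition) through their constructors, so treat this as a starting point rather than a drop-in example:

<?php

namespace Drupal\Driver\Fields\Drupal7;

use Drupal\Driver\Fields\FieldHandlerInterface;

/**
 * Hypothetical handler expanding plain text values for Drupal 7.
 */
class ExampleTextHandler implements FieldHandlerInterface {

  /**
   * {@inheritdoc}
   */
  public function expand($values) {
    $return = array();
    foreach ((array) $values as $value) {
      // Wrap each raw value in the nested array structure that
      // entity_save() expects for a simple text field.
      $return[LANGUAGE_NONE][] = array('value' => $value);
    }
    return $return;
  }

}
?>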

Mar 24 2015
Mar 24

Something that’s seen a resurgence of late is the importance of SEO and site/page ranking (though if your job revolves around content, you’d probably say it never left). It seems like after the initial crush of SEO and SEO specialists in the early-ish days of the web, things like backlink ranking, metadata, alt tags and all those other accessibility tools went the way of the dodo. Gone were the days where people massaged their content to smash in as many keywords as possible, where SEO specialists were getting paid obscene amounts of money to add in <meta> data into your html templates. Design reigned supreme.

But with the sheer amount of information available online, there remains a pressing and consistent need to ensure that your site rises above the fray; you want to be the expert, the forum for discussion, the end-all-be-all. So you’ve put together that lovely little site with the cool hamburger nav, the beautifully optimized content, the frequently updated blog and news sections--what’s next?

One of the key tools that people don’t really use is Google analytics; sure, everyone’s got that tracking code embedded in their templates or footer, and every so often, you’ll go in there and take a look at the cool line graph that tells you...well, nothing really. If you dig deeper into GA, you get an awesome picture of where, when, how, and even why people are visiting your site. You can see which content people like, which content is shared and liked, how often and by whom.

So that’s step one: use GA to its full potential.

They have enough tools to make you cry, but if you make the effort to wade through them, you can not only tailor it to fit your content or website better, you can also make your website a better fit for the web itself. Both Drupal and WordPress make it so easy to add in the GA tracking code that there’s really no excuse to not have it up and running, secreting away all those squirrely little details Google is so fond of.

But more importantly, on the subject of tools, is a wonderful little plugin for WordPress called Yoast. If you’ve worked with WP, have a WP site, or even browse the internet, you’ve probably heard of this plugin, and for good reason...it’s stunningly easy to set up and use. It tells you which pages are awful, how many keywords you’re missing, when your content is poorly optimized, when your images don’t have tags and text, and a number of other issues that take the best stab at Google’s inscrutable SEO rules/ranking criteria as you can get. This plugin is so ubiquitous as to be almost mandatory if you care at all about getting your website seen--the makers claim that Yoast has been downloaded almost 15 million times. So what’s the option for Drupal?

The short answer is, there isn’t one.

Sure, when it comes down to it, you have the same options as WP sites--being customizable is, after all, kind of Drupal’s thing. You can edit your templates to add in your markup, you can install dozens of modules (XML sitemap, pathauto, Google analytics, redirect, metatag, token, page title, SEO checker, SEO checklist...and the list goes on), and you’ll eventually get the same functionality as one WP plugin that takes literally two minutes to add and install. And that’s including the time it takes to actually FTP in.

And I suppose this all fits in with Drupal’s more...Drupal-y...audience. It’s open source--you’re expected to code, to configure, to muck, to install and extend. It’s just one of those things where you shouldn’t have to install so many different, specialized modules to get the functionality you need to optimize. Sure-- there’s even a module called SEO Tools -- but it has a list of 18 additional “recommended/required modules,” and all that gets you is a screen that looks remarkably like the Google analytics dashboard.

I hesitate to use the word “lazy,” when what I really mean is “efficient,” or the even more adept “optimized,” but really--you just spent weeks/months and thousands of dollars getting a site up and running, UX’d, designed, sprinted and agile-d. The final step, the key step to actually getting your site seen, should not be a massive, dozen-module installing, template editing undertaking.

Obviously, the SEO ideal for a fresh or redesigned Drupal site is a thorough and well-planned strategy phase, and if there’s time, budget, interest (and willpower) to structure your content around key SEO strategies, it’s perfect. But if you’ve ever worked on websites, you know that sometimes you just need something that works well out of the box.

WP has Yoast. Where’s Drupal’s “something?” Consider the gauntlet thrown.

Mar 24 2015
Mar 24

Calling this book a "second edition" is more than a little bit curious, since there is no Drush for Developers (First Edition). I can only assume that the publisher considers Juampy's excellent Drush User's Guide (reviewed here) as the "first edition" of Drush for Developers (Second Edition), but that really minimizes this book, as it is so much more than an updated version of Drush User's Guide. It would be like saying The Godfather, Part 2 is an updated version of The Godfather - which is just crazy talk. Drush for Developers is more of a sequel - and (like The Godfather, Part 2) a darn good one at that.

This is not a book about learning all the various super-cool Drush commands available and how awesome Drush is and how it can save you all sorts of time (see Drush User's Guide). This book is for people who already know what Drush does, already have it installed, and probably already use it every day. It doesn't lay the groundwork like a guide for newbies, it takes your average Drush user and "levels them up."

Granted, the first chapter has some necessary introductory material: what is Drush, how do you install it, how can you use it to save the world. Been there, done that. But, it is in the first few pages of the second chapter that Juampy tips his hand - this is really going to be a book about using Drush to create an efficient local-dev-stage-production workflow.

He introduces the topic of the "update path"; a sequence of drush commands that can be used when pushing code (with features) between environments to ensure that the "destination" environment is properly updated to take advantage of the updated code. Turns out this book is actually about creating an efficient workflow - it just happens that Drush is an excellent tool to accomplish the task. For the most part, he starts with a very basic "update path" and then spends the rest of the book iterating on it and improving it in virtually every section.

Along the way, Juampy does a great job of introducing us to a number of intermediate-to-advanced topics. He covers Drupal's registry for module and theme paths, the Features module, cron, Jenkins, Drush and Drupal bootstrapping, and Drupal's Batch API - to name a few. Each of these new topics never seem to be forced - there is always a compelling reason to spend a few pages with each one in order to improve the "update path".

Juampy also does a nice job of covering the basics of creating a custom Drush command, including an entire chapter devoted to error handling and debugging. I especially appreciated his examples of how to create a custom Drush command that calls a number of other Drush commands. This makes it possible to design a custom workflow that can be executed with a minimum of commands.

The book reaches its crescendo in the second-to-last chapter, "Managing Local and Remote Environments". Juampy deftly covers site aliases then quickly integrates them with the "update path" he started in chapter 2, to end up with a super-solid workflow that anyone can adapt for their own purposes.

He wraps things up with a chapter on setting up a development workflow, but by the time I got to this chapter, my mind was already racing with how I could improve my current workflow with the ideas already presented. The information in the last chapter seems incremental in comparison to the rest of the book, but there are great ideas nonetheless. Having a site's docroot at the second level in the Git repo and fine tuning database dumps from production was the icing on the cake.

Personally, I can't wait until I finish writing this review (I'm almost there!) so that I can implement some of Juampy's ideas in some of my client projects. If you feel like you're only using 10% of Drush and want to take it to the next level, buy this book.

I did exchange a few emails with Juampy while I was writing this review. (He did confirm that there is no "Drush for Developers (First Edition)".) I can only hope that the "Second Edition" in the title doesn't dissuade anyone from picking up this book - it is a worthy addition to any Drupal professional's library.

Mar 24 2015
Mar 24

In this tutorial, you'll learn how to set up a local Drupal 8 development environment using the Vagrant Drupal Development (VDD) contributed module, Chef, Vagrant, and VirtualBox.

Vagrant Drupal Development (VDD) is a ready-to-use development environment contained in a virtual machine. Why use it? It provides a standard hosting setup for developing with Drupal. This can get you up and running quickly, without knowing anything about server administration.

The VDD module uses Chef, a configuration management tool that helps automate your infrastructure, and Virtualbox server virtualization software. VDD uses Ubuntu 12.04 LTS Precise Pangolin, a Linux distribution, as a base. LTS is an acronym for Long Term Support. Version 12.04 was released in 2012 and has maintenance support until 2017. There is an issue open for upgrading to Ubuntu 14.04 LTS, although 2017 is a ways off.

One of the best things about the VDD module is that it shares a directory on your local machine with the virtual machine (VM) you use to run Drupal. This allows you to use all of your local development tools, while keeping all the server configuration in the VM!

Getting started

We're going to set up a Drupal 8 environment. Vagrant Drupal Development requires VirtualBox and Vagrant; if you don't have these installed, follow the steps below.

Installing Virtualbox:

Navigate to https://www.virtualbox.org/wiki/Downloads and grab a copy of the installer for your OS/Platform, and install it!

Installing Vagrant:

Make sure you followed the step above and installed VirtualBox. Otherwise, you won't be able to install Vagrant. Navigate to http://www.vagrantup.com/downloads.html and, again, grab a copy of the installer for your OS/Platform, and install it!

If you previously had VirtualBox installed, make sure to back up your current VMs before proceeding with this tutorial.

Clone the VDD repo in to your home directory (~/):


git clone --branch 8.x-1.x http://git.drupal.org/project/vdd.git
cd vdd

VDD automatically creates drush aliases for each site in the config.json file at


~/vdd/config.json

By default, it defines a Drupal 7 and Drupal 8 alias. You can also set the username, email, and password for your Drupal install:


"vdd": {
  "sites": {
    "drupal8": {
      "account_name": "root",
      "account_pass": "root",
      "account_mail": "[email protected]",
      "site_name": "Drupal 8",
      "site_mail": "[email protected]",
      "vhost": {
        "document_root": "drupal8",
        "url": "drupal8.dev",
        "alias": ["www.drupal8.dev"]
      }
    },
    "drupal7": {
      "account_name": "root",
      "account_pass": "root",
      "account_mail": "[email protected]",
      "site_name": "Drupal 7",
      "site_mail": "[email protected]",
      "vhost": {
        "document_root": "drupal7",
        "url": "drupal7.dev",
        "alias": ["www.drupal7.dev"]
      }
    }
  }
}

We can now start provisioning our VM. Once you type the following, go and grab a snack. It takes some time!


vagrant up

Once this is complete, you'll see the following in the terminal:


==> default: =============================================================
==> default: Install finished! Visit http://192.168.44.44 in your browser.
==> default: =============================================================

Visiting http://192.168.44.44 presents you with further instructions. I'll quickly add the bare-bones here so we can move along. But I urge you to visit this link and read more, particularly if you're interested in a Drupal 6 or 7 development environment.

We are going to add a custom entry to our hosts file, which controls your local system's hostname resolution.

In Linux, edit the following file as an administrator in your text editor of choice:


/etc/hosts

In OS X, edit the following file as an administrator in your text editor of choice:


/private/etc/hosts

In Windows, edit the following file as an administrator in your text editor of choice:


C:\Windows\System32\drivers\etc\hosts

I'm only going to add an entry for Drupal 8 here, as the VDD summary page noted above (http://192.168.44.44) describes the entries you may wish to add for other Drupal versions.


192.168.44.44 drupal8.dev
192.168.44.44 www.drupal8.dev

Now we want to grab the latest Drupal 8, using either git or drush. Make sure to use the directory name drupal8 to save us from editing the Apache config:


cd ~/vdd/data/drupal8
git clone --branch 8.0.x http://git.drupal.org/project/drupal.git .

By default, VDD performance can be slow, depending on your system, as it uses shared folders to share a directory on your host system with your virtual machine.

To speed this up, if you're running OS X or Linux, you can use NFS.

If your host system is Debian or Ubuntu (OS X users can skip this step):


apt-get install nfs-kernel-server

Now edit the "synced_folders" section of your config.json file, setting the "type" property from "default" to "nfs":


"synced_folders": [
    {
      "host_path": "data/",
      "guest_path": "/var/www",
      "type": "nfs"
    }
  ],

Now issue the following command to reload your VM configuration, from the VDD root directory on your host system:


vagrant reload

To install Drupal 8 on your new virtual machine, first SSH into it:


vagrant ssh

The drush command 'si' (short for site-install) installs your new Drupal site using the credentials specified in config.json, based on the alias you pass. Since we're installing Drupal 8:


cd ~/sites/drupal8
drush @drupal8 si standard -y

You should now be able to navigate to drupal8.dev or www.drupal8.dev in your web browser, and log in with the credentials you specified in your config.json file.

You can start working on Drupal 8 core, on your local system, at the following location:


~/vdd/data/drupal8

Why not start with Writing a Hello World Module?

If you would like to use MySQL from the command line, you'll want to first SSH into your VM. From the VDD root directory on your host system:


vagrant ssh
cd ~/sites/drupal8

Then, to use MySQL directly (you'll be prompted to type in the password: "root"):


mysql -u root -p drupal8

Or using drush sql-cli:


drush sql-cli

Happy coding everyone!

Mar 24 2015
Mar 24

Many of you who work on projects every day have enjoyed great success with them, but there are also projects that don't achieve their financial goals, slip past their timelines or don't deliver the results they should. In this article, I’ll describe three rules to help you consistently plan for project success. In no case should a project be left to chance, or to the customer or the supplier alone. Here I’ll explain how to control a project from its start to its successful end. You risk driving it into the ground if you don’t stick to the following points:

1) Understand why we have projects

Recently, at a project management booth at an exhibition, I read the following: "Projects are not an end unto themselves." This sentence may seem trivial at first glance, but it incorporates several important truths:

  • After a successfully completed project, something should be different and improved, compared to before
  • Projects start with a fixed – and hopefully clearly defined – goal
  • Afterward, there should be positive changes and the project value should persist

2) Set a clear schedule to control limited resources

Taking the definition of a project into consideration, it can be described as:

  • Unique: we’re not selling a product that’s finished and can be sold and used as is; rather, we’re creating something that doesn’t yet exist in the desired form
  • Fixed scheduled start: unless a project has a clearly defined beginning, the project team will only take it semi-seriously during the course of the work
  • Scheduled end: a clear project end is part of the goal and is defined in terms of both date and content

The fact that there are only limited resources available for a project is self-evident and of utmost importance. This is where a project plan comes into play – as well as a project manager who’ll manage the resources to stick to the plan.

3) Set goals! Make them clear, measurable and known amongst the team

A project follows a vision that consists of one or more goals. These objectives should be formulated in a SMART (Specific, Measurable, Achievable, Realistic, Timely) way and be clearly understood by all project participants, as they can only work towards objectives that are known to them. Goals extend to certain areas, e.g. cost, duration, or benefits. Important in the formulation of objectives: the project requirements are not goals but conditions that must be met before the goal can ever be reached.

All requirements and project milestones should be periodically checked against each objective. This can happen in a weekly retrospective meeting, in a monthly controlling meeting or even in a daily status meeting. The basic conditions and requirements can – and will – change, of course. This is normal. Even the goal, especially of long-running projects, can change if important conditions change. However, this should be done consciously and in a controlled fashion, with the agreement of all involved parties. Under no circumstances should unnoticed and thus uncontrolled changes be accepted. Dealing with changing environments and requirements is exactly what agile project management was made for.

So: stick to your overall goal. If, for instance, you want to create an e-commerce system to attract new customer groups and generate more revenue, but you’re spending days trying to find just the right position for banner ads, you’re forgetting your goals. Set priorities and stick to them during development.

If your goal is higher revenues, better quality or more efficiency, you should provide concrete metrics and a fixed date for those measurements. Otherwise, there will be unfulfilled expectations at the end of the project. What if someone thought 1% more sales was reaching the goal, although the sales manager expected an uplift of 10% – but he just assumed this without clearly communicating his expectation? And, in addition, the sales manager guessed the project would take six months, while the rest of the team planned for 12?

The topic of the next installment in this series will be "Agile projects for a fixed price? Yes you can!"

Mar 24 2015
Mar 24
And how to use them in Drupal

A popular search engine for Drupal is Apache Solr. Although installation and configuration of Solr can be done almost completely via the Drupal Admin UI, in some cases it can be very instructive to see and understand what data is sent to and from Solr when a search is done from Drupal.

The first place to look when using Solr inside Tomcat is Tomcat's log file, usually /var/log/tomcat6/catalina.out. If this file is not present in this directory on your system, use

locate catalina.out

or a similar command to find it.

If Solr is used in its own Jetty container and is run as a separate service (which is the only option for Solr 5.x), log4j is used to implement logging and configuration is done in the log4j.properties file.

By default the logs are written to 'logs/solr' under the Solr root; this can be set in log4j.properties with the 'solr.log' option, for example:

solr.log=/var/solr/logs

For more information about log4j, see Apache Log4j 2.

In the log, each line like the following represents one search query:

INFO: [solrdev] webapp=/solr path=/select params={spellcheck=true&
spellcheck.q=diam&
hl=true&
facet=true&
f.bundle.facet.mincount=1&
f.im_field_tag.facet.limit=50&
f.im_field_tag.facet.mincount=1&
fl=id,entity_id,
entity_type,
bundle,
bundle_name,
label,
ss_language,
is_comment_count,
ds_created,
ds_changed,
score,
path,
url,
is_uid,
tos_name&
f.bundle.facet.limit=50&
f.content.hl.alternateField=teaser&
hl.mergeContigious=true&
facet.field=im_field_tag&
facet.field=bundle&
fq=(access_node_tfslk0_all:0+OR+access__all:0)&
mm=1&
facet.mincount=1&
qf=content^40&
qf=label^5.0&
qf=tags_h2_h3^3.0&
qf=taxonomy_names^2.0&
qf=tos_content_extra^0.1&
qf=tos_name^3.0&
hl.fl=content&
f.content.hl.maxAlternateFieldLength=256&
json.nl=map&
wt=json&
rows=10&
pf=content^2.0&
hl.snippets=3&
start=0&
facet.sort=count&
q=diam&
ps=15} hits=10 status=0 QTime=12 

NB: one way to get a clearer look at the log lines is to copy one of them into a text editor and replace '&' with '&\n' and ',' with ',\n' to get more readable text.

Here '[solrdev]' indicates the core the query was submitted to and 'path=/select' the path.

Everything between the {}-brackets is what is added to the query as parameters. If your Solr host is localhost, Solr is running on port 8080 and the name of your core is solrdev, then you can make this same query in any browser by starting with:

http://localhost:8080/solr/solrdev/select?

followed by all the text between the {}-brackets.

This is no simple query, and in fact a lot is going on here: not only is the Solr/Lucene index searched for a specific term, we also tell Solr which fields to return, to give us spellcheck suggestions, to highlight the search term in the returned snippet, to return facets, et cetera.

For a better understanding of the Solr query, we will break it down and discuss each (well, maybe not all) of the query parameters from the above log line.

Query breakdown

q: Search term

The most basic Solr query would only contain a q-field, e.g.

http://localhost:8080/solr/solrdev/select?q=diam 

This would return all fields present in Solr for all matching documents. These will either be fields directly defined in the schema.xml (in these examples we use a schema based on the one included in the Search API Solr module), like bundle_name:

<field name="bundle_name" type="string" indexed="true" stored="true"/>

or dynamic fields, which are created according to the field definition in the schema.xml, e.g.:

<dynamicField name="ss_*"  type="string"  indexed="true"  stored="true" multiValued="false"/> 

The above query run on my local development environment would return a number of documents like this one:

<doc>
    <bool name="bs_promote">true</bool>
    <bool name="bs_status">true</bool>
    <bool name="bs_sticky">false</bool>
    <bool name="bs_translate">false</bool>
    <str name="bundle">point_of_interest</str>
    <str name="bundle_name">Point of interest</str>
    <str name="content">Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis.(...)
    </str>
    <date name="ds_changed">2015-03-04T08:45:18Z</date>
    <date name="ds_created">2015-02-19T18:55:44Z</date>
    <date name="ds_last_comment_or_change">2015-03-04T08:45:18Z</date>
    <long name="entity_id">10</long>
    <str name="entity_type">node</str>
    <str name="hash">tfslk0</str>
    <str name="id">tfslk0/node/10</str>
    <long name="is_tnid">0</long>
    <long name="is_uid">1</long>
    <str name="label">Decet Secundum Wisi</str>
    <str name="path">node/10</str>
    <str name="path_alias">content/decet-secundum-wisi</str>
    <str name="site">http://louis.atlantik.dev/nl</str>
    <arr name="sm_field_subtitle">
    <str>Subtitle Decet Secundum Wisi</str>
    </arr>
    <arr name="spell">
    <str>Decet Secundum Wisi</str>
    <str>Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis. (...)
    </str>
    </arr>
    <str name="ss_language">nl</str>
    <str name="ss_name">admin</str>
    <str name="ss_name_formatted">admin</str>
    <str name="teaser">
    Decet Secundum Wisi Cogo commodo elit eros meus nisl turpis. Abluo appellatio exerci exputo feugiat jumentum luptatum paulatim quibus quidem. Decet nutus pecus roto valde. Adipiscing camur erat haero immitto nimis obruo pneum valetudo volutpat. Accumsan brevitas consectetuer fere illum interdico
    </str>
    <date name="timestamp">2015-03-05T08:30:11.7Z</date>
    <str name="tos_name">admin</str>
    <str name="tos_name_formatted">admin</str>
    <str name="url">
        http://louis.atlantik.dev/nl/content/decet-secundum-wisi
    </str>
 </doc>

(In this document I have left out most of the content of the content and spell fields.)

Obviously, when searching we are not interested in all fields; for example, the aforementioned content field contains a complete view (with HTML tags stripped) of the node in Drupal, which most of the time is not relevant when showing search results.

fl: Fields

So the first thing we want to do is limit the number of fields Solr is returning by adding the fl parameter, in which we name the fields we want returned from Solr:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id,entity_id,entity_type,bundle,bundle_name,label,ss_language,score,path,url

This would return documents like:

<doc>
    <float name="score">0.1895202</float>
    <str name="bundle">point_of_interest</str>
    <str name="bundle_name">Point of interest</str>
    <long name="entity_id">10</long>
    <str name="entity_type">node</str>
    <str name="label">Decet Secundum Wisi</str>
    <str name="path">node/10</str>
    <str name="ss_language">nl</str>
    <str name="url">
        http://localhost/nl/content/decet-secundum-wisi
    </str>
</doc>

Here we not only use fields which are directly present in the index (like bundle) but also a generated field, score, which indicates the relevance of the found item. This field can be used to sort the results by relevance.

By the way, from Solr 4.0 on, the fl parameter can be added multiple times to the query, with one field per parameter. However, the "old" way of a comma-separated field list is still supported (also in Solr 5).

So in Solr 4 the query could (or should I say: should?) be written as:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id&fl=entity_id&fl=entity_type&fl=bundle&fl=bundle_name&fl=label&fl=ss_language&fl=score&fl=path&fl=url

NB: in Solr 3 this would give a wrong result, with only the first fl field returned.

Please note that you cannot query all dynamic fields at once with an fl parameter like 'fl=ss_*': you must specify the actual fields which are created while indexing: fl=ss_language,ss_name,ss_name_formatted... etc.

fq: Filter queries

One thing we do not want is users finding unpublished content which they are not allowed to see. When using the Drupal schema, this can be accomplished by filtering on the dynamic fields created from

 <dynamicField name="access_*" type="integer" indexed="true" stored="false" multiValued="true"/>

To filter, we add a fq-field like this:

http://localhost:8080/solr/solrdev/select?q=diam&fl=id,entity_id,entity_type,bundle,bundle_name,label,ss_language,score,path,url&fq=(access_node_tfslk0_all:0+OR+access__all:0)

The queries in fq are cached independently of the other Solr queries and so can speed up complex queries. The query can also contain range queries; e.g. to limit the returned documents by a popularity field present in the Drupal node (and, of course, indexed) to values between 50 and 100, one could use

fq=+popularity:[50 TO 100] 

For more info see the Solr wiki, CommonQueryParameters

To add filter queries programmatically in Drupal when using the Apache Solr Search-module, implement

hook_apachesolr_query_alter()

and use

$query->addFilter

to add filters to the query.

For example, to filter the query on the current language, use:

function mymodule_apachesolr_query_alter(DrupalSolrQueryInterface $query) {
    global $language;
    $query->addFilter("ss_language", $language->language);
}

If using the Solr environment with Search API Solr Search, implement

hook_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query)

In this hook you can alter or add items to the

$call_args['params']['fq']

to filter the query.
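
For example, here is a sketch of the same language filter shown above for the Apache Solr module, this time for Search API (it assumes your index exposes the 'ss_language' field):

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  global $language;
  // Append a filter query so only documents in the current language are returned.
  $call_args['params']['fq'][] = 'ss_language:' . $language->language;
}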

In both cases the name of the field to use can be found via the Solr admin. In this case it is a string field created from the dynamic ss_*-field in 'schema.xml'.

rows and start: The rows returned

The 'rows' parameter determines how many documents Solr will return, while 'start' determines how many documents to skip. Together they implement pager functionality: for example, 'rows=10&start=20' returns the third page of ten results.

wt: Type of the returned data

The 'wt' parameter determines how the data from Solr is returned. The most used formats are:

  • xml (default)
  • json

See https://cwiki.apache.org/confluence/display/solr/Response+Writers for a complete list of available formats and their options.

qf: Boosting

The DisMax and eDisMax plugins have the ability to boost the score of documents. In the default Drupal requestHandler ("pinkPony") the eDisMax parser is set as the query plugin:

<requestHandler name="pinkPony" class="solr.SearchHandler" default="true">
    <lst name="defaults">
        <str name="defType">edismax</str>

Boosting can be done by adding one or more qf-parameters which define the fields ("query field") and their boost value in the following syntax:

[fieldName]^[boostValue]

E.g:

qf=content^40&
qf=label^5.0&
qf=tags_h2_h3^3.0&
qf=taxonomy_names^2.0&

In Drupal this is done by implementing the relevant (Search API or Apache Solr) hook and adding the boost.

For example, let's say you want to boost the result with a boost value of "$boost" if a given taxonomy term with id "$tid" is present in the field "$solr_field".

For the Apache Solr module we should use:

function mymodule_apachesolr_query_alter(DrupalSolrQueryInterface $query) {            
    $boost_q = array();
    $boost_q[] = $solr_field . ':' . $tid . '^' . $boost;
    $query->addParam('bq', $boost_q);
}

For Search API Apache Solr:

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  $call_args['params']['bq'][] = $solr_field . ':' . $tid . '^' . $boost;
}

NB: in both examples we ignore any boost values set by other modules for the same field. In real life you should merge the existing boost array with your new one.

Negative boosting

Negative boosting is not possible in Solr. It can, however, be simulated by boosting all documents that do not have a specific value. In the above example of boosting on a specific taxonomy term, we can use:

$boost = abs($boost);
$boost_q[] = '-' . $solr_field . ':' . $tid . '^' . $boost;

or, for Search API

$boost = abs($boost);
$call_args['params']['bq'][] = '-' . $solr_field . ':' . $tid . '^' . $boost;

where the '-' before the field name is used to indicate that we want to boost the items that do not have this specific value.

mm: minimum should match

The eDisMax parser also supports querying phrases (as opposed to single words). The 'mm' parameter gives the minimum number of words that must be present in the Solr document. For example: if the query is "little red corvette" and 'mm' is set to '2', only documents which contain at least:

  • "little" and "red"
  • "little" and "corvette"
  • "red" and "corvette"

are returned; documents which contain only one of the words are not.

q.alt: for empty queries

If specified, this query will be used when the main query string is not specified or blank; for example, 'q.alt=*:*' returns all documents when no search term is given. This parameter is also specific to the eDisMax query handler.

And even more

Of course the above-mentioned parameters are not the only ones used in a Solr query. Much more can be done with them and there are a lot of other parameters to influence the output of Solr.

To mention just a few:

Facets

Facets are one of the strong assets of Solr. A complete description of all possibilities and settings for facets would go too far in this scope, but a number of useful parameters are discussed here.

The Drupal Facet API module and its range of plugins offer a plethora of settings and possibilities; see for instance the Facet Pretty Paths module or, for an example of the numerous possibilities of the Facet API, the Facet API Taxonomy Sort module.

The most important Solr parameters in relation to facets are:

facet

Turn on the facets by using "facet=true" in the query.

facet.field

The field to return facets for; this parameter can be added more than once.

facet.mincount

This indicates the minimum number of results for which to show a facet item.

If this is set to '1', only facet items which would return documents are shown; if set to '0', a facet item for each value in the field will be returned, even if clicking on the item would return zero documents.

facet.sort

How to sort the facet items. Possible values are:

facet.sort=count  

Sort by number of returned documents

facet.sort=index 

This amounts to sorting alphabetically or, in the Solr wiki's words: return the constraints sorted in their index order (lexicographic by indexed term). For terms in the ASCII range, this will be sorted alphabetically.

facet.limit

The maximum number of facet items returned; defaults to 100.

facet.offset

The number of facet items skipped. Defaults to '0'.

'facet.limit' and 'facet.offset' can be combined to implement paging of facet items; for example, 'facet.limit=10&facet.offset=20' returns the third page of ten facet items.

Highlighting

hl

Highlighting is turned on with 'hl=true' and enables highlighting of the keywords in the search snippet Solr returns. Like faceting and spellchecking, highlighting is an extensive subject; see http://wiki.apache.org/solr/HighlightingParameters for more info.

hl.formatter

According to the Solr wiki: currently the only legal value is "simple". So just use 'simple'.

hl.simple.pre/hl.simple.post

Set the tags used to surround the highlight; defaults to <em> / </em>. To wrap the highlight in, for example, bold tags, use:

hl.simple.pre=<b>&hl.simple.post=</b>

Spellchecking

spellcheck

Spellchecking is enabled with 'spellcheck=true'.

Because spellchecking is a complicated and language-dependent process, it is not discussed here in full; see http://wiki.apache.org/solr/SpellCheckComponent for more information about the spellcheck component.

If the query for the spellchecker is given in a separate 'spellcheck.q' parameter like this:

spellcheck.q=<word>

this word is used for spell checking. If the 'spellcheck.q' parameter is not set, the default 'q' parameter is used as input for the spellchecker. Of course the word in 'spellcheck.q' should bear a strong relation to the word in the 'q' parameter, otherwise incomprehensible spelling suggestions would be given.

One can also make separate requests to the spellchecker:

http://localhost:8080/solr/solrdev/spell?q=<word>&spellcheck=true&spellcheck.collate=true&spellcheck.build=true

Where <word> in the 'q' parameter is the word to use as input for the spellchecker.

One important subject related to spellchecking is the way content is analyzed before it is written into the index or read from it. See SpellCheckingAnalysis for the default settings for the spellcheck field.

In Drupal there is a spellcheck module for Search API, Search API Spellcheck, which can be used with multiple search backends.

Conclusion

Although most of the parameters mentioned above are more or less hidden by the Drupal admin interface, a basic understanding of them can help you understand why your Solr search does (or, more usually, does not) return the results you expected.

As said in the introduction: looking at the queries in the Tomcat log can help a lot when debugging Solr.

Mar 24 2015
Mar 24

In honor of long-time Drupal contributor Aaron Winborn (see his recent Community Spotlight), whose battle with Amyotrophic lateral sclerosis (ALS) (also referred to as Lou Gehrig's Disease) is coming to an end later today, the Community Working Group, with the support of the Drupal Association, would like to announce the establishment of the Aaron Winborn Award.

This will be an annual award recognizing an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community. It will include a scholarship and stipend to attend DrupalCon and recognition in a plenary session at the event. Part of the award will also be donated to the special needs trust to support Aaron's family on an annual basis.

Thanks to Hans Riemenschneider for the suggestion, and the Drupal Association executive board for approving this idea and budget so quickly. We feel this award is a fitting honor to someone who gave so much to Drupal both on a technical and personal level.

Thank you so much to Aaron for sharing your personal journey with all of us. It’s been a long journey, and a difficult one. You and your family are all in our thoughts.

Mar 24 2015
Mar 24

We defined what a plugin is, discussed some plugins used in core and wrote our own custom plugin previously. We shall tune it up a bit in this post.

Real world plugins have a lot more properties than the label property mentioned in our breakfast plugin. To make our plugin more "real world", we introduce 2 properties, image and ingredients. It makes more sense now to have a custom annotation for breakfast instead of using the default Plugin annotation.

How different are custom annotations from the usual Plugin annotation?

1) They convey more information about a plugin than what an @Plugin does. Here's a custom annotation for a views display plugin from search_api, taken from here.

/**
 * Defines a display for viewing a search's facets.
 *
 * @ingroup views_display_plugins
 *
 * @ViewsDisplay(
 *   id = "search_api_facets_block",
 *   title = @Translation("Facets block"),
 *   help = @Translation("Display the search result's facets as a block."),
 *   theme = "views_view",
 *   register_theme = FALSE,
 *   uses_hook_block = TRUE,
 *   contextual_links_locations = {"block"},
 *   admin = @Translation("Facets block")
 * )
 */

2) Custom annotations are a provision to document the metadata/properties used for a custom plugin.
Check out this snippet from FieldFormatter annotation code for instance:

  /**
   * A short description of the formatter type.
   *
   * @ingroup plugin_translatable
   *
   * @var \Drupal\Core\Annotation\Translation
   */
  public $description;

  /**
   * The name of the field formatter class.
   *
   * This is not provided manually, it will be added by the discovery mechanism.
   *
   * @var string
   */
  public $class;

  /**
   * An array of field types the formatter supports.
   *
   * @var array
   */
  public $field_types = array();

  /**
   * An integer to determine the weight of this formatter relative to other
   * formatter in the Field UI when selecting a formatter for a given field
   * instance.
   *
   * @var int optional
   */
  public $weight = NULL;

That gave us a lot of information about the plugin and its properties.
Now that you are convinced of the merits of custom annotations, let's create one.

Check out the module code if you haven't already.

$ git clone [email protected]:badri/breakfast.git

Switch to the custom-annotation-with-properties tag.

$ git checkout -f custom-annotation-with-properties

and enable the module. The directory structure should look like this:
Custom annotation directory structure

The new Annotation directory is of interest here; custom annotations are defined in it.

  /**
   * A glimpse of how your breakfast looks: the URL of an image of the item.
   *
   * @var string
   */
  public $image;

  /**
   * An array of ingredients used to make it.
   *
   * @var array
   */
  public $ingredients = array();
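
For context, these properties live inside an annotation class: the Drupal\breakfast\Annotation\Breakfast class referenced by the plugin manager further below. A rough sketch of that class (not the verbatim module code):

namespace Drupal\breakfast\Annotation;

use Drupal\Component\Annotation\Plugin;

/**
 * Defines a Breakfast item annotation object.
 *
 * @Annotation
 */
class Breakfast extends Plugin {

  /**
   * The plugin ID.
   *
   * @var string
   */
  public $id;

  /**
   * The human-readable name of the breakfast item.
   *
   * @ingroup plugin_translatable
   *
   * @var \Drupal\Core\Annotation\Translation
   */
  public $label;

  // ... plus the $image and $ingredients properties shown above.

}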

Now, the plugin definition is changed accordingly. Here's the new annotation of the idly plugin.

/**
 * Idly! can't imagine a south Indian breakfast without it.
 *
 *
 * @Breakfast(
 *   id = "idly",
 *   label = @Translation("Idly"),
 *   image = "https://upload.wikimedia.org/wikipedia/commons/1/11/Idli_Sambar.JPG",
 *   ingredients = {
 *     "Rice Batter",
 *     "Black lentils"
 *   }
 * )
 */

The other change is to inform the Plugin Manager of the new annotation we are using.
This is done in BreakfastPluginManager.php.

    // The name of the annotation class that contains the plugin definition.
    $plugin_definition_annotation_name = 'Drupal\breakfast\Annotation\Breakfast';

    parent::__construct($subdir, $namespaces, $module_handler, 'Drupal\breakfast\BreakfastInterface', $plugin_definition_annotation_name);
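
Pieced together, a minimal plugin manager along these lines could look as follows. This is a sketch rather than the module's exact code; the plugin subdirectory ('Plugin/Breakfast') and the cache key are assumptions:

namespace Drupal\breakfast;

use Drupal\Core\Cache\CacheBackendInterface;
use Drupal\Core\Extension\ModuleHandlerInterface;
use Drupal\Core\Plugin\DefaultPluginManager;

class BreakfastPluginManager extends DefaultPluginManager {

  /**
   * Constructs a BreakfastPluginManager object.
   */
  public function __construct(\Traversable $namespaces, CacheBackendInterface $cache_backend, ModuleHandlerInterface $module_handler) {
    // Discover plugins in src/Plugin/Breakfast of every enabled module,
    // expecting them to implement BreakfastInterface and to be annotated
    // with @Breakfast.
    $plugin_definition_annotation_name = 'Drupal\breakfast\Annotation\Breakfast';
    parent::__construct('Plugin/Breakfast', $namespaces, $module_handler, 'Drupal\breakfast\BreakfastInterface', $plugin_definition_annotation_name);

    $this->setCacheBackend($cache_backend, 'breakfast_plugins');
  }

}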

Let's tidy up the plugin by having it implement an interface. Though this is purely optional, the interface tells users of the plugin what it exposes.
This also allows us to define a custom method called servedWith() whose implementation is plugin specific.
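
Here is a sketch of what such an interface could contain, based on the methods the formatter below relies on (extending PluginInspectionInterface is my assumption, not something confirmed by the module code):

namespace Drupal\breakfast;

use Drupal\Component\Plugin\PluginInspectionInterface;

/**
 * Interface that all breakfast plugins are expected to implement.
 */
interface BreakfastInterface extends PluginInspectionInterface {

  /**
   * Returns the human-readable name of the breakfast item.
   */
  public function getName();

  /**
   * Returns the URL of an image of the breakfast item.
   */
  public function getImage();

  /**
   * Returns an array of side dishes the item is served with.
   */
  public function servedWith();

}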

With the plugin class hierarchy looking like this now:
breakfast plugin class hierarchy

The servedWith() is implemented differently for different plugin instances.

// Idly.php
  public function servedWith() {
    return array("Sambar", "Coconut Chutney", "Onion Chutney", "Idli podi");
  }

// MasalaDosa.php
  public function servedWith() {
    return array("Sambar", "Coriander Chutney");
  } 

In the formatter, we make use of the interface methods we wrote while displaying details about the user's favorite breakfast item.

// BreakfastFormatter.php
    foreach ($items as $delta => $item) {
      $breakfast_item = \Drupal::service('plugin.manager.breakfast')->createInstance($item->value);
      $markup = '<h1>'. $breakfast_item->getName() . '</h1>';
      $markup .= '<img src="' . $breakfast_item->getImage() . '"/>';
      $markup .= '<h2>Goes well with:</h2>'. implode(", ", $breakfast_item->servedWith());
      $elements[$delta] = array(
        '#markup' => $markup,
      );
    }

And the user profile page now looks like this.

user profile for breakfast plugin

Derivative plugins

We have the Idly plugin instance mapped to the Idly class and the Uppuma instance mapped to the Uppuma class. Had all the plugin instances been mapped to a single class, we would have had derivative plugins. Derivative plugins are plugin instances derived from the same class.
They are employed in these scenarios:
1. One plugin class per instance is overkill. There are times when you don't want to define a class for each plugin instance; you just say that it is an instance of a particular class that is already defined.
2. You need to define plugin instances dynamically. The Flag module defines a different Flag Type plugin instance for different entities. The entities in a Drupal site are not known beforehand, so we cannot define one instance for every entity. This calls for a plugin derivative implementation.

Let's add derivative plugins to our breakfast metaphor.

$ git checkout -f plugin-derivatives

Here's a new user story for the breakfast requirement. We are going to add desserts to our menu now. All desserts are of type Sweet. So, we define a derivative plugin called Sweet which is based on breakfast.

This calls for 3 changes as shown in the new directory structure outlined below:

derivative plugins

1) Define the Sweet plugin instance class on which all our desserts are going to be based.

/**
 * A dessert or two would be great!
 *
 *
 * @Breakfast(
 *   id = "sweet",
 *   deriver = "Drupal\breakfast\Plugin\Derivative\Sweets"
 * )
 */
class Sweet extends BreakfastBase {  
  public function servedWith() {
    return array();
  }
}

Note the "deriver" property in the annotation.

2) Next, we define the deriver in the Derivative directory.

/**
 * Sweets are dynamic plugin definitions.
 */
class Sweets extends DeriverBase {

  /**
   * {@inheritdoc}
   */
  public function getDerivativeDefinitions($base_plugin_definition) {
    $sweets_list = drupal_get_path('module', 'breakfast') . '/sweets.yml';
    $sweets = Yaml::decode(file_get_contents($sweets_list));

    foreach ($sweets as $key => $sweet) {
      $this->derivatives[$key] = $base_plugin_definition;
      $this->derivatives[$key] += array(
        'label' => $sweet['label'],
        'image' => $sweet['image'],
        'ingredients' => $sweet['ingredients'],
      );
    }

    return $this->derivatives;
  }
}

3) The deriver gets the sweets info from the sweets.yml file in the module root directory. This could just as well be XML, JSON or any other file format which holds metadata; I've used a YAML file for consistency's sake.

mysore_pak:  
  label: 'Mysore Pak'
  image: 'https://upload.wikimedia.org/wikipedia/commons/f/ff/Mysore_pak.jpg'
  ingredients:
    - Ghee
    - Sugar
    - Gram Flour

jhangri:  
  label: 'Jhangri'
  image: 'https://upload.wikimedia.org/wikipedia/commons/f/f8/Jaangiri.JPG'
  ingredients:
    - Urad Flour
    - Saffron
    - Ghee
    - Sugar

The class hierarchy for derivative plugins looks a little bit bigger now.

Derivative plugins class hierarchy

Clear your cache and you should be able to see the sweets defined in the YAML file in the breakfast choice dropdown of the user profile.

new derivative plugins

Enjoy your dessert!

That concludes our tour of Drupal 8 plugins. Hope you liked it and learnt something. Stay tuned for the next series about services.

Mar 24 2015
Mar 24

There are many Web hosting companies claiming to be able to host Drupal sites in the enterprise space. However, most of the time, these providers simply provide the hardware or a virtual machine (VM/VPS) with an operating system (OS) capable of hosting Drupal, leaving you to build the application stack, configure it and manage it all yourself. They may even claim that they can set all of this up for you, but they'll charge extra for the labour. They don't have a comprehensive platform where instances can be automatically deployed as needed.

What do I mean by a "platform"?

I'm considering a somewhat complex requirement: a solution that includes a fully-managed Drupal application stack with several environments where it is trivial to move code, databases and files between them.

While it seems like it would be difficult to find such a service, there are now several available. While these options may seem expensive relative to generic Web-site hosting, they'll save you immensely on labour costs, as you won't have to pay to do any of the following yourself:

  • Installation of the application stack on each environment (Dev, Staging and Prod, see below.)
  • Managing all OS software updates and security patches.
  • Managing the configuration between environments (a.k.a. Configuration Management)
  • Self-hosting and managing a deployment tool such as Aegir or creating and managing your own deployment scripts.
  • Developing, maintaining and documenting processes for your development workflow.

The environments in question are Dev (or Development, the integration server where code from all developers is combined and tested), Staging (or QA, where quality assurance testing is done with the content and configuration from Production/Live and the code from Dev) and Prod (Production/Live, your user-facing site). Typically, these three (3) server environments are set up for you, and the Drupal hosting provider provides a simple-to-use command-line (CLI) and/or Web interface to deploy code to Dev, to Staging and finally to Prod while allowing you to easily move site data (content and configuration) from Prod to Staging to Dev. Your developers then work in their own sandboxes on their laptops pushing code up through the environments, and pulling site data from them.

This allows you to significantly reduce or eliminate the resources you'll need for system administration and DevOps. You can focus your project resources on application development, which is to get your content management system up and running as quickly as possible, while leaving infrastructure issues to the experts. Even if these items are outsourced, they add additional costs to your project.

However, you may have specific requirements which preclude you from going with such a managed solution. You may have a need to keep the environments behind your corporate firewall (say to communicate securely with other internal back-end systems such as CRMs, authentication servers or other data stores) or your country/jurisdiction has strict privacy legislation preventing you from physically locating user data in regions where electronic communication is not guaranteed to be kept private. For example, in Canada, personal information can only be used for purposes that users have consented to (see PIPEDA). As a good number of the described hosting services rely on data centres in the United States, the PATRIOT Act there (which gives the US government full access to any electronic communication) could make it difficult to enforce such privacy protections.

At the time of this writing, the following Drupal hosting companies are available to provide services as described above. They're listed in no particular order.

I haven't had a chance to try all of these yet as I have clients on only some of them. It would be great if someone who's evaluated all of them were to publish a comparison.

This article, Drupal-specific hosting: Choose a provider from those offering comprehensive platforms, appeared first on the Colan Schwartz Consulting Services blog.

Mar 23 2015
Mar 23

2015 started out strong with our first DrupalCon of the year, which took place from 10-12 February in Bogotá, Colombia. Nothing feels better than to bring the power of DrupalCon to a new region where attendees can revel in their love for Drupal, the community, and enjoy time together. As people listened to the Driesnote, attended sessions and sprints, and celebrated with some Tejo, we heard a lot of “this is a real Con” and “it feels so good to experience this in my own backyard”.

Driesnote crowd

Sharing the gift of DrupalCon with the Latin American community was a joy for Drupal Association staff and community organizers. It wouldn’t have happened without help from Aldibier Morales, Carlos Ospina, Ivan Chaquea, Nick Vidal, and Jairo Pinzon, who helped organize the event. In turn, it better connected the Drupal Association with this region, helping us better understand the high level of contribution as well as new ways to support this region.

263 people attended DrupalCon Latin America from 23 countries including 12 Latin American countries. 63% of attendees said that this was their first DrupalCon, which underscores why it’s so important to bring DrupalCon to new parts of the world. Attendees were primarily developers from Drupal Shops, but there was more diversity than expected. The event also attracted a higher level of beginners than expected and 14% of attendees were women, which falls between DrupalCon Europe (10% women) and DrupalCon North America (22%). Below are some demographic tables that compare DrupalCon Latin America with DrupalCon Austin.

Sprinters

As you can imagine, the most attended sessions were focused on Drupal 8. DrupalCon Latin America was the first event to offer translated sessions and all sessions were recorded and posted to the DrupalCon YouTube channel. Thanks to Lingotek, 20 additional session recordings were translated, too, and can be found on YouTube.

One of the big takeaways for Drupal Association staff was finding out how many attendees contribute to Drupal. When Megan Sanicki, COO, asked in her keynote introduction presentation how many people contributed, a large number of hands went up. It explains why DrupalCon Latin America had the largest percentage of attendees attend the sprint compared to any other DrupalCon --  38.4% of attendees showed up to make a difference. Thanks to the sprint leads, YesCT, alimac, DevelCuy, jackbravo and the other 19 sprint mentors, 101 people were able to participate in the sprints.

We’re also happy that financially the event achieved its budget goals. When planning DrupalCon Latin America, we knew that hosting the event in a new region would create budget challenges. We accepted that and were willing to operate this DrupalCon at a loss. We see this as an investment in a region that will grow because DrupalCon was hosted here. Below is the high level budget and further below is a more detailed view into the expenses.

DrupalCon Latin America Budget

                 Budget        Actual
Income           $150,150      $104,513.74
Expenses         $250,750      $188,034.40
Net Profit       -$99,920      -$83,520.66

Overall, DrupalCon Latin America was a success! Session evaluations came back strong and the event received a high Net Promoter Score of 80. Also, attendees felt that they received more value than expected (see chart below).

While we hoped for larger numbers, it’s important to point out that DrupalCon Amsterdam in 2005 had about 100 attendees. When the event returned in 2014, it hosted 2,300 people. All regions have to start somewhere and DrupalCons have the power to infuse community members with a burst of energy and passion that helps the community grow. We saw this immediately after DrupalCon Latin America with the growth of Global Training Days. Last year, the region hosted 7 trainings total, but right after DrupalCon Latin America, the region hosted 10 - not even ¼ of the way into the year. Additionally, three Latin American community members nominated themselves in the Drupal Association At-Large Board Elections.

We are thrilled that we were able to bring DrupalCon to new regions of the world. Be sure and attend the closing session of DrupalCon Los Angeles to find out where we are bringing DrupalCon next.


DRUPALCON STATISTICS

DEMOGRAPHIC COMPARISONS


Job Function                     Austin    Latin America
Developer                        40%       48%
Business (sales, marketing)      11%       12%
Front end (design, themer)       13%       10%
C-Level                          9%        9%
Site Builder                     11%       8%
Other (PM, Trainer, etc)         9%        12%
Site Administrator               7%        3%

How I use Drupal                 Austin    Latin America
Drupal Shop                      47%       61%
Site Owner                       30%       12%
Freelance                        5%        9%
Evaluating                       6%        4%
Hobbyist                         2%        2%

Skill Level                      Austin    Latin America
Advanced                         37%       40%
Intermediate                     39%       38%
Beginner                         23%       22%

SESSION STATISTICS

DrupalCon Latin America: Highest Attended Sessions                   Count
#d8rules - Web-automation with Rules in Drupal 8                     87
An Overview of the Drupal 8 Plugin System                            70
Drupal 8 CMI on Managed Workflow                                     67
Getting Content to a Phone in less than 1000ms                       58
AngularJS + Drupal = Un Dúo Superheróico!                            52
DevOps, por donde comenzar?                                          49
The Future of Commerce on Drupal 8 (and beyond)                      43
I Want it All and I Want it Now: Configuration Management and CI     38
SEO for Drupal                                                       37

DrupalCon Latin America: YouTube views (as of 3/11/2015)                        # of views
DrupalCon Latin America 2015: Keynote by Dries Buytaert                         1053
DrupalCon Latin America 2015: Keynote by Larry Garfield                         546
DrupalCon Latin America 2015: The Future of Commerce on Drupal 8 (and beyond)   407
DrupalCon Latin America 2015: Drupal 8 CMI on Managed Workflow                  241
DrupalCon Latin America 2015: AngularJS + Drupal = Un Dúo Superheróico!         238


DRUPALCON FINANCIALS

Expenses

Staff Wages, Benefits, Overhead                       $106,621.54
Catering                                              $11,784.76
Staff Travel & Accommodations                         $11,552.25
Event Planning                                        $9,244.45
Registration Materials, Conference Supplies, Tees     $8,180.90
Taxes, Fees, VAT                                      $7,009.54
Speaker Fees, Travel Awards, Etc                      $6,973.16
Translation                                           $6,772.00
IT, Wifi, Electrical                                  $6,705.89
Archiving                                             $5,500.00
Design                                                $4,500.00
Conference Facility                                   $3,013.78
Shipping                                              $176.13
Total Expenses                                        $188,034.40
