Apr 01 2015

By default, Search API (Drupal 7) reindexes a node when the node gets updated. But what if you want to reindex a node or an entity on demand, or from some other hook, i.e. outside of the update cycle? It turns out to be quite a simple exercise. You just need to execute this function call whenever you want to reindex a node or entity:

  search_api_track_item_change('node', array($nid));

See this snippet at dropbucket: http://dropbucket.org/node/1600. search_api_track_item_change() marks the items with the specified IDs as "dirty", i.e., as needing to be reindexed. You need to supply this function with two arguments: the entity type ('node' in our example) and an array of entity IDs you want reindexed. Once you've done this, Search API will take care of the rest as if you had just updated your node or entity. Additional tip: in some cases it's worth clearing the field cache for an entity before sending it off to be reindexed:

  // Clear field cache for the node.
  cache_clear_all('field:node:' . $nid, 'cache_field');
  // Reindex the node.
  search_api_track_item_change('node', array($nid));

This is the case when you manually save or update entity values via SQL queries and then want to reindex the result (for example, the Radioactivity module doesn't save or update a node; it directly manipulates the data in SQL tables). That way you ensure that Search API reindexes the fresh node or entity and not a cached one.
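For a rough illustration of how these pieces combine, here is a hypothetical helper that bulk-reindexes all nodes of a given type after their data has been changed behind Drupal's back. This is just a sketch; the function name and the 'article' default are assumptions for illustration:

  /**
   * Bulk-reindexes all nodes of a given type (illustrative sketch,
   * not a Search API function).
   */
  function MYMODULE_reindex_nodes_of_type($type = 'article') {
    // Collect the IDs of all nodes of the given type.
    $nids = db_query('SELECT nid FROM {node} WHERE type = :type', array(':type' => $type))->fetchCol();
    if (empty($nids)) {
      return;
    }
    // Clear the field cache so Search API sees fresh values, not cached ones.
    foreach ($nids as $nid) {
      cache_clear_all('field:node:' . $nid, 'cache_field');
    }
    // Mark all of them as needing reindexing in a single call.
    search_api_track_item_change('node', $nids);
  }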

Apr 01 2015

I had a case recently where I needed to add custom data to the node display and wanted this data to behave like a field, even though the data itself didn't belong to a field. By "behaving like a field" I mean you can see that field on the node's display settings page and control its visibility, label and weight by dragging and dropping. So, as you may have understood, the hook_preprocess_node() / hook_node_view_alter() approach alone wasn't enough. But we do Drupal, right? Then there should be a clever way to do what we want, and here it is: hook_field_extra_fields() comes to the rescue! hook_field_extra_fields() (docs: https://api.drupal.org/api/drupal/modules!field!field.api.php/function/hook_field_extra_fields/7) exposes "pseudo-field" components on fieldable entities. Neat! Here's how it works. Let's say we want to expose a welcoming text message as a field for a node; here's how we do that:

  /**
   * Implements hook_field_extra_fields().
   */
  function MODULE_NAME_field_extra_fields() {
    $extra['node']['article']['display']['welcome_message'] = array(
      'label' => t('Welcome message'),
      'description' => t('A welcome message'),
      'weight' => 0,
    );
    return $extra;
  }

As you can see in the example above, we used hook_field_extra_fields() to define an extra field for the 'node' entity type and the 'article' bundle (content type). You can actually choose any other entity type that's available on your system (think user, taxonomy_term, profile2, etc.). Now if you clear your cache and go to the display settings for Node -> Article, you should see the 'Welcome message' field available. OK, the last bit is to actually force our "extra" field to output some data; we do this in hook_node_view():

  /**
   * Implements hook_node_view().
   */
  function MODULE_NAME_node_view($node, $view_mode, $langcode) {
    // Only show the field for nodes of the article type.
    if ($node->type == 'article') {
      $node->content['welcome_message'] = array(
        '#markup' => 'Hello and welcome to our Drupal site!',
      );
    }
  }

That should be all. Now you should see a welcome message on your node page. Please note: if you're adding an extra field to another entity type (like taxonomy_term, for example), you should do the last bit in that entity's _view() hook.
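For instance, here is a minimal sketch of the same trick for taxonomy terms. It assumes you have first declared the extra field under $extra['taxonomy_term']['VOCABULARY_MACHINE_NAME']['display'] in your hook_field_extra_fields() implementation; the message text is purely illustrative:

  /**
   * Implements hook_taxonomy_term_view().
   */
  function MODULE_NAME_taxonomy_term_view($term, $view_mode, $langcode) {
    // Output our pseudo-field on the term page.
    $term->content['welcome_message'] = array(
      '#markup' => t('Welcome to the @name listing!', array('@name' => $term->name)),
    );
  }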

UPDATE: I put code snippets for this tutorial at dropbucket.org here: http://dropbucket.org/node/1398

Apr 01 2015

I'm a big fan of fighting Drupal's inefficiencies and bottlenecks. Most of these come from contrib modules. Every time we install a contrib module, we should be ready for the surprises that come on board with it.

One of the latest examples is Menu item visibility (https://drupal.org/project/menu_item_visibility), which turned out to be a big troublemaker on one of my client's sites. Menu item visibility is a simple module that lets you define link visibility based on a user's role. Simple and innocent... until you look under the hood.

The thing is, Menu item visibility stores its data in the database and runs a query for every menu item on the page. In my case it produced around 30 queries per page and 600 queries on a menu/cache rebuild (which normally equals the number of menu items you have in your system).

The functionality this module gives to an end user is good and useful (according to drupal.org, 6,181 sites currently report using it), but as you see, storing these settings in the database can become a huge bottleneck for your site. I looked at the Menu item visibility source and came up with an "in code" solution that fully replicates the module's functionality but stores the data in code.

Step 1.

Create a custom module and call it something like Better menu item visibility (machine name: better_menu_item_visibility).

Step 2.

Let's add the first function that holds our menu link item ID (mlid) and role ID (rid) data:

  /**
   * Returns a list of mlids and the roles that have access to those link items.
   * You can change the list to add new menu items and/or roles.
   * The list is presented in the format:
   * 'mlid' => array('role_id', 'role_id'),
   */
  function better_menu_item_visibility_menu_item_visibility_role_data() {
    return array(
      '15' => array('1', '2'),
      '321' => array('1'),
      '593' => array('3'),
      // Add as many combinations as you want.
    );
  }

This function returns an array of menu link item IDs and the roles that can access each item. If you already have Menu item visibility installed, you can easily port the data from the {menu_links_visibility_role} database table into this function.
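If you need to do that port, something like this one-off helper (my own sketch, not part of either module) can dump the table's contents in the right shape; run it e.g. via drush php-eval while the original module's table still exists:

  /**
   * One-off helper: exports {menu_links_visibility_role} rows as PHP code
   * ready to paste into the data function above.
   */
  function better_menu_item_visibility_export_data() {
    $data = array();
    $result = db_query('SELECT mlid, rid FROM {menu_links_visibility_role}');
    foreach ($result as $row) {
      $data[$row->mlid][] = (string) $row->rid;
    }
    // Returns a copy-pasteable array definition.
    return var_export($data, TRUE);
  }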

Step 3.

And now let's do the dirty job and process the menu items:

  /**
   * Implements hook_translated_menu_link_alter().
   */
  function better_menu_item_visibility_translated_menu_link_alter(&$item, $map) {
    if (!empty($item['access'])) {
      global $user;
      // Menu administrators can see all links.
      if ($user->uid == '1' || (strpos(current_path(), 'admin/structure/menu/manage/' . $item['menu_name']) === 0 && user_access('administer menu'))) {
        return;
      }
      $visibility_items_for_roles = better_menu_item_visibility_menu_item_visibility_role_data();
      if (!empty($visibility_items_for_roles[$item['mlid']]) && !array_intersect($visibility_items_for_roles[$item['mlid']], array_keys($user->roles))) {
        $item['access'] = FALSE;
      }
    }
  }

In short, this function skips the access check for user 1 and for users with the 'administer menu' permission, and performs the access check for the menu link items listed in better_menu_item_visibility_menu_item_visibility_role_data(). As you see, instead of querying the database it gets its data from code, which is really fast. Let me know what you think and share your own ways of fighting Drupal's inefficiencies.

Apr 01 2015

Drupal Views offers us a cool feature: ajaxified pagers. When you click on a pager, it changes the page without reloading the main page itself and then scrolls to the top of the view. It works great, but sometimes you may encounter a problem: if you have a fixed header on your page (one that stays on top when you scroll), it will overlap the top of your view container, so the scroll-to-top won't land precisely where it should and the header will cover the top part of your view.

I've just encountered that problem and am making a note here for my future self and, probably, for you, about how I solved it. If you look into the Views internals, you'll see it uses an internal Drupal JS Framework command called viewsScrollTop that's responsible for scrolling to the top of the container. What we need here is to override this command to add some offset to the top of our view.

1. Overriding JS Command

Thankfully, Views is flexible enough and provides hook_views_ajax_data_alter(), so we can alter the JS data and commands before they get sent to the browser. Let's overwrite the viewsScrollTop command with our own. In your custom module, put something like this:

  /**
   * This hook allows altering the commands which are used on a views ajax
   * request.
   *
   * @param $commands
   *   An array of ajax commands.
   * @param $view view
   *   The view which is requested.
   */
  function MODULE_NAME_views_ajax_data_alter(&$commands, $view) {
    // Replace Views' method for scrolling to the top of the element with your
    // custom scrolling method.
    foreach ($commands as &$command) {
      if ($command['command'] == 'viewsScrollTop') {
        $command['command'] = 'customViewsScrollTop';
      }
    }
  }

Now, every time Views emits a viewsScrollTop command, we replace it with our own custom customViewsScrollTop.

2. Creating custom JS command

OK, a custom command is just a JS function attached to the global Drupal object. Let's create a JS file and put this into it:

  (function ($) {
    Drupal.ajax.prototype.commands.customViewsScrollTop = function (ajax, response, status) {
      // Scroll to the top of the view. This will allow users
      // to browse newly loaded content after e.g. clicking a pager
      // link.
      var offset = $(response.selector).offset();
      // We can't guarantee that the scrollable object should be
      // the body, as the view could be embedded in something
      // more complex such as a modal popup. Recurse up the DOM
      // and scroll the first element that has a non-zero top.
      var scrollTarget = response.selector;
      while ($(scrollTarget).scrollTop() == 0 && $(scrollTarget).parent()) {
        scrollTarget = $(scrollTarget).parent();
      }
      var header_height = 90;
      // Only scroll upward.
      if (offset.top - header_height < $(scrollTarget).scrollTop()) {
        $(scrollTarget).animate({scrollTop: (offset.top - header_height)}, 500);
      }
    };
  })(jQuery);

As you may see, I just copied the standard Drupal.ajax.prototype.commands.viewsScrollTop function and added a header_height variable that equals the height of the fixed header. You may play with this value and set it to your own taste. Note the name of the function, Drupal.ajax.prototype.commands.customViewsScrollTop: the last part must match your custom command name. Save the file in your custom module's directory; in my case it's js/custom_views_scroll.js.

3. Attaching JS to the view

There are multiple ways to do it; let's go with the simplest one. To your custom_module.info file, add scripts[] = js/custom_views_scroll.js and clear caches; that will make this file load on every page. That's all. From now on, your Views ajax page scrolls are powered by your customViewsScrollTop instead of the stock viewsScrollTop. See the difference?
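If you'd rather not autoload the script on every page via the .info file, a hedged alternative is to add it from code. This sketch uses plain hook_init() (MODULE_NAME and the js/ path are placeholders); you could swap in a more targeted hook if the script is only needed on some pages:

  /**
   * Implements hook_init().
   */
  function MODULE_NAME_init() {
    // Attach the custom Views scroll-to-top handler.
    drupal_add_js(drupal_get_path('module', 'MODULE_NAME') . '/js/custom_views_scroll.js');
  }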

Apr 01 2015

If you have a fieldgroup in a node, you may want to hide it under some conditions. Here's how to do that programmatically. First, we need to preprocess our node like this:

  /**
   * Implements hook_preprocess_HOOK().
   */
  function MODULE_NAME_preprocess_node(&$variables) {
  }

The tricky part starts here: if you google for "hide a fieldgroup", you'll get lots of results referencing the use of field_group_hide_field_groups(), like this snippet: http://dropbucket.org/node/130.

While this function works perfectly on forms, it is useless if you apply it in hook_preprocess_node() (at least I couldn't make it work). The problem is that fieldgroup uses a 'field_group_build_pre_render' function that gets called at the end of the preprocessing stage and populates your $variables['content'] with the field group and its children, so you can't alter this in hook_preprocess_node(). But, as always in Drupal, there's a workaround. First, let's define some simple logic in our preprocess_node() to determine whether we want to hide a field group:

  /**
   * Implements hook_preprocess_HOOK().
   */
  function MODULE_NAME_preprocess_node(&$variables) {
    if ($variables['uid'] != 1) {
      // You can call this variable anything you want; just put it into
      // $variables['element'] and set it to TRUE.
      $variables['element']['hide_admin_field_group'] = TRUE;
    }
  }

OK, so if the user's ID is not 1, we want to hide some fantasy admin field group. We define the logic here and pass the result into the element array for later use. As I noted previously, field group uses 'field_group_build_pre_render' to combine fields into a group, so we just need to alter that call in our module:

  /**
   * Implements hook_field_group_build_pre_render_alter().
   *
   * Hides the admin field group on the node display.
   */
  function MODULE_NAME_field_group_build_pre_render_alter(&$element) {
    if (!empty($element['hide_admin_field_group'])) {
      // 'group_admin' stands for your field group's machine name; adjust it
      // to match your own group.
      $element['group_admin']['#access'] = FALSE;
    }
  }

We check for our condition and, if it is met, set the field group's #access to FALSE, which means: hide the field group. Now you should have the field group hidden on your node display. Of course, this example is the simplest case; you may add dependencies on the node's view mode, content type and other conditions, so the sky is the limit here (one such variation is sketched below). You can find and copy this snippet at dropbucket: http://dropbucket.org/node/927. I wonder, do you have another way of doing this?
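As an illustration of such extra conditions, here is a variant of the preprocess function above that flags the group as hidden only on teasers of article nodes; the content type and view mode names are assumptions to adapt to your setup:

  /**
   * Implements hook_preprocess_HOOK().
   */
  function MODULE_NAME_preprocess_node(&$variables) {
    // Hide the admin group on article teasers for everyone but user 1.
    if ($variables['type'] == 'article' && $variables['view_mode'] == 'teaser' && $variables['uid'] != 1) {
      $variables['element']['hide_admin_field_group'] = TRUE;
    }
  }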

Mar 31 2015
Mar 31

When developing websites, we always aim to take a "COPE" approach to web content management. COPE - Create Once, Publish Everywhere - was popularised by NPR (National Public Radio) in the United States. It is a content management philosophy that seeks to allow content creators to add content in one place and then use it in various forms in other places. As an illustration, a journalist might write an article and upload it to a central content repository; different parts of it can then be used in different media - online, print, web app, etc. In a smaller, closer-to-home manner, a web editor might create a news article, complete with an image, an attachment, the main article content, some keywords, and embedded media (audio and video). The fields used on the form to create this page can then be reused, so:

  • On an events page, you might see “Related News” showing a title of the news piece and a link to read the full article
  • On the news listing page, you might see the title of the news piece, a reduced image, the first two sentences of the news article, and a link to the full article
  • On a media gallery page, you might see the image and a link to a hi-resolution version of same
  • On a mobile device, you might decide to not load the video unless the user has a wi-fi connection

As you can see “COPE” is revolutionary in allowing a lot of work to be done by a reduced staff, saving time, effort, and overhead. We use it on this website. Within our content architecture we have a number of content types such as "Client", "Project", and "Testimonial". We create a client page which has image, description, projects, and testimonial fields. The testimonial content type has client, short testimonial, and long testimonial fields. The project has client, testimonial, project description and project images fields.

Some examples:

  • To create a page for a case study, for example, we create a project page. Within this, we reference the client page and this pulls in our data dynamically to show the client's name, logo, and a short description. We then reference the client's testimonial and this pulls in the long testimonial content. We then fill out the information needed for that project/case study.
  • On the client listing page, we pull in the client logo and link that to the client's page.
  • On the homepage we pull in the short testimonial field for the testimonials block and the logo for the "Trusted by" block

The great thing is, if the client changes logo, or if we update the testimonial, then all instances of this will update. No more searching and replacing text and hoping that we replace every instance of it on every page.

We use Drupal to build our content management systems. Drupal gives us great tools to make COPE a reality, such as:

  • Fields - to allow us to present different information (image, video, product, text, select list, etc) at different places on the website
  • Content types - to make our data more semantic and atomic for later re-use
  • Views - to create lists of content curated by content type and other categories
  • Entity reference - to dynamically pull content from one page into another
  • View modes - to present the information in different formats on different pages
  • Services - so we can make the information available to other publishing organs such as apps and feeds

This allows us to create (our content) once, and publish (it) everywhere (on our website).

If you want to discuss Annertech helping you build an award-winning website, please feel free to contact us by phone on 01 524 0312, by email at [email protected], or using our contact form.

Mar 31 2015


This book review was published by Slashdot, 2015-03-31.

As with any content management system, building a website using Drupal typically requires extensive use of its administrative interface, as one navigates through its menus, fills out its forms, and reads the admin pages and notifications — or barely skims them, as they have likely been seen by the site builder countless times before. With the aim of avoiding this tedium, speeding up the process, and making it more programmatic, members of the Drupal community created a "shell" program, Drush, which allows one to perform most of these tasks on the command line. At this time, there is only one current print book that covers this tool, Drush for Developers, Second Edition, which is ostensibly an update of its predecessor, Drush User's Guide.

Both editions were written by Juampy Novillo Requena, although in the transition from the first edition to the second, both the author's name and the book title were changed. The most recent edition's title seems redundant, because of course such a book is going to be "for developers"; after all, who but Drupal developers would have an interest in Drush? The edition under review was published on 29 January 2015 by Packt Publishing, under the ISBN 978-1784393786. (My thanks to the publisher for a review copy.) At 180 pages, this edition is longer than its predecessor, but still a manageable size. Its content is divided among half a dozen chapters. Anyone interested in learning more about the book may wish to visit the publisher's website, which provides a brief description of the book, the table of contents, free sample content (Chapter 3), and the source code files.

The first chapter begins by presenting a brief comparison of the steps needed to run database updates on a Drupal website, using the GUI versus using Drush. As expected, the latter requires fewer steps. The author then discusses the prerequisites for installing Drush in a Linux or OS X environment. For Windows, the given download URL, http://www.drush.org/drush_windows_installer, is incorrect and should instead be http://drush.readthedocs.org/en/master/install/#windows-zip-package. The author states that "the installer installs an older version of Drush", but actually the installer has disappeared from its former locations. Fortunately, the current Windows archive file has the latest version as of this writing, 7.0.0-alpha7. This version is more recent than the alpha5 used in the book, but the commands and their options seem identical. On the other hand, it is a large archive file containing the Drush application files, Msys, PHP, and parts of PEAR and Symfony's YAML — but no helpful installer. The chapter continues with explication of Drush command invocation, arguments, options, aliases, and context. The only apparent blemish is that the variable name "site-name" (page 14) should instead read "site_name".

After this introductory material, one would expect the next chapter or so to explain and illustrate the details of Drush commands frequently used by site developers, such as those for installing, enabling, and updating modules and themes. Instead, the author jumps far ahead to much more advanced topics (more on this below). In the case of the second chapter, the goal is to learn how to synchronize code, database configuration, and content among different server environments, including capturing database configuration settings in files so they can be version controlled in Git. This is arguably worthwhile knowledge, but certainly not what the average reader would expect so early in the book.

Readers attempting to follow and replicate the demonstrations in the book, may become frustrated with the pitfalls in the second chapter — such as the instances where it does not provide all the needed instructions, or they don't match the example code. When readers starting from scratch encounter the Drush script (page 23), they may be tempted to try it right away on their own test sites, but this would be ill-advised because the first command will fail until the Registry Rebuild command is installed (later in the chapter), and the fourth command will fail if the chosen website does not have the Features module already installed and enabled. When learning about database updates, the reader is instructed to create a new Boolean field, but only later learns that the test website should have contained nodes of the "Basic Page" content type. When readers learn these things the hard way, they must circle back and redo steps or, even worse, try to revert the state of files or the database.

The mymodule custom module found in the downloadable archive does not match what the reader will need on page 30, so she will need to modify mymodule.install to match that listed in the book, and also presumably comment out the last two lines in mymodule.info related to the Features module — but not the first two, because that would result in worse problems later. This initial code should have been included in the downloadable archive. Before running the command drush --verbose updatedb, should she have enabled the mymodule custom module? Apparently so, since the expected output includes "Executing mymodule_update_7100", but when I tried it, the provided module's update hook was not recognized as a database update, using Drush or the admin interface (update.php). On page 32, the reader is told to download and enable the Features module, but that must have been done already because the mymodule module required it earlier. Lastly, the book's preface states that PHP version 5.2 (or higher) would be sufficient, but 5.5 is needed, otherwise a fatal PHP error is generated by the empty() call on line 29 of the "7101" example code.

The third chapter covers the use of Drush for running and monitoring a variety of tasks in a Drupal website, such as updating the database or reindexing the searchable content in Apache Solr. The author begins by briefly describing the uses for the cron utility, and some advantages of executing it from Drush. A technique shown for preventing Drupal from running cron automatically, is to set the cron_safe_threshold variable to 0, export it to code (as a Features module), and then deploy it to the target environments. The author also demonstrates how to use Jenkins in conjunction with Drush to periodically run and monitor cron jobs. As an example of running a task without using cron, a Feeds importer is set up to work with Drush, using a custom module and a Drush command to trigger the Feeds importer. It's not mentioned in the book, but for the importer, in the settings for the node processor, be sure to assign the bundle, otherwise there will be EntityMalformedException errors; also, map the essential feed and node elements, otherwise the nodes created will be empty.

The book then explores a number of topics that are somewhat related to one another: how to use Drush and the Drupal Batch API to run time-consuming tasks so as to avoid PHP and database limits of memory and time; how to run PHP code after Drupal has been bootstrapped; how to best log messages using the drush_log() function; how to capture Drush output in a file; how to implement your own logging mechanism by overriding the Drush default logging function; and how to run Drush commands in the background. Despite the complexity of the processing implemented in this chapter, readers should encounter few problems trying it out. For the drush php-eval commands, Windows command line users will need to replace the single quotes with double quotes. In the section titled "The php-script command", two of the three "php-eval" terms should instead read "php-script" (page 65).

Debugging and error handling are addressed in detail in the fourth chapter: how to validate user input values and Drush command line options prior to passing them to a command's callback; how to define custom validation within a command; how to discover all of the available hooks for any given Drush command; utilizing the Devel module, how to discover all of the Drupal modules that use a given hook; and how to find the location of a given function or class method. In the midst of all this, readers get a detailed tour of the steps that Drush executes when bootstrapping Drupal. Readers should note that, as with the second chapter, some of the code in the downloadable archive does not match the initial code presented in the text, but rather its final state. As readers may have seen in earlier chapters, the "--verbose" versions of the Drush commands can produce a lot more informational output than what is presented in the text, including the MySQL commands (that may be a consequence of, in this case, the Windows command line). In the case of drush --debug testhooks, the output is remarkably different, but at least all of the commands are executed.

The penultimate chapter explores techniques for leveraging Drush to better manage Drupal websites on local and remote servers, utilizing site aliases. Developers will undoubtedly be intrigued if not thrilled with the possibilities of being able to execute Drush, Linux, and MySQL commands within remote environments from the local command line. The only questionable aspect is that in the first chapter it is claimed that one "does not even have to open an SSH connection" to perform these feats of digital derring-do, and yet all of them presented in this chapter seem to depend upon an SSH connection — if not explicitly on the command line, then at least established and used in the background by Drush. Nonetheless, the potential power of using Drush in this manner is clearly significant for Drupal site builders and maintainers, and thus the author wisely shows how to avoid inadvertently corrupting the files or database of a target installation.

The final chapter blends and builds upon most if not all of the topics addressed in the earlier chapters, to show how Drush can be used to set up an effective development workflow for teams building Drupal websites. To this end, the author demonstrates how to move Drush commands out of a project's web document root, and how to use Drupal Boilerplate to achieve this and more. The instructions employ wget to download Boilerplate, but other readers may, as I did, encounter an error where wget is unable to verify github.com's certificate. Readers learn how to use Jenkins to synchronize the Drupal files and databases in disparate environments, how to use Drush commands to improve database synchronization and sanitization, and how to prevent inadvertently emailing production addresses.

Like seemingly any Packt Publishing book, this one has plenty of errata relative to its length: "OSX" (page 9; should read "OS X"), "an input data" (page 14; should read "an input datum"), "inform [Drush] where" (page 19), "Dated" (page 21; should read "It is dated"), "sites/all/drush/command[s]" (page 28), "type Page" (page 29; should read "type Basic Page"), "PHP.ini" (page 34; should read "php.ini"), "cover [the] Queue API" (page 58), "context" (page 66; probably should read "content"), "run[ning]" (page 66), "straight brackets" (page 68; just "brackets"), "thanks to [']allow-additional-options'" (page 83), "require [the] minimum" (page 94), "a valid Drupal's root directory" (page 94; no "'s"), "point [to] our local Drupal project" (page 117), "logged as message" (page 120), "our the $HOME path" (page 139), "password;." (page 149), and "offers [a] hook" (ditto). Some of the phrasing is odd, e.g., "output can be logged in to" (page 34), "tasks running at cron" (page 52), and "equals to 1" (page 61). Some of the sentences are incomplete, e.g., "Importing configuration into the database." (page 34). Fortunately, none of the narrative is incomprehensible, and it is generally smoother in this edition than in the first.

The structure of this book is more logical than that of its predecessor. As Drupal expert Mike Anello correctly pointed out in his review of the first edition, "the book could have easily been improved by splitting out various sections of chapters into their own stand-alone chapters." The same criticism still holds true for this second edition, particularly the third chapter, though to a much lesser extent overall.

As with most if not all titles offered by Packt Publishing, this book's chapters are lengthened with summaries, none of which serve any useful purpose, since they repeat what was presented just pages earlier, but do not include enough detail to be of any value.

One major problem with the book is that it is billed as a second edition to the earlier user guide, which covered introductory and intermediate topics; yet this second edition does not, and instead is almost entirely devoted to advanced topics. In fact, much of the material is preparatory for the final chapter, on utilizing Drush to improve a team's project workflow. This is not made clear to the prospective buyer. This is truly a new book, and not an update of the first edition. Furthermore, it is more focused on specific uses of Drush.

Whether this book could be recommended to any potential reader, depends upon what that individual is hoping to learn. For anyone who wishes full coverage of the beginner and intermediate topics of Drush, this book would be completely inappropriate, and the individual would be best pointed to the Drush documentation. On the other hand, the book would be much better suited for a Drupal developer looking to improve his or her understanding of using Drush for managing database configuration and other topics related to project workflow, particularly in team settings — in which case it could be extremely valuable.

Copyright © 2015 Michael J. Ross. All rights reserved.

This book is available on Amazon

Mar 31 2015

Heading into Chicago’s Midcamp, my coworker Andy and I were excited to talk to other front end developers about using style guides with Drupal. We decided to put the word out and organize a BOF (birds of a feather talk) to find our kindred front end spirits. Indeed, we found a small group of folks who have started using them and had a great conversation: tools, workflow and pain points galore! So if you have already been using them or if you are brand new to the idea, read on.

Andy on a Divvy bike.

We looked pretty cool riding Chicago’s Divvy bikes to and from the conference!

So what is a style guide?

It can mean different things in different contexts, but for front end development, it means a fully-realized library of elements and components, using clean HTML/CSS/Javascript. I’ve heard them described as “tiny Bootstraps for every client” (Dave Rupert) — a client with a style guide has all the classes and/or markup they need to properly add new elements and components. A living style guide asserts that the style guide is maintained throughout the life cycle of a project.

At Advomatic, we’ve been integrating style guides into our workflow for about a year. We’ve had a few discussions about when it makes sense to have one, and when not. In the past, I’ve even argued against them in the case of small projects. But at this point, we’ve come to the conclusion that it ALWAYS makes sense to use one. Smaller sites might have smaller style guides — perhaps just with the baseline elements included — but with a boilerplate style guide and a compiler in place, the style guide will, in fact, build itself.

So what can you use to build a style guide?

I heard many static markup generators and/or prototyping software mentioned at Midcamp: Jekyll, Pattern Lab, Prontotype, and Sculpin.

At Advomatic, we’ve been using KSS (Knyle Style Sheets), which is more specific to generating style guides. It uses a Grunt task to compile a style guide from markup (commented out in your Sass files) and the corresponding CSS. This section documents setting up KSS to auto-generate your style guide. We use the NodeJS implementation of KSS, which, coincidentally, JohnAlbin (the brains behind the Zen base theme and Drupal theming in general) has taken the reins on.

If you still haven’t found one you like, here’s a handy list of styleguide generators!

Scared? I hear you. It SOUNDS like an extra layer of work.

Here were my fears moving to style guides:

  • It might add another layer of complexity and chance to break things.
  • If the markup differs significantly in the style guide and Drupal, we’d have to do the work twice.
  • The style guide is not within Drupal, so you cannot write javascript with the Drupal.behaviors convention.
  • If your style guide includes components that are layout-dependent, you’ll need to set up your grid system within KSS.
  • If the style guide rots on the vine or gets out of sync, it could be a pain to fix.

But let’s look at the pros:

  • Clients love to see the style guide; it can be an early, easy win.
  • Keeps the front-end decision-making at the beginning of the process, and agnostic of the back end.
  • Front end work can happen alongside back end work.
  • An HTML/CSS style guide can be a fully responsive document, unlike a PDF.
  • A style guide can be a stand-alone deliverable, if the client needs to pause or implement it themselves.
  • The modularity of a style guide helps clients think about the site as a system rather than individual pages. The result is flexible when the client wants to add more pages down the line.
  • A style guide helps onboard new people coming onto a project and keeps consistency among more than one front end dev. A FED can see if a similar component has already been built or if certain styles can be reused or expanded on.
  • Helpful for QA testers — something they can refer back to if something “in the wild” doesn’t look quite right.
  • Having the markup embedded in the style guide helps multiple developers produce consistent markup for the front end.

We have found that components that we chose to not prototype in a style guide often ended up taking more time than expected. When the back end devs could see what our preferred markup was, they built our components very closely to what we prototyped. In the end, the pros outweigh the cons.

So what is the holy grail style guide workflow?

We’re still looking for it, but here are some tips:

  • Automate your workflow — style guides should compile every time you make a change to the site. We use Grunt for this.
  • Use a boilerplate style guide — you won’t forget to theme anything that way.
  • Use Drupal-specific markup in your boilerplate to make the transition easier. Use the Drupal style guide module for boilerplate markup.
  • Try not to put too many components on the same page: this reduces endless scrolling, eases accessibility testing by tabbing through components, and reduces the amount of JavaScript and images loading on the page.
  • I haven’t yet, but I’d love to incorporate something like Ish to make each component responsive without having to resize the whole browser window when testing responsiveness.

What else would you suggest? Any pain points that you are feeling when using style guides with Drupal?

Or if you are just dipping your toes in, check out these resources for more good information:

Website Style Guides Resources
http://styleguides.io/

Style Guide podcast, Anna Debenham and Brad Frost
http://styleguides.io/podcast/

Front End Styleguides by Anna Debenham
http://24ways.org/2011/front-end-style-guides/

Design Components presentation from JohnAlbin:
http://www.slideshare.net/JohnAlbin/managing-design

Example style guides
http://ux.mailchimp.com/patterns
http://rizzo.lonelyplanet.com/styleguide
http://www.starbucks.com/static/reference/styleguide
http://www.bbc.co.uk/gel
http://primercss.io (Github’s style documentation)

Style guide comparison chart (google doc)

Responsive Deliverables
http://daverupert.com/2013/04/responsive-deliverables

Modularity and Style Guides
http://dbushell.com/2012/04/23/modularity-and-style-guides


Mar 31 2015

In the first part of our blog series we discovered why we need "objectives" to give projects a solid base to succeed. In this blog post we will describe how to manage agile projects for a fixed price. Doesn't work? Yes it does, if you respect some rules and do detailed planning.

In general you should be careful with agile projects on a fixed price agreement. Both parties, the vendor and the customer, should be aware of what this agreement means.

What does that mean exactly?

Agile:

Changes are allowed in a running project, and they are needed. Especially in large projects, changes need to be allowed so that work can continue even when the defined goals face changing conditions. As described in the blog post about 3 rules for setting objectives in projects, a project should reach certain goals. Agile methods allow you to evaluate milestone deliveries often, validate requirements against the end users and feed the results back into the next sprint. This ensures that the final project result really has the attributes needed to be accepted by end users.

Fixed price:

The price is fixed and must not be exceeded.

And where is the problem?

The price to be paid always reflects the value of a service or the result a project should deliver. The price tag is a fixed unit. The price and the value of a project should be in direct relation, since prices are arbitrary otherwise. The basis for a well and fairly calculated project is the requirement description, consisting of the performance definition, specification and design. If this is detailed enough, realistic estimates can be made. One of the first steps in a project is creating the specifications: the more detailed, the better. In contrast, if estimates are not realistic and not understandable, conflicts will inevitably arise at some point during the project. This happens at the latest when additional expenses arise due to new or changed requirements. If it is not possible to verify the changed requirements objectively, conflicts may appear, as one of the project parties may feel disadvantaged. So you urgently need to ensure that services and prices are clearly related and are both traceable and transparent. Then changes that occur later can also be handled smoothly. This is the basis for managing agile projects at a fixed price. Since you know all the detailed requirements and their price tags or estimates, it is possible to swap existing requirements that have become obsolete for new requirements that have appeared. Priorities in the backlog of all requirements help you stick to the initial estimate. Otherwise it is all too easy to end up arguing about why the fixed price was overrun.

Another problem arises when you schedule a buffer without a fixed size. It will never be possible for you to explain when the buffer is finally exhausted if you don't have clearly estimated and specified requirements that help you recognize a real change request, because you do not know how big a change is and how much of the buffer has already been consumed by which changes. So you should offer a fixed time schedule and buffer, which can be used for change requests. These "change requests" have to be transparent and documented for your customer, so that all parties can understand why, at some point, the buffer is depleted. This avoids conflicts.

Agile work at a fixed price with ERPAL

In the next part of the series we will turn our attention to the topic of "Specifications".

Mar 31 2015

After a very busy year and a half, we're nearly done shoring up on new hires here at the Drupal Association. We’ve been working hard to bring in the best talent around, and are thrilled to announce our three new staff members: Matt, Tina, and Brad!

Matt Tsugawa, CFO, Finance and HR Team

Matt (mtsugawa) is joining the Association as our new CFO, where he will be responsible for Finance and HR, and will help develop and drive the strategy of the organization as a member of our leadership team. He brings a rich professional history with him: he has worked in industries across the country and around the world. Early in his career, Matt worked in Japan as a management consultant at the professional services firms KPMG and Arthur Andersen. After spending a few years as an analyst and business development manager in New York at A&E Television Networks, Matt returned to Portland, where he was born and raised.

Most recently, Matt worked in the energy efficiency industry as the Head of Finance. He holds a BA from University of Colorado at Boulder and an MBA from Yale. When not at work, Matt enjoys “managing" his three children and overgrown puppy with his wife, and when not doing that, he is an enthusiastic, if not yet expert, practitioner of Brazilian Jiu Jitsu.

Tina Krauss, DrupalCon Coordinator, Events Team

Tina (tinakrauss) is the newest member of the DrupalCon team, and came on board in mid-March. As a DrupalCon Coordinator, Tina will work with each con’s volunteers, assist in con programming and logistics, and work with website content. Tina is also focused on customer support and responds to tickets submitted to our Contact Us form related to the cons.

A native of Germany, Tina moved to Portland, Oregon several years ago, where she currently resides. In her free time, Tina is an adventurer. She loves to travel around the world -- the farther, the better! She also enjoys outdoor activities like hiking, biking, backpacking, skiing, and more.

Bradley Fields, Content Manager, Marcomm and Membership Team

Bradley (bradleyfields) joins the Marketing and Communications team as Content Manager. He will focus on the planning, creation, and maintenance of content, across all of the Association-managed platforms, that engages and strengthens the Drupal community. For the last six years, he worked to help associations, federal agencies, and universities make their content work better for all sorts of users and audiences.

When he is not at his desk, Bradley is curating Spotify playlists, watching one of his 50+ animated Disney movies, on the hunt for great whisky, or reading Offscreen magazine. He wishes he were Batman, but his superhero powers are definitely still under development.

Mar 31 2015


Last December, I had the privilege of hosting a podcast with guests Holly Ross and Angie Byron on the new funding initiative, Drupal 8 Accelerate. One of the things mentioned in the podcast was the desire to provide a way for businesses to contribute to the fund. Since that time, we at Drupalize.Me have been waiting for the opportunity to donate. Well, that opportunity has arrived! As you might have already seen in this announcement on Lullabot.com, Drupalize.Me and Lullabot together have made a donation of $5,000 to the Drupal 8 Accelerate Fund, becoming an anchor donor of this critical funding initiative. We heartily believe in funding core development and are excited to be a part of providing a much-needed final push to a stable Drupal 8 release.

To learn more about how you can accelerate the release of Drupal 8, check out these resources:

Mar 31 2015

The "Long text and summary" field has a pretty handy formatter called "Summary or trimmed". This will display a summary, if one is supplied, or Drupal will simply trim the text and display it.

The problem with this formatter is that you can't trim the summary. For example, if an editor adds three paragraphs to the summary section, the whole summary is shown. But sometimes you need control over how much of the summary is displayed. This is especially true if your design requires teasers to have a consistent height.

What's the best way of offering a summary to your editors that also trims it? Enter Smart Trim.

The Smart Trim module is an improved version of the "Summary or Trimmed" formatter and a whole lot more. It does a lot of useful stuff, but the one we want to discuss is the ability to trim summaries.

Getting Started

To get going, simply download and install the Smart Trim module. No other modules are required; just enable it and you're good to go.

If you use Drush, run the following commands:

drush dl smart_trim
drush en smart_trim

How to Control Summaries

Let's first examine the problem. It'll make more sense if you can see it in action.

Go ahead and create an article with a lot of text in the summary section.

Fig 1.0

Now, if you view the article teaser, you'll see the whole summary is being displayed.

Fig 1.1

For most websites this is fine. An editor can add a summary if needed; if no summary is supplied, Drupal will generate one by trimming the body.

But what if you want to trim the text in the summary and not display the whole thing? This is where Smart Trim can help.

How to Use Smart Trim

Let's configure the article teaser so it trims the summary to the first 600 characters; this will make the size of the teaser more predictable. An editor cannot then accidentally enter five paragraphs and wonder why the teaser listing is so long.

1. Go to Structure, "Content types" and click on "manage display" on the Article row.

2. Because we want to change the teaser, click on Teaser in the top right corner.

Fig 1.2

3. From the Format drop-down on the Body row select "Smart trimmed".

Fig 1.3

4. Click on the cogwheel and from the Summary drop-down list, select "Use summary if present, honor trim settings", then click on Update. Don't forget to click on Save on the Teaser page.

Fig 1.4

We configured the formatter to apply the trim settings on a summary if present. This is exactly what we want.

5. Now if you view the teaser, it'll be trimmed after 600 characters.

Fig 1.5

Trim Units

Another useful feature that Smart Trim offers is the ability to change the trim unit. This allows you to trim by number of characters or words.

Just select the trim unit when configuring the formatter.

Fig 1.6

Conclusion

As we've seen, Smart Trim is a pretty versatile module. It does one thing and does it well: it trims text. The major functions of the module are, as we learnt, trimming summaries and changing the trim unit. These two things make the module very powerful.

Mar 31 2015

Start: 2015-04-01 (All day) America/New_York

The monthly Drupal core bug fix/feature release window is this Wednesday, April 1, and since it has been a while since the last one, I plan to release Drupal 7.36 on that date.

The final patches for 7.36 have been committed and the code is frozen (excluding documentation fixes and fixes for any regressions that may be found in the next couple days). So, now is a wonderful time to update your development/staging servers to the latest 7.x code and help us catch any regressions in advance.

There are three relevant change records for Drupal 7.36 which are listed below. This is not the full list of changes, rather only a list of notable API additions and other changes that might affect a number of other modules, so it's a good place to start looking for any problems:

You might also be interested in the tentative CHANGELOG.txt for Drupal 7.36 and the corresponding list of important issues that will be highlighted in the Drupal 7.36 release notes.

If you do find any regressions, please report them in the issue queue. Thanks!

Upcoming release windows after this week include:

  • Wednesday, April 15 (security release window)
  • Wednesday, May 6 (bug fix/feature release window)

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Mar 31 2015

Store locators are a useful feature for businesses that have multiple outlets. Drupal has a number of map rendering modules that allow us to provide store locator functionality. This article will cover the basics of setting up a simple store locator with proximity search.

Create and set up the location content type

Required modules

  1. Install the required modules.
    drush dl addressfield geocoder geofield geophp ctools -y
  2. Enable the required modules.
    drush en addressfield geocoder geofield geofield_map geophp ctools -y
  3. Go to admin/structure/types/add and create your location content type.
  4. Add a new field for Address.

    Click Save, then click Save field settings. You can adjust the default settings to suit your locale, if you wish, then click the Save settings button.

  5. Add new field for Position.

    Click Save, then click Save field settings.

    Select Address from the drop-down for the Geocode from field option, and select Google Geocoder for the Geocoder option. You can tweak the other default settings, if you wish.

  6. Optional: to set up the display for the new content type, install Display Suite:
    drush dl ds -y
  7. Enable Display Suite and Display Suite UI:
    drush en ds ds_ui -y
  8. Go to admin/structure/types/manage/location/display and activate Display Suite settings for your new content type by choosing a layout and clicking Save. I'm using One column for this example.
  9. Select the fields you want displayed and click Save.
  10. Do the same for any other view modes you will be using.
  11. If you chose not to use Display Suite, you still need to make sure the Format for the Position field is set to Geofield Map. If you do not see the Geofield Map option in the drop-down, check that the Geofield Map module is enabled. This module is part of the Geofield module.

Importing Location data using feeds

If you have a lot of data, it doesn’t make sense to enter each location manually. I suggest using Feeds to import the data instead. This particular example uses data from a spreadsheet, which is easily converted to CSV via Excel. For setting up feeds in other formats, refer to my previous post on Feeds.

  1. Install the Feeds module.
    drush dl feeds -y
  2. Enable Feeds, Feeds Importer and Feeds UI.
    drush en feeds feeds_importer feeds_ui -y
  3. Go to admin/structure/feeds and click on Add importer.
  4. Under Basic settings, select Off for the Periodic import option.
  5. Change the Fetcher to File upload. You can retain the default settings for this.
  6. Change the Parser to CSV parser. You can keep the default settings for this as well.
  7. Keep the Processor as Node processor and under Bundle, select the new content type you created earlier. You can keep the default settings, if you wish.
  8. For Mapping, ensure all the fields in your data set are mapped out accordingly, with the headers of your CSV file matching the SOURCE exactly. My dataset has the following field mapping:

    With reference to the official documentation, take note of the following:

    • Always supply country values as two-character ISO 3166-1 country codes.
    • Address components are as follows:
      • Address: Country => Country
      • Address: Administrative area => State
      • Address: Locality => City
      • Address: Postal code => Postal Code
      • Address: Thoroughfare => Address 1
      • Address: Premise => Address 2
  9. Go to import and select the importer you just created.
  10. Import your CSV file. Cross your fingers and hope everything imports successfully.

Create and set up location views

Required modules

Part 1: Location listing

  1. Install the required modules.
    drush dl views leaflet libraries entity ip_geoloc -y
  2. Enable the required modules.
    drush en views views_ui leaflet leaflet_views ip_geoloc libraries entity -y
  3. Create a libraries folder in the sites/all folder. Download the Leaflet JavaScript library and extract the files to the libraries folder. Ensure the folder name is leaflet.
  4. Go to admin/structure/views/add and create a new view for the Location content type. Check Create a page and fill in the fields as you see fit, then click Continue & edit. These options can be changed on the next screen.
  5. Under Format, change the Show options to Fields.
  6. Add a Rendered Node field. Click on Add and type Rendered Node in the search filter. Check Content: Rendered Node and click Apply.
  7. Select Show complete entity under Display and choose the view mode you used for displaying your fields when you set up the Location content type.
  8. Add a Proximity field. Click on Add and type Proximity in the search filter. Check Content: Position (field_position) - proximity and click Apply. Adjust the field settings as you see fit. I recommend checking the Round option and setting Precision to 2, as the default gives a long string of decimal points. Set the Source of Origin Point to Exposed Geofield Proximity Filter.
  9. Add a Proximity filter. Under Filter, click on Add and type Proximity in the search filter. Check Content: Position (field_position) - proximity and click Apply.
  10. Check Expose this filter to visitors. Change the Label if you need to; this field can be left blank. Set the Operator to is less than or equal to and enter the starting value in the Proximity Search field.
  11. Remove all existing Sort Criteria. Click on Add and type Proximity in the search filter. Check Content: Position (field_position) - proximity and click Apply. Select Sort ascending, and under Source of Origin Point, select Exposed Geofield Proximity Filter.
  12. Go to the path of your views page to check that the listing is rendering correctly. Test the proximity search by typing a location into the exposed filter.

Part 2: Map display

  1. Add a new Attachment view display to the Location view. Map view display
  2. Add a Position field. Click on Add and type Position in the search filter. Check Content: Position and click Apply.
  3. Check Exclude from display. This field is used for plotting the locations on the map. Pick Latitude/Longitude as the formatter and click Apply.
  4. Under Format, choose This attachment (override), select Map (Leaflet API, via IPGV&M) and click Apply.
  5. Adjust the height of the map as you see fit. Under Name of latitude field in Views query, select Content: Position, the field you just added.
  6. The location marker styles can be customised and the help text provides detailed information on how to do that. For this example, I chose green as the default marker and left the Visitor marker as default, so they are differentiated.
  7. Under Map centering options, select Center the map on visitor’s current location.
  8. Under No locations behaviour, enter visitor so the map will centre on the user’s location when no results are found.
  9. Click on More map options to reveal the map zoom settings. For this example, the default Initial zoom level was too low, and I set it to 15 instead.
  10. There are many customisation options that IP Geolocation provides, and you can tweak them to suit your needs. Click Apply when done.
  11. Under Attachment settings, attach the display to the listing view created in Part 1. Ensure that Inherit exposed filters is set to Yes.
  12. Go to the views page URL and check that your map is rendering correctly.

Next steps

Once everything is rendering correctly, it’s just a matter of theming the views to look like your design.

Theming before and after

This was pretty much the summary of how I implemented IP Geolocation and Leaflet for Battlehack. I was quite satisfied with the end result as the map was smooth and responsive. If your project requires map rendering, why not give this combination a try?

Mar 30 2015
Mar 30

Submitted by ansondparker on Mon, 03/30/2015 - 15:08

When working with government agencies the sacred form may raise its fugly formatted head now and again.  Despite attempts at logic ("Wouldn't an XLS spreadsheet be easier for everyone?"), it sometimes comes down to what's simpler - gettin' er done vs doin' it right.... and if no one really cares about doin' it right, gettin' er done becomes the (sloppy) way, (half)truth, and (dim) light....

So yeah - I had a form that needed to be pixel perfect so that a state-wide agency could print the forms up and store them in a manila folder... I started working with Views PDF. This did generate PDFs... and along with mimemail and rules we were sending PDFs out... but they just weren't looking like folks wanted them... FillPDF - thank you.

To use FillPDF we started by installing pdftk (apt-get install pdftk on Ubuntu) and then installing the module as per usual... here's the rest step-by-step
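
In shell terms, that setup boils down to something like the following sketch (the drush steps are my assumption of "as per usual"):

$ sudo apt-get install pdftk     # the local PDF toolkit backend FillPDF can use
$ drush dl fillpdf               # download the FillPDF module
$ drush en -y fillpdf            # enable it, then configure as usual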

Anyhow - the net result was a bunch of pdfs with the proper fields filled in and all pixel pretty and such.... a miserable project made significantly less miserable - thanks to the good folks who keep FillPDF rolling!

Mar 30 2015
Mar 30

Drupal 8 Accelerate benefit concert

What’s the most clever way to raise funds for Drupal? Ask Ralf Hendel of Comm Press in Hamburg, Germany.

The Drupal Association is working with the Drupal 8 branch maintainers to provide $250,000 in Drupal 8 Acceleration Grants that will be awarded to individuals and groups, helping them get Drupal 8 from beta to release.

Now The Association needs to raise the funds to support the grants and we are working with the Association Board to kick off a D8 Accelerate fundraiser. We are asking community members to help out and donate here. We all want to get D8 released so we can enjoy all the launch parties!

The good news is that we only need to raise $125,000 as a community because all donations will be matched! The Association contributed $62,500 and the Association Board raised another $62,500 from Anchor Donors: Acquia, Appnovation, Drupalize.me by Lullabot, Palantir.net, Phase2, PreviousNext, and Wunderkraut.

Having Anchor Partners means…
Every dollar you donate is matched, doubling your impact.

Ralf Hendel, CEO of Comm Press, heard the call and took action in the most creative way. Over the last few years, Ralf learned how to play the piano (very well I might add) and he recently held a benefit recital where he played Bach, Schubert, and Skrjabin. Those attending were asked to donate to D8 Accelerate and together they raised €345. Of course, with the matching funds from our Anchor Partners, that contribution is €690.

At The Drupal Association, we are always amazed at the many talents our community has and we are especially thankful to Ralf for sharing his passion for music and Drupal with others and raising these funds.

There are so many clever ways to raise funds to help D8 get across the finish line. What ideas do you have? Or if you feel like going the traditional route, you can donate here.

Thanks for considering this opportunity to get Drupal 8 released sooner. If you would like to learn more about how D8 Accelerate grants are being given out, please read Angie Byron’s blog post.

Mar 30 2015
Mar 30

The first requirement of a registration system is to have something to reserve.

The second requirement of a registration system is to manage conflicting reservations.

Setting up validation of submitted reservations based on the existing reservation nodes was probably the most complex part of this project. A booking module like MERCI has this functionality baked in — but again, that was too heavy for us so we had to do it on our own. We started off on a fairly thankless path of Views/Rules integration. Basically we were building a view of existing reservations, contextually filtering that view by content id (the item that someone was trying to reserve) and then setting a rule that would delete the content and redirect the user to a “oops that’s not available” page. We ran into issues building the view contextually with rules (for some reason the rule wouldn’t pass the nid …) and even if we would have got that wired up, it would have been clunky.

Scrap that.

On to Field Validation.

The Field Validation module offers form validation (not to be confused with Clientside Validation or Webform Validation) based on any number of conditions at the field level. We were trying to validate the submitted reservation on length (no longer than 5 days) and availability (no reservations of the same item during any of the days requested).

The length turned out to be pretty straightforward — we set the “Date range2” field validator on the reservation date field. The validator lets you choose a “global” date format, which means you can input logic like “+ X days” so long as it can be converted by the strtotime() function.

field validation supports global date values, which means anything that the strtotime() PHP function can convert
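
As a quick illustration of what that means in practice (the values here are made up), anything strtotime() can parse is usable as a range bound:

<?php
// strtotime() turns "global" date expressions into timestamps.
$five_days_out = strtotime('+5 days');    // relative expression
$fixed_date = strtotime('2015-03-30');    // absolute dates also work
// Anything it can parse is fair game for the validator's range settings.
print date('Y-m-d', $five_days_out);
?>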

Field Validation also gives you configurations to bypass the validation criteria by role — this was helpful in our case given that there are special circumstances when “approved” reservations can be made for longer than 5 days. And if something doesn’t validate, you can plug in a custom error message in the validator configuration.

bypass length restriction so staff can accommodate special cases for longer reservations, and field validation also lets you write a custom error message

With the condition set for the length of the reservation, we could tackle the real beast. Determining reservation conflicts required us to use the “powerfull [sic] but dangerous” PHP validator from Field Validation. Squirting custom code into our Drupal instance is something we try to avoid as much as possible — it’s difficult to maintain … and as you’ll see below it can be difficult to understand. To be honest, a big part of the impetus for writing this series of blog posts was to document the 60+ lines of code that we strung together to get our booking system to recognize conflicts.

The script starts by identifying information about the item that the patron is trying to reserve (item = $arg1, checkout date = $arg2, return date = $arg3) and then builds an array of dates from the start to the finish of the requested reservation. Then we use EntityFieldQuery() to find all of the reservations that have dates less than or equal to the requested end date. That’s where we use fieldCondition() with <= against the $arg3_for_fc value. What that gives us is all of the reservations on that item that could possibly conflict. Then we sort descending and trim the top value out of the list to get the nearest reservation to the date requested. With that record in hand, we can build another array of start and end dates and use array_intersect() to see if there is any overlap.

I bet that was fun to read.

I’ll leave you with the code and comments:

<?php
// Find arguments from the nid and dates of the requested reservation.
$arg1 = $this->entity->field_equipmentt_item['und'][0]['target_id'];

$arg2 = $this->entity->field_reservation_date['und'][0]['value'];
$arg2 = new DateTime($arg2);

$arg3 = $this->entity->field_reservation_date['und'][0]['value2'];
$arg3 = new DateTime($arg3);
$arg3_for_fc = $arg3->format("Ymd");


// Build out an array of requested dates for comparison with the existing reservation.
$beginning = $arg2;
$ending = $arg3;
$ending = $ending->modify('+1 day');

$argumentinterval = new DateInterval('P1D');
$argumentdaterange = new DatePeriod($beginning, $argumentinterval, $ending);

$arraydates = array();
foreach ($argumentdaterange as $argumentdates) {
  $arraydates[] = $argumentdates->format("Ymd");
}

// Execute an EntityFieldQuery to find the most recent reservation that could conflict.

$query = new EntityFieldQuery();

$fullquery = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'reservation')
  ->propertyCondition('status', NODE_PUBLISHED)
  ->fieldCondition('field_equipmentt_item', 'target_id', $arg1, '=')
  ->fieldCondition('field_reservation_date', 'value', $arg3_for_fc, '<=')
  ->fieldOrderBy('field_reservation_date', 'value', 'desc')
  ->range(0, 1);

$fetchrecords = $fullquery->execute();

// Only look for conflicts if a potentially conflicting reservation exists.
if (isset($fetchrecords['node'])) {
  $reservation_nids = array_keys($fetchrecords['node']);
  $reservations = entity_load('node', $reservation_nids);

  // Find the object for the nearest reservation from the top.
  $reservations_test = array_slice($reservations, 0, 1);

  // Parse and record values for its dates.
  $startdate = $reservations_test[0]->field_reservation_date['und'][0]['value'];
  $enddate = $reservations_test[0]->field_reservation_date['und'][0]['value2'];

  // Iterate through to create the interval date array.
  $begin = new DateTime($startdate);
  $end = new DateTime($enddate);
  $end = $end->modify('+1 day');

  $interval = new DateInterval('P1D');
  $daterange = new DatePeriod($begin, $interval, $end);

  $arraydates2 = array();
  foreach ($daterange as $date) {
    $arraydates2[] = $date->format("Ymd");
  }

  // Any date shared by both arrays is a conflict.
  $conflicts = array_intersect($arraydates, $arraydates2);

  if ($conflicts != NULL) {
    $this->set_error();
  }
}
?>
Mar 30 2015
Mar 30

Within the Lift ecosystem, "contexts" can be thought of as pre-defined functionality that makes data available to the personalization tools, when that data exists in the current state (of the site/user/environment/whatever else).

Use cases

The simplest use of contexts is in mapping their data to User Defined Fields (UDFs) on /admin/config/content/personalize/acquia_lift_profiles. When the context is available, its data is assigned to a UDF field and included with Lift requests. For example, the Personalize URL Context module (part of the Personalize suite) does exactly this with query string contexts.

First steps

The first thing to do is to implement hook_ctools_plugin_api() and hook_personalize_visitor_context(). These will make the Personalize module aware of your code, and will allow it to load your context declaration class.

Our module is called yuba_lift:

/**
 * Implements hook_ctools_plugin_api().
 */
function yuba_lift_ctools_plugin_api($owner, $api) {
  if ($owner == 'personalize' && $api == 'personalize') {
    return array('version' => 1);
  }
}

/**
 * Implements hook_personalize_visitor_context().
 */
function yuba_lift_personalize_visitor_context() {
  $info = array();
  $path = drupal_get_path('module', 'yuba_lift') . '/plugins';

  $info['yuba_lift'] = array(
    'path' => $path . '/visitor_context',
    'handler' => array(
      'file' => 'YubaLift.inc',
      'class' => 'YubaLift',
    ),
  );

  return $info;
}

The latter hook tells Personalize that we have a class called YubaLift located at /plugins/visitor_context/YubaLift.inc (relative to our module's folder).

The context class

Our context class must extend the abstract PersonalizeContextBase and implement a couple required methods:

<?php
/**
 * @file
 * Provides a visitor context plugin for Custom Yuba data.
 */
class YubaLift extends PersonalizeContextBase {

  /**
   * Implements PersonalizeContextInterface::create().
   */
  public static function create(PersonalizeAgentInterface $agent = NULL, $selected_context = array()) {
    return new self($agent, $selected_context);
  }

  /**
   * Implements PersonalizeContextInterface::getOptions().
   */
  public static function getOptions() {
    $options = array();

    $options['car_color']   = array('name' => t('Car color'));
    $options['destination'] = array('name' => t('Destination'));

    foreach ($options as &$option) {
      $option['group'] = t('Yuba');
    }

    return $options;
  }
}

The getOptions method is what we're interested in; it returns an array of context options (individual items that can be assigned to UDF fields, among other uses). The options are grouped into a 'Yuba' group, which will be visible in the UDF selects.

With this code in place (and cache cleared - for the hooks above), the 'Yuba' group and its context options become available for mapping to UDFs.

Values for options

The context options now need actual values. This is achieved by providing those values to an appropriate JavaScript object. We'll do this in hook_page_build().

/**
 * Implements hook_page_build().
 */
function yuba_lift_page_build(&$page) {
  // Build values corresponding to our context options.
  $values = array(
    'car_color' => t('Red'),
    'destination' => t('Beach'),
  );

  // Add the options' values to JS data, and load a separate JS file.
  $page['page_top']['yuba_lift'] = array(
    '#attached' => array(
      'js' => array(
        drupal_get_path('module', 'yuba_lift') . '/js/yuba_lift.js' => array(),
        array(
          'data' => array(
            'yuba_lift' => array(
              'contexts' => $values,
            ),
          ),
          'type' => 'setting',
        ),
      ),
    ),
  );
}

In the example above we hardcoded our values. In real use cases, the context options' values would vary from page to page, or be entirely omitted (when they're not appropriate) - this will, of course, be specific to your individual application.

With the values in place, we add them to a JS setting (Drupal.settings.yuba_lift.contexts), and also load a JS file. You could store the values in any arbitrary JS variable, but it will need to be accessible from the JS file we're about to create.
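
In a real implementation, the hardcoded $values array above might instead come from a helper along these lines (a sketch only; the field_car_color field is hypothetical):

<?php
/**
 * Example only: derive context option values from the current page.
 */
function yuba_lift_build_context_values() {
  $values = array();
  // Only provide the car_color context on full node pages that carry the field.
  if ($node = menu_get_object()) {
    $items = field_get_items('node', $node, 'field_car_color');
    if (!empty($items)) {
      $values['car_color'] = $items[0]['value'];
    }
  }
  return $values;
}
?>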

The JavaScript

The last piece of the puzzle is creating a new object within Drupal.personalize.visitor_context that will implement the getContext method. This method will look at the enabled contexts (provided via a parameter), and map them to the appropriate values (which were passed via hook_page_build() above):

(function ($) {

  /**
   * Visitor Context object.
   * Code is mostly pulled together from Personalize modules.
   */
  Drupal.personalize = Drupal.personalize || {};
  Drupal.personalize.visitor_context = Drupal.personalize.visitor_context || {};
  Drupal.personalize.visitor_context.yuba_lift = {
    'getContext': function (enabled) {
      if (!Drupal.settings.hasOwnProperty('yuba_lift')) {
        return [];
      }

      var i = 0;
      var context_values = {};

      for (i in enabled) {
        if (enabled.hasOwnProperty(i) && Drupal.settings.yuba_lift.contexts.hasOwnProperty(i)) {
          context_values[i] = Drupal.settings.yuba_lift.contexts[i];
        }
      }

      return context_values;
    }
  };

})(jQuery);

That's it! You'll now see your UDF values showing up in Lift requests. You may also want to create new column(s) for the custom UDF mappings in your Lift admin interface.

You can grab the completed module from my GitHub.

Mar 30 2015
Mar 30

So I recently participated in my first ever hackathon over the weekend of March 28. Battlehack Singapore to be exact (oddly, there was another hackathon taking place at the same time). A UX designer friend of mine had told me about the event and asked if I wanted to join as a team.
Me: Is there gonna be food at this thing?
Her: Erm…yes.
Me: Sold!
Joking aside, I’d never done a hackathon before and thought it’d be fun to try. We managed to recruit another friend and went as a team of three.

Battlehack Singapore 2015

The idea

The theme of the hackathon was to solve a local or global problem so before the event, we kicked around a couple of ideas and settled on a Clinic Finder app. Think Yelp for clinics. Singapore provides a large number of medical schemes that offer subsidised rates for healthcare. Not every clinic is covered by every scheme though, so we thought it’d be good if people could find clinics based on the medical scheme they are covered by.

Of course, there will be people who aren’t covered by any medical scheme at all, like me. But I’ve also had the experience of being brought on a wild goose chase by Google while trying to find an open clinic at 2am. I’d like to think this is a relatable scenario. Being idealistic people, we wanted our app to provide updated information on each clinic, like actual opening hours and phone numbers with real people on the other end of the line. And trust me, we’ve pondered the BIG question of: where will you ever get such data?

The most viable idea we could think of at the time was to work with the relevant government agencies that had access to such data. But since it was a hackathon project, we just wanted to see if we could build out the functionality and make it look decent within 24 hours. Then, there was the decision of which platform the app would run on. Ideally, this would work well on a mobile device, but our team recognised that we didn’t have the capabilities to build out a mobile app in 24 hours.

Our expertise was building Drupal sites. Thus, that would be our best bet to have a working application at the end of 24 hours. Maybe one day we’ll join hackathons to win, but not this time. This time, we just wanted to finish. Gotta know how to crawl before learning to walk, and walk before learning to run.

Day 1

Battlehack Singapore took place at the Cliftons office in the Finexis Building. Rooms on both floors were set up for teams of four, with power points and LAN cables for each member. We took a spot near the wall because there was a nice spot to the side for napping. Turns out there wouldn’t be much of that.

Shortly into the hackathon, we hit our first snag. The internet access went out. Definitely an “Oh, crap” moment for me. I mentioned in my last post how much I used Google throughout the day. I guess the Hackathon Fates decided, no Google for you, kiddo.

No internet also meant no way to download modules. Luckily for me, I had a bunch of local development sites still sitting in my hard drive, and a majority of the module files I needed were in there somewhere. Sure, they were outdated, but beggars can’t be choosers. The organisers were working hard to fix the problem, so I figured I’d just download the newer versions when we got back online. The moral of the story is: Don’t delete all your old development sites, you never know when they might come in handy.

I’ll admit I got a little grumpy about the situation, but pouting wasn’t going to solve anything, so why not take a little time to chill with the Dinosaur Game? Just in case you didn’t know, as of version 39, the guys at Chrome snuck an easter egg into the browser. Useless trivia: I eventually got to a 1045 high score :satisfied:

Chrome Dinosaur Game

We wanted the app to have proximity location capabilities. There are quite a number of solutions for this on Drupal. Coincidentally, I’d listened to the latest episode of Talking Drupal the night before and the topic was Map Rendering. The two modules that stuck in my mind were Leaflet and IP Geolocation as it was mentioned they seemed “smoother”.

The IP Geolocation module had very good integration with Views and the end result (after the all-nighter, of course) was pretty close to the original design we had in mind. Given the tight schedule we had, this was definitely a plus. The only custom code I had to write was a couple of minor tweaks to facilitate theming: one to add a placeholder attribute to the search filter, and another to add CSS classes to boolean fields based on their values.


/**
 * Implements hook_form_alter().
 *
 * Adds a placeholder attribute to the search boxes.
 */
function custom_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == "views_exposed_form") {
    if (isset($form['field_geofield_distance'])) {
      $form['field_geofield_distance']['#origin_options']['#attributes'] = array('placeholder' => array(t('Enter Postal Code/Street Name')));
    }
    if (isset($form['field_medical_scheme_tid'])) {
      $form['field_medical_scheme_tid']['#options']['All'] = t('Medical Scheme');
    }
  }
}

/**
 * Implements template_preprocess_field().
 */
function clinicfinder_preprocess_field(&$variables) {
  // Check to see if the field is a boolean.
  if ($variables['element']['#field_type'] == 'list_boolean') {
    // Check to see if the value is TRUE.
    if ($variables['element']['#items'][0]['value'] == '1') {
      // Add the class .is-true.
      $variables['classes_array'][] = 'is-true';
    }
    else {
      // Add the class .is-false.
      $variables['classes_array'][] = 'is-false';
    }
  }
}

Even though it was a hackathon, and we were pressed for time, I still tried my best to adhere to Drupal best practices. So the template_preprocess_field went into the template.php file while the hook_form_alter went into a custom module.

Day 2

The presentation at the end of the hackathon was only two minutes long. We figured that as long as we could articulate the app’s key features and demo those features successfully, that would be our pitch. As Sheryl Sandberg said:

Done is better than perfect.

Clinic Finder home page

The Battlehack guys were really helpful in this regard. There were rehearsal slots the next morning for us to present our pitch to a panel of mentors, who’d provide feedback on our idea and presentation pitch. Their suggestion was to get straight to the point on our key feature, the bit about medical schemes, since that was the local problem we were trying to address.

Clinic Finder map page

That was a really good piece of advice, as we watched a number of participants who presented before us run out of time before they got to the best part of their product. We managed to pitch our app within the time and answer the judges’ questions. As expected, we did get the “so where will you get the data?” question, so we talked about partnership with government organisations. Another question we got was about advertising, which tied into a point we didn’t really consider: the sustainability of the app.

Hackathon takeaways

  1. Expect that things may go wrong and adapt accordingly.
  2. Be focused. You only have 24 hours.
  3. Keep your pitch concise. Two minutes goes by quicker than you think.
  4. Unless you’re a ninja coder, you won’t get much sleep.
  5. Consciously remind yourself to be a nice person, especially when you haven’t slept at all.

At the end of the day, we did manage to build a working application in 24 hours and present it on time. Definitely a valuable learning experience. It’s always nice to build something that works, especially if you do it together with friends. Looking forward to the next one.

Mar 29 2015
Mar 29

I’m approaching the 4 year mark at my agency and along with it my 4 year mark working with Drupal. It’s been an interesting journey so far and I’ve learned a fair bit, but, as with anything in technology, there’s still a great deal left to discover. This is my journey so far and a few key points I learned along the way.

Picking up a new technology can be a daunting task; aside from deciding if it fits your needs/requirements, you need to ensure it will be something you enjoy working with as well as something you can make a living from. I never really chose to use Drupal, it just happened to be one of the CMSes my agency uses and as such I picked it up. However, after working with it for the last 4 years I can say that I do enjoy projects using Drupal (though as with any technology it has its cons as well as pros). It also appears to be growing and going from strength to strength, so is likely to be around for a fair while yet.

Getting up to speed at first was tricky; learning how the different elements that make up Drupal slot together took some time, and more than a few attempts (rather like an IKEA flatpack). Helping me along the way were a few resources, such as the documentation, the fantastic community and, of course, the fact that any question is a quick google away from an answer.

The dev tools

As I started to get involved with Drupal I learnt about the tools of the trade. Most noteworthy of these is Drush - an awesome tool I regularly use now. It's a massive time saver at the start of builds for setting up core and modules when used alongside make files, and also useful during development and hugely so when updating core/modules.
It can be downloaded from here: http://docs.drush.org/en/master/install
A library of drush commands is available here: http://www.drushcommands.com
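
By way of example, a typical start-of-build session might look something like this (module names illustrative):

$ drush make project.make mysite   # build core + contrib from a make file
$ drush dl devel coder             # download contrib modules
$ drush en -y devel coder          # enable them
$ drush up                         # check for and apply core/module updates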

Alongside this are developer modules designed to help speed up development and test the code written, such as devel - a suite of modules to help module developers and themers - and coder - a module that scans your code and reports any issues it finds.

Re-inventing the wheel is lengthy and unneeded, and it's the same with writing code that already exists.
http://dropbucket.org provides a code snippet repo that can be searched and added to, useful for adding anything you found helpful or as a starting block/idea for your own code. Each snippet can be tagged and commented upon.

For deployments from development to production sites:

  • Features - The biggest and most documented module. It places config changes into new modules that can be committed, deployed and installed. Each new module can be overridden, deleted and reverted if needed.
  • Configuration Management - A backport of the D8 core module; a ‘features lite’ module, it provides similar functionality, but rather than creating new modules, configurations are saved in tar files. An example of this process can be seen here: https://www.drupal.org/node/1872288

For modules that don’t store these settings in files you can use:

  • Bundle Copy - This provides an export/import (similar to views) for Vocabs, Content Types, Users and fields. The dev version adds support for field collections, cloning content types and commerce entity bundles.
  • Taxonomy CSV import/export - This allows you to import/export vocabs and terms from a csv.

The community

Outside of the CMS I began to get involved in other aspects, namely Drupal.org. Here I was able to talk to other developers, post bug reports (to which I always received a speedy response) and patches where I could. I was also able to create my own module and have it reviewed by other developers before it could be downloaded and used by others. Getting involved in this way has not only allowed me to give back to the community that has helped me, but also allows me to develop further - which can only be a good thing.

Over the last couple of years I’ve also attended DrupalCamp London and a couple of other Drupal events (such as beer and chat), which is a great way to get to know other people and talk all things Drupal.

Mar 29 2015
Mar 29

This is the second in a series of articles involving the writing and launching of my DurableDrupal Lean ebook series website on platform.sh. Since it's a real world application, this article is for real world website and web application developers. If you are starting from scratch but enthusiastic and willing to learn, that means you too. I'm fortunate enough to have their sponsorship and full technical support, so everything in the article has been tested out on the platform. A link will be edited in here as soon as it goes live.

Diving in

Diving right in, I set up a Trello Kanban Board for Project Inception as follows:

Project Inception Kanban

Both Vision (Process, Product) and Candidate Architecture (Process, Product) jobs have been completed, and have been moved to the MVP 1 column. We know what we want to do, and we're doing it with Drupal 7, based on some initial configuration as a starting point (expressed both as an install profile and a drush configuration script). At this point there are three jobs in the To Do column, constituting the remaining preparation for the Team Product Kickoff. And two of them (setup for continuous integration and continuous delivery) are about to be made much easier by virtue of using platform.sh, not only as a home for the production instance, but as a central point of organization for the entire development and deployment process.

Beginning Continuous Integration Workflow

What we'll be doing in this article:

Overcoming the confusion between Continuous Integration (team development with a codebase) and Continuous Delivery (deploying to an environment).

"So what is CI? In short, it is an integration of code into a known or working code base.... The top benefits are to provide fast feed back to the members of the team and to ensure any new changes don’t break the working branch."

"CD... is an automated process to deliver a software package to an environment.... we can now extend the fast feedback loops and reduction of constraints with packaging techniques, automation workflows, and integrated tools that keep track of the software versions in different environments."

"CI and CD are two completely separated practices that are tightly interlocked to create a unified ALM [Application Lifecycle Management] workflow." – Bryan Root

And let's take it step by step by breaking the CI Workflow Job into tasks:

Continuous Integration Tasks

Create Ansible playbook to create local development VM suitable for working with platform.sh

Based on Jeff Geerling's great work both on Ansible itself as well as its use with Drupal provisioning, I whipped up and tested ansible-vm-platformsh. Dependencies:

Clone the playbook from GitHub into a workspace on local dev box and vagrant up

With the dependencies installed, and once the ubuntu/trusty box was downloaded (I had already used it before on several projects), it only took a few minutes to bring up our local dev box:

$ git clone git@github.com:DurableDrupal/ansible-vm-platformsh.git platformsh-vk-sandbox
$ cd platformsh-vk-sandbox
$ vagrant up

The box is brought up in a private network with ip 192.168.19.46 (editable in Vagrantfile). So to be able to bring up the website in a local browser by name, I edited my dev box's /etc/hosts file by including the following line in accordance with the Virtual Host template parameters (editable in provisioning/vars.yml):

$ grep platformsh /etc/hosts
192.168.19.46 platformshvk.dev

ssh into the local dev box, log into platform.sh and manage ssh keys

platformsh-vk is a newly provisioned development box, so the user vagrant hasn't yet got a set of public and private keys for the purpose of securely authenticating onto the platform.sh account using ssh.

We first log in to our newly provisioned local dev box (that's already configured by playbook operations), also using ssh:

$ vagrant ssh
vagrant@vagrant-ubuntu-trusty-64:~$
vagrant@vagrant-ubuntu-trusty-64:~$ ssh-keygen -t rsa -C "[email protected]"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.

The Ansible Playbook has already installed the platform.sh CLI (command-line interface), called, appropriately enough, platform, so we're all set!

vagrant@vagrant-ubuntu-trusty-64:~$ cd /var/www
vagrant@vagrant-ubuntu-trusty-64:/var/www$ ls -l
total 4
drwxr-xr-x 2 root root 4096 Mar 22 16:10 html

The first time you execute the CLI you will be asked to login with your platform.sh account.

vagrant@vagrant-ubuntu-trusty-64:/var/www$ platform
Welcome to Platform.sh!
Please log in using your Platform.sh account
Your email address: [email protected]
Your password:
Thank you, you are all set.
Your projects are:
+---------------+---------------------+-------------------------------------------------+
| ID            | Name                | URL                                             |
+---------------+---------------------+-------------------------------------------------+
| myproject | Victor Kane Sandbox | https://us.platform.sh/#/projects/myproject |
+---------------+---------------------+-------------------------------------------------+
Get a project by running platform get [id].
List a project's environments by running platform environments.
Manage your SSH keys by running platform ssh-keys.
Type platform list to see all available commands

Now, you can manage your keys from your online sandbox on platform.sh. But by using the platform CLI you can do anything from your local dev box that can be done online. To upload your public key, we did:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform ssh-keys
Add a new SSH key by running platform ssh-key:add [path]
Delete an SSH key by running platform ssh-key:delete [id]
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform ssh-key:add ~/.ssh/id_rsa.pub
Enter a name for the key: vkvm

In a single line:

$ platform ssh-key:add --name=vkvm0323 ~/.ssh/id_rsa.pub
The SSH key id_rsa.pub has been successfully added to your Platform.sh account

Get the project and sync the platform server and local databases

The Ansible playbook has already set up the virtual host to be edited in the file /etc/apache2/sites-available/platformshvk.dev.conf and will expect the project to be cloned under /var/www. So to get the project from the platform server, we first locate ourselves at /var/www/platformsh-vk-dev and then get the project. Permissions have already been taken care of in this directory anticipating that we are operating locally as the user vagrant in the dev box.

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev$ platform get myproject
  Cloning into 'myproject/repository'...
  Warning: Permanently added the RSA host key for IP address '54.210.49.244' to the list of known hosts.
Downloaded myproject to myproject
Building application php using the toolstack php:drupal
  Beginning to build                                                   [ok]
  /var/www/platformsh-vk-dev/myproject/repository/project.make.
  drupal-7.34 downloaded.                                              [ok]
  drupal patched with                                                  [ok]
  install-redirect-on-empty-database-728702-36.patch.
  Generated PATCHES.txt file for drupal                                [ok]
  platform-7.x-1.3 downloaded.                                         [ok]
Saving build archive...
Creating file: /var/www/platformsh-vk-dev/myproject/shared/settings.local.php
Edit this file to add your database credentials and other Drupal configuration.
Creating directory: /var/www/platformsh-vk-dev/myproject/shared/files
This is where Drupal can store public files.
Symlinking files from the 'shared' directory to sites/default
Build complete for application php

In order to sync the local database (already created with the Ansible playbook, you guessed it!) from the platform server, we must first enter the credentials (user root and name taken from the domain variable on line 105 of provisioning/playbook.yml) into the settings.local.php file created in the shared sub-directory of the project build.

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/shared$ cat settings.local.php
<?php

// Database configuration.
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'host' => 'localhost',
  'username' => 'root',
  'password' => '',
  'database' => 'platformshvk',
  'prefix' => '',
);

Now let's grab the drush aliases and sync the databases! First I did a remote drush status on the platform:

$ platform drush status

Then I grabbed the aliases:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ platform drush-aliases
Aliases for Victor Kane Sandbox (myproject):
    @myproject._local
    @myproject.master

And used them to sync the local dev box database with what is on the platform server:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ drush sql-sync @myproject.master @myproject._local
WARNING:  Using temporary files to store and transfer sql-dump.  It is recommended that you specify --source-dump and --target-dump options on the command line, or set '%dump' or '%dump-dir' in the path-aliases section of your site alias records. This facilitates fast file transfer via rsync.
You will destroy data in platformshvk and replace with data from ssh.us.platform.sh/main.
You might want to make a backup first, using the sql-dump command.
Do you really want to continue? (y/n): y
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$

Configure local dev virtual host and bring up website

We can now confirm that our local instance is alive and well via a drush status:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject$ cd www
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/www$ drush status
 Drupal version                  :  7.34
 Site URI                        :  http://default
 Database driver                 :  mysql
 Database username               :  root
 Database name                   :  platformshvk
 Database                        :  Connected
 Drupal bootstrap                :  Successful
 Drupal user                     :  Anonymous
 Default theme                   :  bartik
 Administration theme            :  seven
 PHP executable                  :  /usr/bin/php
 PHP configuration               :  /etc/php5/cli/php.ini
 PHP OS                          :  Linux
 Drush version                   :  6.6-dev
 Drush configuration             :
 Drush alias files               :  /home/vagrant/.drush/myproject.aliases.drushrc.php
 Drupal root                     :  /var/www/platformsh-vk-dev/myproject/builds/2015-03-23--11-27-44--master
 Site path                       :  sites/default
 File directory path             :  sites/default/files
 Temporary file directory path   :  /tmp

We now configure the virtual host on the dev box to take into account the project name. After editing:

$ sudo vi /etc/apache2/sites-available/platformshvk.dev.conf
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName platformshvk.dev
    ServerAlias www.platformshvk.dev
    DocumentRoot /var/www/platformsh-vk-dev/myproject/www
    <Directory "/var/www/platformsh-vk-dev/myproject/www">
        Options FollowSymLinks Indexes
        AllowOverride All
    </Directory>
</VirtualHost> 

After reloading the Apache server configuration, we can point our browser at the local web server:

Local website instance

Exercise CI workflow by doing an upgrade and pushing via repo to platform

First we find out what needs updating

$ drush pm-update
Update information last refreshed: Sun, 03/22/2015 - 19:21
 Name    Installed Version  Proposed version  Message
 Drupal  7.34               7.35              SECURITY UPDATE available

Don't update with drush! Instead, edit the drush make file and rebuild locally to test.

We edit the drush make file with the new Drupal core version:

$ cd repository
$ vi project.make
api = 2
core = 7.x
; Drupal core.
projects[drupal][type] = core
projects[drupal][version] = 7.35

We build locally to test

$ platform project:build
Building application php using the toolstack php:drupal
  Beginning to build                                                   [ok]
  /var/www/platformsh-vk-dev/myproject/repository/project.make.
  drupal-7.35 downloaded.                                              [ok]
  drupal patched with                                                  [ok]
  install-redirect-on-empty-database-728702-36.patch.
  Generated PATCHES.txt file for drupal                                [ok]
  platform-7.x-1.3 downloaded.                                         [ok]
Saving build archive...
Symlinking files from the 'shared' directory to sites/default
Build complete for application php

We confirm interactively in the browser that the version has indeed been updated.

Push to platform (automatically rebuilds everything on master!)

Wow! Talk about “everything in code”:

vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git config --global user.email [email protected]
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git config --global user.name jimsmith
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git commit -am "Updated Drupal core to 7-35"
[master 6a0f997] Updated Drupal core to 7-35
 1 file changed, 1 insertion(+), 1 deletion(-)
vagrant@vagrant-ubuntu-trusty-64:/var/www/platformsh-vk-dev/myproject/repository$ git push origin master
Counting objects: 5, done.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 295 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
Validating submodules.
Validating configuration files.
Processing activity: **Victor Kane** pushed to **Master**
    Found 1 new commit.
    Building application 'php' with toolstack 'php:drupal' (tree: b53af8a)
      Installing build dependencies...
        Installing php build dependencies: drush/drush
      Making project using Drush make...
        Executing `drush -y make --cache-duration-releasexml=300 --concurrency=8 project.make /app/out/public`...
          Beginning to build project.make.                                            [ok]
          drupal-7.35 downloaded.                                                     [ok]
          drupal patched with                                                         [ok]
          install-redirect-on-empty-database-728702-36.patch.
          Generated PATCHES.txt file for drupal                                       [ok]
          platform-7.x-1.3 downloaded.                                                [ok]
      Moving checkout directory to `/sites/default`.
      Detected a `/sites/default` directory, initializing Drupal-specific files.
      Creating a default `/sites/default/settings.php` file.
      Creating the environment-specific `/sites/default/settings.local.php` file.
      Executing pre-flight checks...
      Compressing application.
      Beaming package to its final destination.
    W: Route 'www.{default}' doesn't map to a domain of the project, mangling the route.
    W: Route '{default}' doesn't map to a domain of the project, mangling the route.
    Re-deploying environment myproject-master.
      Environment configuration:
        php: size M
        mysql: size M
        redis: size M
        solr: size M
      Environment routes:
        http://master-myproject.us.platform.sh/ is served by application `php`
        http://www---master-myproject.us.platform.sh/ redirects to http://master-myproject.us.platform.sh/
        https://master-myproject.us.platform.sh/ is served by application `php`
        https://www---master-myproject.us.platform.sh/ redirects to http://master-myproject.us.platform.sh/
To [email protected]:myproject.git
   f2e12d8..6a0f997  master -> master

We confirm, via a browser pointed at the platform server, that the update has taken effect.

Next time we'll drill down into some serious development workflow by implementing the startup landing page for the website.  


Mar 29 2015
Mar 29

Drupal's Form API has everything that we love about the Drupal framework. It's powerful, flexible, and easily extendable with our custom modules and themes. But let's face it; it's boooorrrrriinnnnggg. Users these days are used to their browsers doing the heavy lifting. Page reloads are becoming fewer and fewer, especially when we are expecting our users to take action on our websites. If we are asking our visitors to take time out of their day to fill out a form on our website, that form should be intuitive, easy to use, and not distracting.

Lucky for us, the Form API has the ability to magically transform our forms into silky smooth ajax enabled interfaces using the #ajax property in the form array. The #ajax property allows us to jump in at any point in the form's render array and add some javascript goodness to improve user experience.

Progressive Enhancement

The beautiful thing about utilizing the #ajax property in our Drupal forms is that, if done correctly, it degrades gracefully, allowing users with javascript disabled to use our form without issue. "If done correctly" being the operative phrase. The principles of progressive enhancement dictate that the widget must work for everyone before you can begin taking advantage of all of the cool new features available to us as developers. And yes, javascript is still in the "cool new feature" bucket as far as progressive enhancement goes; but that's an argument for a different time :). So this means that the form submission, validation, and messaging must be functional and consistent regardless of javascript being enabled or not. As soon as we throw that #ajax property on the render array, the onus is on us to maintain that consistency.

As an example, let's take a look at the email signup widget from the Listserv module. The form is very basic. It consists of a text field and a submit button. It also has a validation hook to make sure the user is entering a valid email, as well as a submit handler that calls a Listserv function to subscribe the user's email to a listserv.


/**
 * Listserv Subscribe form.
 */
function listserv_subscribe_form($form, &$form_state) {
  $form['email'] = array(
    '#type' => 'textfield',
    '#title' => t('Email'),
    '#required' => TRUE,
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Subscribe'),
  );
  return $form;
}

/**
 * Listserv Subscribe form validate handler.
 */
function listserv_subscribe_form_validate($form, &$form_state) {
  $email = $form_state['values']['email'];
  // Verify that the email address is valid.
  if (!valid_email_address($email)) {
    form_set_error('email', t('You must provide a valid email address.'));
  }
}

/**
 * Listserv Subscribe form submit handler.
 */
function listserv_subscribe_form_submit($form, &$form_state) {
  $email = $form_state['values']['email'];
  listserv_listserv_subscription($email, 'subscribe');
}

On paper, this form does everything we want, but in the wild it isn't all that we had hoped. This is not the type of form that requires its own page. It's meant to be placed in sidebars and footers throughout the site. If we place this in the footer of our homepage, our users aren't going to be happy when we take their email, then refresh the page and change their scroll position.

Let's imagine a user who accidentally leaves a ".com" off the end of their address. The page refreshes and they are returned to the top of the page, totally removed from the flow of content, only to see a message that says "Please enter a valid email address". What are the chances that they actually scroll back down the page to complete that action? I'm no UX expert but I'm guessing it's not good.

AJAX It Up

We can definitely do better. In our theme's template.php file let's add a form alter.


/**
 * Implements hook_form_FORM_ID_alter().
 */
function heymp_form_listserv_subscribe_form_alter(&$form, &$form_state, $form_id) {
  $form['submit']['#ajax'] = array(
    'callback' => 'heymp_ajax_listserv_subscribe_callback',
    'wrapper' => 'listserv-subscribe-form',
  );
}

In this form alter, we are tapping into the specific signup form that we want to enhance. We then add the '#ajax' property to the submit button, which tells the form that when the user clicks the "submit" button, we are going to jump in and handle things with ajax. We define which function is going to handle our logic under 'callback', and we tell the form which DOM element on the page is going to display our results with 'wrapper'.

Now let's add our callback.


/**
 * Callback for heymp_form_listserv_subscribe_form_alter().
 */
function heymp_ajax_listserv_subscribe_callback($form, &$form_state) {
  if (form_get_errors()) {
    // Rebuild the form so it can accept further submissions.
    $form_state['rebuild'] = TRUE;
    $commands = array();
    // Prepend the status messages to the ajax wrapper element.
    $commands[] = ajax_command_prepend(NULL, theme('status_messages'));
    return array('#type' => 'ajax', '#commands' => $commands);
  }
  else {
    // Empty the message queue so messages don't reappear on page reload.
    $system_message = drupal_get_messages();
    return t('Thank you for your submission!');
  }
}

Validation

In this callback we are doing a few things. For one, we are checking to see if there are any form validation errors. The form submission is still using the validation handler from our listserv_subscribe_form_validate function above. If the submission doesn't pass form validation it will throw an error, allowing us to see it when we call form_get_errors() in our ajax callback. If there is an error, we need to tell the form to rebuild itself by setting $form_state['rebuild'] = TRUE. This will allow the form to accept more submissions.

Error messaging

Next we need to handle printing out the error message. Drupal's Ajax framework has some built-in commands for interacting with the DOM without needing to write any javascript at all. By adding operations to the $commands array, we can display our error messaging as well as add and remove elements as needed. In our case we are going to use the ajax_command_prepend function to print a message. We then return the commands in a render array fashion. See the Ajax framework documentation for a full list of command options.
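
For reference, a few of the other commands look like this (the selectors and markup here are illustrative, not part of the Listserv example):

<?php
$commands = array();
// Replace the contents of an element with new markup.
$commands[] = ajax_command_html('#listserv-subscribe-form', t('Thanks!'));
// Remove matching elements from the page.
$commands[] = ajax_command_remove('.messages.error');
// Invoke an arbitrary jQuery method on a selector.
$commands[] = ajax_command_invoke('#listserv-subscribe-form', 'addClass', array('submitted'));
return array('#type' => 'ajax', '#commands' => $commands);
?>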

Override system message

We also need to prevent the system message from printing out our success/error messages when the user does eventually reload the page. Since the messages are sitting in a queue, we can easily empty that queue by calling drupal_get_messages(). The theme('status_messages') function takes care of calling it for us in the error branch, so we need to explicitly call it in the success branch.

Success

If there are no errors, we are just going to return a message that lets our user know that they have successfully completed the operation. The message is returned in whatever element we specified in our ajax 'wrapper' property above, which was the #listserv-subscribe-form element.

Summary

So there you go! With a minimal amount of code we've drastically improved our user experience using the #ajax property in the Forms API.

In Part 2 we'll take a look at ditching the Form API's commands array and writing our own javascript.

Mar 29 2015
Mar 29

In the last episode, we learned about the Drupal Subuser module. In this episode, we continue where we left off but take a look under the hood at the module code of the Drupal Subuser module.

By following along with this episode you will learn some things such as:

  • How to open up a Drupal module file and what to expect
  • How to find and locate an issue within a Drupal module file
  • How modules modify forms with hook_form_alter()
  • How to debug PHP variables in a Drupal module
  • How to test our fix to ensure it works correctly

If you have never seen a Drupal module before this might be a little intimidating and I might go a little fast, but you will still learn a lot. You should be able to start seeing patterns within different Drupal modules on how modules are structured. Good luck and happy module investigating!

Mar 28 2015
Mar 28

After my #epicfail that was BADCamp, to say that I was entering MidCamp with trepidation would be the understatement of the year. Two full days of sessions and a 1-and-1 track record were weighing heavily upon my soul. Add to the mix that I was coming directly off of a 5-day con my company runs, and was responsible for MidCamp venue and catering logistics. Oh right, and I ran out of time to make instructions and train anyone else on setup, which only added to my on-site burden.

Testing is good.

After BADCamp, I added a powered 4-port USB hub to the kits, as well as an accessory pack for the H2N voice recorder, mainly for the powered A/C adapter and remote. All total, these two items bring the current cost of the kit to about $425.

In addition, at one of our venue walk-throughs, I was able to actually test the kits with the projectors UIC would be using. The units in two of the rooms had an unexplainable random few-second blackout of the screens, but the records were good and the rest of the rooms checked out.

Success.

After the mad scramble setting up three breakout rooms and the main stage leading up to the opening keynote, I can't begin to describe the feeling in the pit of my stomach after I pulled the USB stick after stopping the keynote recording. I can’t begin to describe the elation I felt after seeing a full record, complete with audio.

We hit a few snags with presenters not starting their records (fixable) and older PCs not connecting (possibly fixable), and a couple sessions that didn’t have audio (hello redundancy from the voice recorder). Aside from that, froboy and I were able to trim and upload all the successful records during the Sunday sprint.

A huge shout out also goes to jason.bell for helping me on-site with setups and capture. He helped me during Fox Valley’s camp, so I deputized him as soon as I saw him Friday morning.

Learnings.

With the addition of the powered USB hub, we no longer need to steal any ports from the presenter laptop. For all of the first day, we were unnecessarily hooking up the hub’s USB cable to the presenter laptop. Doing this caused a restart of the record kit. We did lose a session to a presenter laptop going to sleep, and I have to wonder whether we would have still captured it if the hub hadn’t been attached.

The VGA to HDMI dongle is too unreliable to be part of the kit. When used, either there was no connection, or it would cycle between on and off. Most, if not all, machines that didn’t have mini display port or direct HDMI out had full display port. I will be testing a display port to HDMI dongle for a more reliable option.

Redundant audio is essential. The default record format for the voice recorders is a WAV file. These are best quality, but enormous, which is why I failed at capturing most of BADCamp’s audio (RTFM, right?). By changing the settings to 192kbps MP3, two days of session audio barely made a dent in the 2GB cards that are included with the recorders. Thankfully, this saved three session records: two with no audio at all (still a mystery) and one with blown-out audio.

Trimming and combining in YouTube is a thing. Kudos again to froboy for pointing me to YouTube’s editing capabilities. A couple sessions had split records (also a mystery), which we then stitched together after upload, and several sessions needed some pre- or post-record trimming. This can all be done in YouTube instead of using a video editor and re-encoding. Granted, YouTube takes what seems like forever to process, but it works and once you do the editing, you can forget about it.

There is a known issue with mini display port to HDMI where a green tint is added to the output. Setting the external PVR to 720p generally fixed this. There were a couple times where it didn’t, but switching either between direct HDMI or mini display port to HDMI seemed to resolve most of the issues. Sorry for the few presenters that opted for funky colors before we learned this during the camp. The recording is always fine, but the on-site experience is borked.

Finally, we need to tell presenters to adjust their energy saver settings. I take this for granted, because the con my company runs is for marketing people who present frequently, and this is basically just assumed to be set correctly. We are a more casual bunch and don’t fret when the laptop sleeps or the screen saver comes up during a presentation. Just move the cursor and roll with it. But that can kill a record...even with the Drupal Association kits. I do plan to test this, now that I’ve learned we don’t need any power at all from the presenter laptop, but it’s still an easy fix with documentation.

Next steps.

Documentation. I need to make simple instruction sheets to include with the kits. Overall, they are really easy to use and connect, but it’s completely unfamiliar territory. With foolproof instructions, presenters can be at ease and room monitors can be tasked with assisting without fear.

Packaging. With the mad dash to set these up — combined with hourly hookups — these were a hot mess on the podium. I’ll be working to tighten these up so they look less intimidating and take up less space. No idea what this entails yet, so I’ll gladly accept ideas.

Testing. As mentioned, I will test regular display port to HDMI, as well as various sleep states while recording.

Shipping. Because these kits are so light weight, part of the plan is to be able to share them with regional camps. There was a lot of interest from other organizers in these kits during the camp. Someone from Twin Cities even offered to purchase a kit to add to the mix, as long as they could borrow the others. A Pelican box with adjustable inserts would be just the ticket.

Sponsors. If you are willing to help finance this project, please contact me at [email protected]. While Fox Valley Camp owns three kits and MidCamp owns one, wouldn’t it be great to have your branding on these as they make their way around the camp circuit? The equipment costs have (mostly) been reimbursed, but I’ve devoted a lot of time to testing and documenting the process, and will be spending more time with the next steps listed above.

Mar 28 2015
Mar 28

The event was organized by DASA, which is doing a stunning job of gathering and energizing South Africa's Drupal community. From community subjects to security and Drupal 8, we got to see a lekker variety of talks, including those by Michael and me on "Drupal 8" and "How to run a successful Drupal Shop".

Special thanks to the organizers Riaan, Renate, Adam, Greg and Robin. Up next will be DrupalCamp Cape Town in September 2015.

Mar 28 2015
Mar 28

I recently read a very interesting article by a long-time .NET developer comparing the MEAN stack (Mongo-Express-Angular-Node) with traditional .NET application design. (I can't post the link because I'm unable to find the article!!)

Among other things he compared the tedious process of adding a new field in the RDBMS model (modifying the database, then the data layer, then views and controllers), whereas in the MEAN stack it was as simple as adding two lines of code to the UI.

In this article we will see how to get native JSON document support in your Drupal entities (mixing traditional RDBMS storage with NoSQL in the same database) and explore the benefits of having such a combination.

Native JSON support in your current database engine

There's a chance your traditional database engine is already offering NoSQL/Document Storage like capabilities.

In MS SQL Server we have had the XML field type for a long time now. This field type allows you to store XML documents inside an MS SQL Server field and query their contents using XPath, and it's been around since 2005.

This data type was a great candidate to store data for our experiment, until I found out that PHP has crap XML support. I could not find a properly working serialization/deserialization method to convert PHP objects to XML and back (at least not within a reasonable amount of time, say five minutes or less).

So what about native JSON support? PostgreSQL already offers that. MS SQL Server users have been demanding support for JSON since 2011, but MS has still not made any progress on this. I guess they are too focused on making SQL Server work properly (and scale) to support Azure, rather than improving or adding new features.
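For illustration, on PostgreSQL (9.3+) a native JSON member can be extracted right in SQL with the ->> operator; here's a minimal sketch with hypothetical column and key names:

// Extract the "field1" member of a hypothetical "extend" JSON column as text.
$result = db_query("SELECT nid, extend->>'field1' AS field1 FROM {node}");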

Thank God MS SQL Server is extremely well designed and pluggable, and someone came out with a CLR port of the PostgreSQL JSON data type here. In the same way, MS has never supported GROUP_CONCAT and we had to use more CLR to get it working.

Adding JSON support to Drupal's database abstraction layer

Now that we have the capability of storing and querying JSON documents in our RDBMS, let's see how we can support this from Drupal 7 in the least disruptive way possible.

We would have loved to be able to use our JSON property just as a 'serialized' field specification, but to do so we would have to modify the serialization/deserialization logic scattered all over Drupal (and contrib) and add another setting to tell Drupal to use json_decode/json_encode instead of serialize/unserialize.

No problem, we will tell Drupal that our JSON based property is a text field and do the encoding/decoding ourselves when we need it.

The database engine is internally storing JSON fields as binary data.

For INSERTs and UPDATEs there is no issue, as you can pass the string-based document in your statement and the database engine will convert it to the binary internal representation. But when you try to retrieve a JSON field without modifying the database abstraction layer, you won't get the JSON document as expected but the binary data, so we will make a simple fix to the SelectQuery->__toString() method to retrieve these fields as JSON strings:

    foreach ($this->fields as $alias => $field) {
      $table_name = isset($this->tables[$field['table']]['table']) ? $this->tables[$field['table']]['table'] : $field['table'];
      $field_prefix =  (isset($field['table']) ? $this->connection->escapeTable($field['table']) . '.' : '');
      $field_suffix = '';
      $field_name = $field['field'];
      $field_alias = $field['alias'];
      if (isset($field['table'])) {
        $info = $this->connection->schema()->queryColumnInformation($table_name);
        // If we are retrieving a JSON type column, make sure we bring back
        // the string representation and not the binary data!
        if(isset($info['columns'][$field_name]['type']) && $info['columns'][$field_name]['type'] == 'JSON') {
          $field_suffix = '.ToString()';
        }
      }
      $fields[] = $field_prefix . $this->connection->escapeField($field_name) . $field_suffix . ' AS ' . $this->connection->escapeField($field_alias);
    }

Using JSON based fields from Drupal

First of all, we are going to add a storage property (field) named extend to the node entity. Our aim is to be able to store new "fields" inside this JSON field without having to alter the database structure.

To do so, implement an update to alter the database structure (you can use hook_update_N for this):

<?php

namespace Drupal\cerpie\plugins\updates;

use \Drupal\fdf\Update\UpdateGeneric;
use \Drupal\fdf\Update\IUpdate;

class update002 extends UpdateGeneric implements IUpdate {

  /**
   * {@inheritdoc}
   */
  public static function Run(&$sandbox) {
    if (!db_field_exists('node', 'extend')) {
      db_add_field('node', 'extend', array(
        'type' => 'text',
        'length' => 255,
        'not null' => FALSE,
        // Tell the native type!
        'sqlsrv_type' => 'JSON'
      ));
    }
  }
}
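If you prefer plain Drupal 7 update hooks over a custom update framework, the equivalent would be the following sketch (assuming a module named mymodule):

/**
 * Adds the JSON-backed extend column to the node table.
 */
function mymodule_update_7001() {
  if (!db_field_exists('node', 'extend')) {
    db_add_field('node', 'extend', array(
      'type' => 'text',
      'length' => 255,
      'not null' => FALSE,
      // Tell the driver the native type to use.
      'sqlsrv_type' => 'JSON',
    ));
  }
}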

Then with a few hooks let's expose this new property:

/**
 * Implements hook_schema_alter().
 */
function cerpie_schema_alter(&$schema) {
  $schema['node']['fields']['extend'] = array(
    'type' => 'text',
    'length' => 255,
    'not null' => FALSE,
  );
}

/**
 * Implements hook_entity_property_info_alter().
 */
function cerpie_entity_property_info_alter(&$info) {
  $info['node']['properties']['extend'] = array(
    'label' => 'JSON Extend',
    'description' => 'Store additional JSON based data',
    'type' => 'text',
    'sanitize' => 'check_plain'
  );
}

That's basically everything you need to start storing and retrieving data inside a node in JSON format.

To store something inside the JSON document:

      // Fields to store inside the JSON document.
      $mydata = array('field1' => 'data1', 'field2' => 'data2');
      $e = entity_create('node', array ('type' => 'curso'));
      // Specify the author
      $e->uid = $user->uid;
      $e->extend = json_encode($mydata, JSON_UNESCAPED_UNICODE);
      // Create an entity wrapper for the new entity.
      $entity = entity_metadata_wrapper('node', $e);
      $entity->save();

Imagine we now wanted to add some additional data to the node storage (field3 in the example); there is no need to alter the database schema:

      $e = node_load($nid);
      // Decode to an associative array so new keys can be added.
      $mydata = json_decode($e->extend, TRUE);
      // Fields to store inside the JSON document.
      $mydata['field3'] = 'data3';
      $e->extend = json_encode($mydata, JSON_UNESCAPED_UNICODE);
      // Create an entity wrapper and save the entity.
      $entity = entity_metadata_wrapper('node', $e);
      $entity->save();

Up to now there's nothing new we could not have done with a string based storage field. The power is unleashed when we are able to query/retrieve the JSON based data atomically.

What we have now is just something that looks like text-based storage, but is natively backed by a JSON data type.

Exposing JSON based fields to Views

We are going to create a views field handler that will allow us to expose the json based fields as regular fields (sort of - remember this is a proof of concept).

Tell Views that our module will be using its API:

/**
 * Implements hook_views_api().
 */
function cerpie_views_api() {
  return array('api' => 3);
}

Create a *.views.inc file that consumes hook_views_data_alter():

function cerpie_views_data_alter(&$data) {
  $data['node']['extend'] = array(
    'title' => t('Entity Json Extend'),
    'help' => t('Entity Json Stored Extended Properties'),
    'field' => array(
      'help' => t(''),
      'handler' => '\\Drupal\\cerpie\\views\\JsonExtendDataFieldHandler',
    )
  );
}

Now implement our JsonExtendDataFieldHandler class:

<?php

namespace Drupal\cerpie\views;

class JsonExtendDataFieldHandler extends \views_handler_field {
  /**
   * Implements views_handler_field#query().
   *
   * @see views_php_views_pre_execute()
   */
  function query() {
    $this->field_alias = 'extend_json_' . $this->position;
    $this->query->fields[$this->field_alias] = array(
      // TODO: This is user input directly in the query!! find something else...
      'field' => '[extend].Get(\'' . $this->options['selector'] . '\')', 
      'table' => NULL,
      'alias' => $this->field_alias);
  }
  
  /**
   * Implements views_handler_field#pre_render().
   */
  function pre_render(&$values) {
  }

  /**
   * Default options form.
   */
  function option_definition() {
    $options = parent::option_definition();
    $options['selector'] = array('default' => '$');
    return $options;
  }
  
  /**
   * Creates the form item for the options added.
   */
  function options_form(&$form, &$form_state) {
    parent::options_form($form, $form_state);
    
    $form['selector'] = array(
      '#type' => 'textfield',
      '#title' => t('Json Selector'),
      '#default_value' => $this->options['selector'],
      '#description' => t('Use a JSON path to select your data.'),
      '#weight' => -10,
    );
  }

  /**
   * Implements views_handler_field#render().
   */
  function render($values) {
    return $values->{$this->field_alias};
  }
}

We are done. Now you can easily retrieve any of the properties in the JSON document stored in the Extend field from the Views UI using a JsonPath selector.

A JsonPath selector is similar to XPath and will allow you to retrieve any piece of data from within the JSON document.

Final Words

This was just a proof of concept, but a promising experiment. With the given sample code you can easily implement a Views filter or sort handler based on any of the JSON-stored fields. The database abstraction layer probably needs some more love to support querying JSON fields in a more user-friendly way, so that users do not need to learn the JsonPath notation.

You can of course directly use the JsonPath notation inside your queries to filter, sort or retrieve information from the database.
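For instance, reusing the .Get() accessor from the Views handler above (a sketch only; the exact SQL syntax depends on the CLR JSON type you installed, and the key name is hypothetical):

// Retrieve a single member of the JSON document directly in SQL.
// '$.field1' is a JsonPath selector into our [extend] column.
$result = db_query("SELECT nid, [extend].Get('$.field1') AS field1 FROM {node}")->fetchAllKeyed();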

What are the benefits of storing data in this way?

Flexibility and speed when storing new fields (reduced time to market and application disruption). If a customer asks you to store some additional information in one of your entities, just drop it into the JSON document. If in the future you need to sort or filter using this field, there is no need to convert it to a real field or property, because you can operate directly on the JSON document. Depending on how the JSON-stored fields are used, you may reach a point where performance dictates promoting them to real database fields.

Improved support for flexible (unforecasted) data needs. Imagine how much the Webform module's database usage could be improved if it stored form submissions in JSON format instead of this mess (which is indeed a smart approach).


Going back to the original comparison of adding a new field to a MEAN stack application vs. a traditional RDBMS-based stack, you can use the technique explained here as a base to easily support new fields by touching only the UI layer.

I think a very interesting contrib module could come out of this experiment (after solving some current design flaws and implementing the missing functionality) that would allow users to attach a JSON document to any entity and have this information exposed to Views. This potentially means no more hook_updates to alter the database schema for small/medium projects. And this module has the potential to be cross-database portable (PostgreSQL and MS SQL will work for sure; I don't know about MySQL).

Knowing that donations simply don't work, I wish there were a more robust commercial ecosystem for Drupal contrib modules that would motivate developers to get these sorts of ideas off the ground and into the contrib scene faster and with better support. This would also help Drupal gain better site-builder-oriented quality modules like WordPress has (fostering Drupal usage for quick site builds) and reduce the number of abandoned or half-baked projects.

Mar 27 2015
Mar 27

The Drupalize.Me team typically gets together each quarter to go over how we did with our goals and to plan out what we want to accomplish and prioritize in the upcoming quarter. These goals range from site upgrades to our next content sprints. A few weeks ago we all flew into Atlanta and did just that. We feel it is important to communicate to our members and the Drupal community at-large, what we've been doing in the world of Drupal training and what our plans are for the near future. What better way to do this than our own podcast. Kyle Hofmeyer is joined by Joe Shindelar, Amber Matz, Blake Hall, and Will Hetherington to talk about our Q1 successes to our Drupal 8 curriculum plans. Take a listen, celebrate with us, and hear about what we are working on next.

Mar 27 2015
Mar 27

If you are new to Drupal, take a look at our previous blog post, New To Drupal? These Videos Will Help You Get Started. If you just got started in Drupal, how about we provide you with these short but thorough tutorial videos on Working with Content?

Introduction to nodes tutorial

In this tutorial, we take a 20,000-foot look at what nodes are and how they get embedded into a site. To get you creating content as quickly as possible, we're only going to look at the basics, and leave the more complex elements to later videos. This foundation is vital for orienting yourself for doing common adding and editing tasks.

[embedded content]

Adding and editing content in Drupal 7

Drupal makes it easy to add and edit content on your website. In this tutorial, we'll cover the fundamentals of how to add basic pages and articles, and how to go back later and edit them. We'll conclude by reviewing Drupal's built-in tools for finding content, which comes in pretty handy, particularly in larger websites.

[embedded content]

 

Drupal 7 node displays tutorial

Nodes on a Drupal site can be displayed in several different display modes, each useful in a different context. For each mode, we can configure what content gets displayed and how it is formatted. In this tutorial we look at the different node display modes and how to manage how they are displayed.

[embedded content]

Drupal Node publishing controls tutorial

Drupal can handle some pretty advanced publishing scenarios. The core installation provides us with a handful of settings that enable us to control many aspects of how nodes are published. In this video we dig deeper into node publishing and the revision options.

[embedded content]

We'll explore advanced content work flows in a later blog post, but for now, what do you think about our videos? Helpful? Have something to add? Leave them in the comments below!

Mar 27 2015
Mar 27

If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work to connect your Recurly account and your Drupal site.

  1. The Recurly PHP library.
  2. The recurly.js library (optional, but recommended).
  3. The Recurly module for Drupal.

The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!

Okay, now that I’ve gotten that pitch out of the way, let’s get started.

I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)

  1. The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
  2. Recurly is dependent on the Libraries module, so make sure you’ve got that installed (7.x-2.x version). drush dl libraries && drush en libraries
  3. You’ll need the Recurly Client PHP library, which goes into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using composer, you can set this as a dependency. You will probably have to create the libraries directory first. From the root of your installation, run mkdir sites/all/libraries.
  4. You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
  5. If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
    Your /libraries/ directory should look something like this now:

Which integration option is best for my site?

There are three different ways to use Recurly with Drupal.

You can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.

Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.

Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.

Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings

You should now have everything in the right place. Let’s get set up.

  1. Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
  2. You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
  3. Your Private API Key is the first key found on this page (I’ve blocked mine out):
  4. Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
  5. Click to Enable Transparent Post and Recurly.js v2 API.
  6. Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
  7. The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
  8. With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.

Getting Ready for Customers

So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the submodule if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin setting page.

I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.

Clicking Sign up will bring you to the signup page provided by Recurly.js.

After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.

Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.

There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.

If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!

Mar 27 2015
Mar 27

Keep Calm and Clear Cache!

This is an often-used phrase in Drupal land. Clearing the cache fixes many issues that can occur in Drupal, usually when a change has been made but isn't being reflected on the site.

But sometimes, clearing cache isn't enough and a registry rebuild is in order.

The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file that a given class or interface is defined in, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.

One such example might be if you move the location of a module. This can happen if you have taken over a site where all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate that out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat subfolders, things stop working and clearing caches doesn't seem to help.

Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, the registry_rebuild folder should be placed into the directory sites/all/drush.

You should then clear the drush cache so drush knows about the new command:

drush cc drush

Then you are ready to rebuild the registry:

drush rr

Registry rebuild is a standard tool we use on all projects now and forms part of our deployment scripts when new code is deployed to an environment.

So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.

Mar 27 2015
Mar 27

I have a standard format for patchnames: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
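For illustration, the API request could be as simple as this PHP sketch (the endpoint shown is drupal.org's public D7 REST API; the comment_count property name is an assumption based on that API's node resource, and the actual dorgpatch script may do this differently):

// Hypothetical sketch: ask drupal.org's REST API about issue 1234.
$response = file_get_contents('https://www.drupal.org/api-d7/node/1234.json');
$issue = json_decode($response);

// The next comment number is the current count plus one (assumed property name).
$comment_number = $issue->comment_count + 1;
$patch_name = "1234-{$comment_number}.myproject.brief-description.patch";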

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (e.g., 8.x-4.x or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Mar 27 2015
Mar 27

We proudly present our first official drupal.org project, which we released about two months ago: the Outdated Browser module :-)

So what is it about?

This module integrates the Outdated Browser library in Drupal. It detects outdated browsers and advises users to upgrade to a new version - in a very pretty looking way. The library ships with various languages. Its look and feel is configurable, and the targeted browsers can be configured by specifying either a CSS property or an Internet Explorer version.

Yet another browser deprecation library?

You may ask why you should need it, since there are already other modules/libraries doing the same thing, like Browser update or jReject, as well as other solutions built on simple conditional comments. I would, so here's our motivation behind the project:

In the past, we've always used a "browse happy" bar on top of the page, simply embedded with conditional comments for IE lower than or equal to version 8. But that was not very pretty looking, and not multilingual either. As mentioned above, there are several initiatives and scripts available, like browser-update.org, which already has a Drupal integration, but most of them are pretty ugly without customizing them.

A few months ago, we found the Outdated Browser library on GitHub, where both the update prompt on the website and the platform itself have a very nice look and feel. It also ships with a broad variety of language files, which makes this script perfectly suitable for any multilingual Drupal project. So we tried it, were absolutely satisfied with it, and decided to use it as our default browser deprecation warning tool. Furthermore, we decided to build a Drupal module for easy integration and share it with the community.

Benefits

Here are the main benefits of using Outdated Browser in comparison to other alternatives:

  • It looks pretty out of the box.
  • The look and feel can be easily customized via configuration.
  • The targeting browsers can be configured either by specifying an Internet Explorer version or a CSS property. So you are not limited to older IE versions.
  • The library ships with various language files (and they are constantly growing). Our module automatically serves the correct language, including fallbacks to the default language and English.

So come on and try it out yourself! Feedback welcome :-) In the meantime, I'm already working on the Drupal 8 version...

Last but not least, many thanks to Bürocratik for developing and sharing this great library!

Mar 27 2015
Mar 27

In Drupal 7 the Field API introduced the concept of swappable field storage. This means that field data can live in any kind of storage, for instance a NoSQL database like MongoDB, provided that a corresponding backend is enabled in the system. This feature allows support of some nice use cases, like remotely-stored entity data or storage backends that perform better in specific scenarios. However it also introduces some problems with entity querying, because a query involving conditions on two fields might end up needing to query two different storage backends, which may become impractical or simply unfeasible.

That's the main reason why in Drupal 8 we switched from field-based storage to entity-based storage, which means that all fields attached to an entity type share the same storage backend. This nicely resolves the querying issue without imposing any practical limitation, because to obtain a truly working system you were basically forced to configure all fields attached to the same entity type to share the same storage engine anyway. The main feature that was dropped in the process was the ability to share a field between different entity types, which was another design choice that introduced quite a few troubles on its own and had no compelling reason to exist.

With this change each entity type has a dedicated storage handler, that for fieldable entity types is responsible for loading, storing, and deleting field data. The storage handler is defined in the handlers section of the entity type definition, through the storage key (surprise!) and can be swapped by modules implementing hook_entity_type_alter().
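For instance, a module swapping the node storage handler could do something like the following sketch (the replacement class name is hypothetical):

/**
 * Implements hook_entity_type_alter().
 */
function mymodule_entity_type_alter(array &$entity_types) {
  // Swap the node storage handler for a hypothetical MongoDB-backed one.
  $entity_types['node']->setStorageClass('Drupal\mymodule\MongoNodeStorage');
}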

Querying Entity Data

Since we now support pluggable storage backends, we need to write storage-agnostic contrib code. This means we cannot assume entities of any type will be stored in a SQL database, hence we need to rely more than ever on the Entity Query API, which is the successor of the Entity Field Query system available in Drupal 7. This API allows you to write complex queries involving relationships between entity types (implemented via entity reference fields) and aggregation, without making any assumption on the underlying storage. Each storage backend requires a corresponding entity query backend, translating the generic query into a storage-specific one. For instance, the default SQL query backend translates entity relationships to JOINs between entity data tables.

Entity identifiers can be obtained via an entity query or any other viable means, but existing entity (field) data should always be obtained from the storage handler via a load operation. Contrib module authors should be aware that retrieving partial entity data via direct DB queries is a deprecated approach and is strongly discouraged. In fact, by doing this you are completely bypassing many layers of the Entity API, including the entity cache system, which is likely to make your code less performant than the recommended approach. Aside from that, your code will break as soon as the storage backend is changed, and may not work as intended with modules correctly exploiting the API. The only legitimate usage of backend-specific queries is when they cannot be expressed through the Entity Query API. However, even in this case, only entity identifiers should be retrieved and used to perform a regular (multiple) load operation.
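In code, the recommended pattern boils down to this minimal sketch: retrieve identifiers via an entity query, then load full entities through the storage handler.

// Use the Entity Query API to obtain entity identifiers only...
$nids = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('type', 'article')
  ->execute();

// ...then obtain the actual field data via a (multiple) load operation,
// which goes through the entity cache system.
$nodes = \Drupal::entityManager()->getStorage('node')->loadMultiple($nids);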

Storage Schema

Probably one of the biggest changes introduced with the Entity Storage API, is that now the storage backend is responsible for managing its own schema, if it uses any. Entity type and field definitions are used to derive the information required to generate the storage schema. For instance the core SQL storage creates (and deletes) all the tables required to store data for the entity types it manages. An entity type can define a storage schema handler via the aptly-named storage_schema key in the handlers section of the entity type definition. However it does not need to define one if it has no use for it.
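For reference, both handlers are declared in the entity type annotation; the class names below are core's SQL defaults and the entity type is made up for illustration:

/**
 * @ContentEntityType(
 *   id = "example",
 *   label = @Translation("Example"),
 *   handlers = {
 *     "storage" = "Drupal\Core\Entity\Sql\SqlContentEntityStorage",
 *     "storage_schema" = "Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema",
 *   },
 * )
 */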

Updates are also supported, and they are managed via the regular DB updates UI, which means that the schema will be adapted when the entity type and field definitions change or are added or removed. The definition update manager also triggers some events for entity type and field definitions, which can be useful to react to the related changes. It is important to note that not all kinds of changes are allowed: if a change implies a data migration, Drupal will refuse to apply it and a migration (or a manual adjustment) will be required to proceed.

This means that if a module requires an additional field on a particular entity type to implement its business logic, it just needs to provide a field definition and apply changes (there is also an API available to do this) and the system will do the rest. The schema will be created, if needed, and field data will be natively loaded and stored. This is definitely a good reason to define every piece of data attached to an entity type as a field. However, if for any reason the system-provided storage were not a good fit, a field definition can specify that it has custom storage, which means the field provider will handle storage on its own. A typical example is computed fields, which may need no storage at all.
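A field opting out of system-provided storage just flags it in its definition; for example, a hypothetical computed field might look like this sketch:

// Computed field sketch: the provider handles storage on its own
// (here none at all), so the SQL backend creates no columns for it.
$fields['full_name'] = BaseFieldDefinition::create('string')
  ->setLabel(t('Full name'))
  ->setComputed(TRUE)
  ->setCustomStorage(TRUE);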

Core SQL Storage

The default storage backend provided by core is obviously SQL-based. It distinguishes between shared field tables and dedicated field tables: the former are used to store data for all the single-value base fields, that is fields attached to every bundle like the node title, while the latter are used to store data for multiple-value base fields and bundle fields, which are attached only to certain bundles. As the name suggests, dedicated tables store data for just one field.

The default storage supports four different shared table layouts depending on whether the entity type is translatable and/or revisionable:

  • Simple entity types use only a single table, the base table, to store all base field data.
    | entity_id | uuid | bundle_name | label | … |
    
  • Translatable entity types use two shared tables: the base table stores entity keys and metadata only, while the data table stores base field data per language.
    | entity_id | uuid | bundle_name | langcode |
    
    | entity_id | bundle_name | langcode | default_langcode | label | … |
    
  • Revisionable entity types also use two shared tables: the base table stores all base field data, while the revision table stores revision data for revisionable base fields and revision metadata.
    | entity_id | revision_id | uuid | bundle_name | label | … |
    
    | entity_id | revision_id | label | revision_timestamp | revision_uid | revision_log | … |
    
  • Translatable and revisionable entity types use four shared tables, combining the types described above: the base table stores entity keys and metadata only, the data table stores base field data per language, the revision table stores basic entity key revisions and revision metadata, and finally the revision data table stores base field revision data per language for revisionable fields.
    | entity_id | revision_id | uuid | bundle_name | langcode |
    
    | entity_id | revision_id | bundle_name | langcode | default_langcode | label | … |
    
    | entity_id | revision_id | langcode | revision_timestamp | revision_uid | revision_log |
    
    | entity_id | revision_id | langcode | default_langcode | label | … |
    

The SQL storage schema handler supports switching between these different table layouts, if the entity type definition changes and no data is stored yet.

Core SQL storage aims to support any table layout, hence modules explicitly targeting a SQL storage backend, like for instance Views, should rely on the Table Mapping API to build their queries. This API allows retrieval of information about where field data is stored and thus is helpful to build queries without hard-coding assumptions about a particular table layout. At least this is the theory; however, core currently does not fully support this use case, as some required changes have not been implemented yet (more on this below). Core SQL implementations currently rely on the specialized DefaultTableMapping class, which assumes one of the four table layouts described above.

A Real Life Example

We will now have a look at a simple module exemplifying a typical use case: we want to display a list of active users having created at least one published node, along with the total number of nodes created by each user and the title of the most recent node. Basically a simple tracker.

User activity tracker

Displaying such data with a single query can be complex and will usually lead to very poor performance, unless the number of users on the site is quite small. A typical solution in these cases is to rely on denormalized data that is calculated and stored in a way that makes it easy to query efficiently. In our case we will add two fields to the User entity type to track the last node and the total number of nodes created by each user:

function active_users_entity_base_field_info(EntityTypeInterface $entity_type) {
 $fields = [];

 if ($entity_type->id() == 'user') {
   $fields['last_created_node'] = BaseFieldDefinition::create('entity_reference')
     ->setLabel('Last created node')
     ->setRevisionable(TRUE)
     ->setSetting('target_type', 'node')
     ->setSetting('handler', 'default');

   $fields['node_count'] = BaseFieldDefinition::create('integer')
     ->setLabel('Number of created nodes')
     ->setRevisionable(TRUE)
     ->setDefaultValue(0);
 }

 return $fields;
}

Note that the fields above are marked as revisionable, so that if the User entity type itself is marked as revisionable, our fields will also be revisioned. The revisionable flag is ignored on non-revisionable entity types.

After enabling the module, the status report will warn us that there are DB updates to be applied. Once complete, we will have two new columns in our user_field_data table ready to store our data. We will now create a new ActiveUsersManager service responsible for encapsulating all our business logic. Let's add an ActiveUsersManager::onNodeCreated() method that will be called from a hook_node_insert implementation:

 public function onNodeCreated(NodeInterface $node) {
   $user = $node->getOwner();
   $user->last_created_node = $node;
   $user->node_count = $this->getNodeCount($user);
   $user->save();
 }

 protected function getNodeCount(UserInterface $user) {
   $result = $this->nodeStorage->getAggregateQuery()
     ->aggregate('nid', 'COUNT')
     ->condition('uid', $user->id())
     ->execute();

   return $result[0]['nid_count'];
 }

As you can see this will track exactly the data we need, using an aggregated entity query to compute the number of created nodes.

Since we need to also act on node deletion (hook_node_delete), we need to add a few more methods:

 public function onNodeDeleted(NodeInterface $node) {
   $user = $node->getOwner();
   if ($user->last_created_node->target_id == $node->id()) {
     $user->last_created_node = $this->getLastCreatedNode($user);
   }
   $user->node_count = $this->getNodeCount($user);
   $user->save();
 }

 protected function getLastCreatedNode(UserInterface $user) {
   $result = $this->nodeStorage->getQuery()
     ->condition('uid', $user->id())
     ->sort('created', 'DESC')
     ->range(0, 1)
     ->execute();

   return reset($result);
 }

In the case where the user's last created node is the one being deleted, we use a regular entity query to retrieve an updated identifier for the user's last created node.
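The glue code wiring these methods to the node hooks is then trivial; a sketch, assuming the manager is registered as a service named active_users.manager (a hypothetical service name):

/**
 * Implements hook_node_insert().
 */
function active_users_node_insert(NodeInterface $node) {
  \Drupal::service('active_users.manager')->onNodeCreated($node);
}

/**
 * Implements hook_node_delete().
 */
function active_users_node_delete(NodeInterface $node) {
  \Drupal::service('active_users.manager')->onNodeDeleted($node);
}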

Nice, but we still need to display our list. To accomplish this we add one last method to our manager service to retrieve the list of active users:

 public function getActiveUsers() {
   $ids = $this->userStorage->getQuery()
     ->condition('status', 1)
     ->condition('node_count', 0, '>')
     ->condition('last_created_node.entity.status', 1)
     ->sort('login', 'DESC')
     ->execute();

   return User::loadMultiple($ids);
 }

As you can see, in the entity query above we effectively expressed a relationship between the User entity and the Node entity, imposing a condition using the entity syntax, that is implemented through a JOIN by the SQL entity query backend.

Finally we can invoke this method in a separate controller class responsible for building the list markup:

 public function view() {
   $rows = [];

   foreach ($this->manager->getActiveUsers() as $user) {
     $rows[]['data'] = [
       String::checkPlain($user->label()),
       intval($user->node_count->value),
       String::checkPlain($user->last_created_node->entity->label()),
     ];
   }

   return [
     '#theme' => 'table',
     '#header' => [$this->t('User'), $this->t('Node count'), $this->t('Last created node')],
     '#rows' => $rows,
   ];
 }

This approach is way more performant when numbers get big, as we are running a very fast query involving only a single JOIN on indexed columns. We could even skip it by adding more denormalized fields to our User entity, but I wanted to outline the power of the entity syntax. A possible further optimization would be collecting all the identifiers of the nodes whose titles are going to be displayed and preload them in a single multiple load operation preceding the loop.
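That optimization would look something like the following sketch; loading the nodes in one go primes the entity cache, so the later ->entity accesses are served from it.

$users = $this->manager->getActiveUsers();

// Collect the identifiers of the referenced nodes first...
$nids = [];
foreach ($users as $user) {
  $nids[] = $user->last_created_node->target_id;
}
// ...and preload them in a single multiple load operation, priming the
// entity cache for the $user->last_created_node->entity accesses.
Node::loadMultiple(array_filter($nids));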

Aside from the performance considerations, you should note that this code is fully portable: as long as the alternative backend complies with the Entity Storage and Query APIs, the result you will get will be the same. Pretty neat, huh?

What's Left?

What I have shown above is working code; you can use it right now in Drupal 8. However, there are still quite a few open issues before we can consider the Entity Storage API polished enough:

  • Switching between table layouts is supported by the API, but storage handlers for core entity types still assume the default table layouts, so they need to be adapted to rely on table mappings before we can actually change translatability or revisionability for their entity types. See https://www.drupal.org/node/2274017 and follow-ups.
  • In the example above we might have needed to add indexes to make our query more performant, for example, if we wanted to sort on the total number of nodes created. This is not supported yet, but of course «there's an issue for that!» See https://www.drupal.org/node/2258347.
  • There are cases when you need to provide an initial value for new fields when entity data already exists. Think for instance of the File entity module, which needs to add a bundle column to the core File entity. Work is also in progress on this: https://www.drupal.org/node/2346019.
  • Last but not least, most of the time we don't want our users to go and run updates after enabling a module, that's bad UX! Instead a friendlier approach would be automatically applying updates under the hood. Guess what? You can join us at https://www.drupal.org/node/2346013.

Your help is welcome :)

So What?

We have seen the recommended ways to store and retrieve entity field data in Drupal 8, along with (just a few of) the advantages of relying on field definitions to write simple, powerful and portable code. Now, Drupal people, go and have fun!

Mar 27 2015
Mar 27

People love maps. People love being able to visually understand how locations relate to each other. And since the advent of Google Maps, people love to pan and zoom, to click and swipe. But what people hate is a shoddy mapping experience. Mapping can be hard, but fortunately, Drupal takes a lot of the pain away.

Why map?

You might want a map if you:

  • have a list of location-based data, e.g. venues, events, objects, people, groups
  • want a slick way of giving directions
  • need to search for locations

What map?

There's not just the ubiquitous Google Maps, but rather a host of mapping providers to choose from. Maps are made up of 'tiles' (the images that are downloaded and which go into making up the map), a coordinate grid upon which to place the tiles, and the actual location data you wish to display. These can come from different providers too, e.g. you might want to rely on Google's grid data, but want to come up with your own custom tile solution via Tile Mill or the like. Really, it's down to personal preference and your particular usecase. We are rather fans of OpenStreetMap, for example, but many of our clients favour Google Maps' look and feel because that's what their users expect in a mapping interface.

Remember that a map can be more than dropping a pin and having done with it. Depending upon the technical solution chosen, you can

  • layer your data and allow users to switch layers on and off, thereby filtering data in a structured manner
  • cluster your data points to display large data sets in a meaningful way
  • draw features on your map, e.g. to outline an area or mark an area as special in some way
  • include popup information windows filled with rich, interactive media
  • have location search (when integrating with a search back-end such as Solr), or even proximity search
  • use custom map tiles, custom icons, a customized look & feel

There are many, many options, solution and Drupal modules available and choices to fit every need.

Map how?

There are a lot of ways to get mapping data onto a Drupal web page, ranging from quick & easy to really complex.

  • Cut & paste embed code from Google Maps
  • Gmap
  • Leaflet
  • Open Layers

The thing is, if all you need is a pin on a map to show the location of your office, then a straight embedded map will do perfectly. However, once you get into dynamic listings and the display of custom data and fancy-dan functionality, then you need to put in a bit of effort. Open Layers is definitely the weapon of choice for complex mapping projects. The Open Layers module provides everything you need to get going building maps through Views (yes, Views!) and with its ecosystem of plugins and add-ons, it is a really powerful tool.

With the what now?

Adding maps to your website is a great way to increase interactivity, hold users' attention, and encourage return visitors to the site. The latter of which is good for increasing your search engine optimisation. Many people think mapping is difficult, but with some practice it becomes a very enjoyable part of the website creating experience. Annertech have a lot of experience creating maps for our clients. Have a look at some examples of mapping projects that we've done to see what is practical and readily available in the real world.

Simple Google Map

On the website for the National Adult Literacy Agency (NALA), we created a map of Ireland which shows where every NALA course in the country is being run. It is quite a simple feature, where users can click on a pin to get some more information about the course. In the screenshot shown, for example, a user has clicked on a map marker to see details about an adult literacy course in Wexford.

Basic mapping in Drupal with Google Maps

Cluster Google Map

A more complex map was created for the Health Information and Quality Authority (HIQA) where we used a “clustered” approach to mapping. This means that if there are a number of pins/map markers placed in very close proximity, instead of everything getting bunched on top of each other we create “clusters”. Each cluster looks like a broadcasting icon with a number. The number shows how many pins are within that cluster/area and clicking on it allows you to then see those pins much more easily. This is a little tricky to explain in words, so we will post two screenshots to illustrate.

1) The small pins are individual markers, the broadcast signals show how many individual pins are within that "cluster"

Clustering with mapping in Drupal

2) When a cluster is clicked on, the individual pins for that cluster are revealed. When an individual pin is clicked on, information relating to that pin is revealed.

Zooming in to a cluster Google map in Drupal

OpenStreetMap and Open Layers in Drupal

When working with the Local Government Management Agency (LGMA), we created a “filter” map. This map allows users to select which items they would like to see by ticking boxes. As many filter groups as you wish can be created in this manner. This screenshot shows the map with all council buildings in Ireland visible. Options are available here to also show where every library and/or fire station in the country is.

You'll also note that it does not have the default Google Maps style that you see on most sites. For this particular site, we used OpenStreetMap and used different coloured pins for the different types of buildings being mapped. This approach can support other map types too if requested including, but not limited to, Bing Maps, MapBox and XYZ map types.

OpenStreetMaps and Open Layers in Drupal for Ireland's Local Government sector

So that's just a short introduction to some of the fun we've had mapping with Drupal recently. In a follow-up blog post we'll show you how we created some of these maps and how you can integrate them with your project.

If you would like Annertech to help you add maps to your website, please feel free to contact us by phone on 01 524 0312, by email at [email protected], or using our contact form.

Mar 27 2015
Mar 27

I’ve just listened to the latest episode of the Modules Unraveled podcast by Bryan Lewis, which talked about The current job market in Drupal. And it made me think about my own journey as a Drupal developer, from zero to reasonably competent (I hope). The thing about this industry is that everything seems to move faster and faster. There’s a shiny new tool or framework released every other day. Developers are creating cool things all the time. And I feel like I’m constantly playing catch-up. But looking back to Day 1, I realised that I did make quite a bit of progress since then.

Learning on the job

I’ve been gainfully employed as a Drupal architect (the job title printed on my name cards) for 542 days as of time of writing. That’s about a year and a half. I’m pretty sure I didn’t qualify for the job when I was hired. My experience up to that point was building a Drupal 6 site a couple years before. I barely had a good grasp of HTML and didn’t even know the difference between CSS and Sass.

When I got the job, I felt that I didn’t deserve my job title. And I definitely did not want to feel that way for long. I spent a lot of time reading newbie web development and Drupal articles early on. There’s a lot you can learn from Google, and trust me, even now, I still google my way out of a lot of challenges on the job. However, I do recognise the fact that I seriously lucked out with this job. Let me explain.

I’m the type of person who learns best if I’m doing something for real. Meaning, as nice as courses from Codeacademy and Dash are for learning the basics of web development, for me, they don’t really stick to my little brain as well as if I was building a real live website. Two weeks into the job, I was handed an assignment to build an entire website. It was like learning to swim by being tossed into the ocean, just the way I like it.

Having a mentor

But of course, googling alone is far from enough. Here’s where the lucked out bit comes in. I didn’t know it at the time, but the fact is, I joined a development team that was very strong in Drupal fundamentals and best practices. Our technical lead was a stickler for detail, and insistent on doing things the Drupal way. Since I didn’t know a class from a method, he didn’t have to re-educate any of my programming habits for Drupal, because I had none to begin with. Sometimes it’s easier to deal with a blank slate.

First thing I learnt was version control, using Git. Now that was a steep learning curve. Oh, the amount of time I spent resolving git conflicts, undoing damage done to git repositories, the list goes on. Though it really feels like learning to ride a bicycle, once you get it, you can’t unlearn it.

I also learnt about the multitude of environments needed for web development very early on, as my technical lead patiently explained to me why we needed to follow the proper git workflow.

All the time.
No, you can’t just make the change on the live server.
Yes, you have to commit even if the change is tiny.

So even though I asked a lot of questions (I probably questioned everything I was told to do) I always got clear, justifiable answers. Which shut me up pretty quickly. In the rare event that I caught something that was arbitrary, my technical lead was gracious enough to let me “win”. My fellow developer was also extremely generous with her knowledge and patience. Let’s just say, with all the support I was getting, it would be odd if I didn’t pick things up quickly.

With such a small team, I got a lot of opportunities to take responsibility for entire projects. I wasn’t micromanaged and was pretty much given free rein to implement things so long as I followed best practices. Right, so that’s the day job bit.

Make stuff for fun

I think I’m somewhat of a control freak. I’ve tried using CSS frameworks on my projects. Didn’t like them. Because I didn’t use a lot of the stuff that came in the whole package. I started off using the Zen theme as my starter, but 3 projects in, I decided to just write my own starter theme. Nothing fancy, it’s practically a blank slate, even the grid isn’t pre-written in. Partly because I wanted to know exactly how everything worked, and partly because I didn’t want anything in my code that I didn’t use.

I gravitated toward front-end development largely because HTML and CSS were very easy for me to pick up and understand. I also liked the fact that I could make things look pretty (yes, I’m shallow like that). Brilliant developers create amazing things almost every day.

Maybe one day I’ll be able to create something mind-blowing as well. For now, I’m content with just writing my own themes, playing around with Jekyll, exploring the Drupal APIs (can’t be a Drupal developer without this) and mucking around on CodePen.

Mingle with people who are better than you

Another thing I did was to attend meet-ups. Even though I felt like a total noob, I showed up anyway (though sometimes I dragged a friend with me). For the more technical meet-ups, it was quite intimidating, and still is even now sometimes. But I realised that as time went by, I understood more of what was happening. People didn’t seem like they were speaking Greek anymore. It’s still cool to get my mind blown by all the creative stuff people present at the meet-ups. They remind me of how much more there is to learn.

I found it really beneficial to attend creative meet-ups as well. The perspectives shared by all those speakers trigger me to examine my own thought processes, because at the end of the day, it’s not just about writing code; it’s about how the artefacts we create are used by others.

Rounding things off

I’d like to hope that as I get better at this, I’ll be able to create more useful stuff. Despite the fact that I grumble and rant about fixing UAT bugs (can’t we just tell them it’s a feature?), inheriting legacy code, hitting deadlines and so on, the truth is, I love what I do. Nobody can predict the future, but I do hope that 5420 days in, I’ll still be doing this and loving it the same.

Mar 26 2015
Mar 26

Drupal is a great choice for your enterprise level web application project, and maybe even your website. Like every other framework under the sun, it’s also a terrible choice if you’re cavalier about the management of its implementation. Things go awry, scope creeps, budgets get drained, and hellfire rains down from the sky… You get it.  The good news is that there are steps you can take to prevent this from happening.

1. Consider Paid Discovery

Software projects go over budget. A lot. It would be disingenuous of me to pretend yours won’t, so be prepared for that.

There’s good news on this front though. You, as the client, have complete control over this. Most software developers, or at least this one, don’t sit around thinking of ways to increase scope and add to cost, though that’s sometimes the image clients get.

The fact is we need good information to give you good estimates. You wouldn’t tell a real estate broker you’re looking for a log cabin in the $20,000 price range, then get mad when they don’t deliver the Taj Mahal. The more transparent you are about your needs early on, the better chance you have of hitting your budget and getting what you need.

The basic truth is: you get what you pay for. Investing in paid discovery will increase the time and effort that is put into the estimate, giving you a much better chance of hitting your cost / value sweet spot.

2. Work Closely with the Development Team, But not too Closely

It’s important to bring at least one member of the development team into the conversation early, even if you are weeks away from any actual development work. This exposes the developer to your thought process and allows him to begin thinking about how to work effectively with you to accomplish your goals for the project. Open communication between the development team and Project Owner should be a regular process.

But, there’s always a but. At the end of the day, the developer’s job is to find technical solutions to your business goals. Communication is an important part of this process, but it can become a hindrance if not handled in a controlled manner. Short daily stand-ups or longer weekly check-ins should be sufficient; any other communications should go through the Project Manager. Constantly peppering the development team with requests for just-in-time reactions is a surefire way to send your budget down the drain and drive your derived value down with it.

3. Be Available

The absolute worst thing you can do, one that will lead to the certain demise of your project, is to neglect to nurture it. Nothing is more frustrating as a developer than a client who does not answer your questions. We’re not trying to waste your money with needless communications, honest. If you’re being asked a question by a developer, it’s probably a blocker. If the developer is blocked long enough, it leads to an assumption. More assumptions mean less ROI.

4. Demos are Dangerous

Demos have an important place in the development world. They give clients a first-hand experience of what’s going on, and that’s great. There’s a flip side of the coin though.

Demoing incomplete work helps neither the client nor the development firm. This is usually because clients have unrealistic expectations about what a “Demo” is for. We are showing baseline functionality here. Not bells and whistles.

To avoid frustrating clients (we do want to make you happy, after all), developers will sink time and energy into temporary solutions created strictly for the demo, then replace them with long-term solutions later.

Ultimately you’d save money by reducing the number of demos and allowing developers to build things correctly in the first place.

5. Be Assertive About Your Goals, Be Flexible About Your Needs

Wait, shouldn’t I be assertive about both my goals and my needs? Well, no, at least not if you want to get the most out of your investment.

Clients understand their business goals. They don’t usually understand their technical needs to accomplish those goals. That’s where we, or whoever you end up hiring, come in.

Unfortunately clients usually lead with what they believe their technical needs are. If you advocate fervently enough, we’ll believe you. That’s where everyone gets into trouble. You’re far better off communicating the business goals you want to accomplish so we can work with you to determine the most cost-effective solution for your budget.

6. Be Afraid, Very Afraid of “Deliverables”

Lawyers love deliverables. They make contracts simple. They also tie you to a specific technical need long before it has been determined whether that technical need will achieve your business goals.

To be clear, web projects should have deliverables. They can be defined during the discovery process (the longer and more thorough discovery is, the better we can define the deliverables; see item 1), but you’re much better off defining the actual deliverable closer to the middle of the project. That is the only way to assess whether the technical solution we’ve agreed upon provides the ROI you expect.

Along those lines, deliverables can also give a false sense of progress. If we meet 100% of the deliverables defined in the contract but fail to meet your business goals, I call that a waste of my time and your money.

The last piece of advice borrows from the Agile methodology. Ultimately the ball is in your court to determine what the best implementation process is for your organization, but remember that you, as the client, have a significant level of control over the amount you pay and the value you get.

Mar 26 2015
Mar 26

Palantir CEO Tiffany Farriss recently keynoted MidCamp here in Chicago, where she spoke about the economics of Drupal contribution. In her talk, she explored some of the challenges of open-source contribution, showed how similar projects like Linux have managed growth and releases, and suggested what the Drupal Association might do to help push things along toward a Drupal 8 release. You can give her presentation a watch here.

With this post, we want to highlight one of the important takeaways from the keynote: the Drupal 8 Accelerate Fund.

Drupal 8 Accelerate

Some of you are clients and use Drupal for your sites, others build those sites for clients around the world, and still others provide technology that enhances Drupal greatly. We also know that Drupal 8 is going to be a game changer for us and our clients for a lot of reasons.

While we use a number of tools and technologies to drive success for our clients, Drupal is in our DNA. In addition to being a premium supporting partner of the Drupal Association, we also count amongst our team members prominent Drupal core and contributed module maintainers, initiative leads, and Drupal Association Board, Advisory Board, and Working Group members.

We've all done our part, but despite years of support and contributions from countless companies and individuals, we need to take a new approach to incentivize contributors to get Drupal 8 done. That’s where the Drupal 8 Accelerate Fund comes in.

Palantir is one of seven anchor donors who are raising funds alongside the Drupal Association to support Drupal 8 development. These efforts relate directly to the Drupal Association's mission of uniting a global open source community to build and promote Drupal, and they will support contributors directly through additional dollars for grants, as they already have.

The fund breaks down like this:

  • The Drupal Association has contributed $62,500
  • The Drupal Association Board has raised another $62,500 from Anchor Donors
  • Now, the Drupal Association’s goal is to raise contributions from the Drupal community. This is the chance for everyone from end users to independents to Drupal shops to show your support for Drupal 8. Every dollar donated by the community has already been matched, doubling your impact. That means the total pool could be as much as $250,000 with your help.

 
Drupal 8 Accelerate is first and foremost about getting Drupal 8 to release. However, it’s also a pilot program for the Drupal Association to obtain and provide financial support for the project. This is a recognition that, as a community, Drupal must find (and fund) a sustainable model for core development.

This is huge on a lot of levels, and those in the community have already seen the benefits with awards for sprints and other specific progress in D8. Now it’s our turn to rally. Give today and spread the word so we can all help move Drupal 8 a little closer to release.


Mar 26 2015
Mar 26

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015. Here are some of the possibilities that would be achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call (a rough sketch of this idea follows the list)
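
On the third point, one way this could be shaped in Drupal 7 terms is to expose the caller's identity and session details to the webphone JavaScript through Drupal's settings mechanism. This is only a sketch under assumptions: the module name, settings keys, and $_SESSION key are hypothetical, and DruCall's actual integration points may differ.

  /**
   * Implements hook_page_alter().
   *
   * Expose the logged-in user's name and a cart identifier to the
   * webphone JavaScript so they can be relayed to the call center.
   */
  function MYMODULE_page_alter(&$page) {
    global $user;
    if ($user->uid) {
      drupal_add_js(array(
        'mymodule' => array(
          'callerName' => $user->name,
          // Hypothetical: a cart ID stashed in the visitor's session.
          'cartId' => isset($_SESSION['cart_id']) ? $_SESSION['cart_id'] : NULL,
        ),
      ), 'setting');
    }
  }

The webphone script could then read Drupal.settings.mymodule.callerName and relay it, along with the cart ID, when the call is placed.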

Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Mar 26 2015
Mar 26

The Kansas City Drupal Users Group has decided to build a new site to document our activities as a group. We have been using Meetup.com to organize, which is good for exposure. However it isn't that great for communication in the group. Being a co-organizer for the group, I registered a domain and pointed it to a "coming soon" page on one of my Linodes. I then spoke to Pantheon about hosting. They graciously offered to host the site as a Community Project. Pantheon even allowed us to experiment with their Multidev environment. More on Multidev later.

I have been working independently for the last nine years and haven't had to work in a team since the days of Dreamweaver MX! After searching for guidance on how we might attack the project, and getting little feedback, we decided to just dive in! There were a couple of bumps early on and a quick reboot, but I think we are now on track. So I felt it was important to document our process, and hopefully even get some feedback on how we might improve our approach!

First Attempt

Unfortunately, the night we decided to start the sprint, our primary organizer, Karl, was out of town. Karl has plenty of experience leading teams with our sponsor, VML. The rest of us, not so much. After some discussion on what purpose the site would serve and a little head scratching, we dove in. A Trello board was set up to handle issues and discussion. A Pantheon dev site was created, which provided us with a Git repo to work from. To access the repo on Pantheon, each user needs access to the Pantheon backend for the site. While I trust everyone in the group, this did mean that anyone could move items through to live or even delete the site. Luckily we didn't have any issues with everyone being able to clone the site and get a local development copy running. We also got some issues posted for discussion on Trello about functionality, themes, content types and roles. After a little playing we wrapped things up. While we didn't achieve much of substance on the actual site, it was a productive start as a group.

Back on Track

At our next meetup, Karl gave us a demonstration of how a forking workflow and GitHub work together. He also explained how upstream repos would allow individuals to work on GitHub while letting code flow over to Pantheon for testing and deployment. Shortly thereafter we were up to speed on how we could use forking, branching and upstream remotes to work together. I am sure this is all quite familiar to many of you reading this. Many in our group were like myself, working independently on sites. I use Git on a regular basis with my projects on Pantheon, but don't have to share the repo with others.
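
For those who, like me, hadn't used this workflow before, the day-to-day commands look roughly like the sketch below. This is a generic forking workflow, not our exact setup; the repository names and remotes are illustrative.

  # Clone your fork and add the shared repo as an upstream remote.
  git clone git@github.com:YOURNAME/kcdug-site.git
  cd kcdug-site
  git remote add upstream https://github.com/kc-drupal/kcdug-site.git

  # Do your work on a topic branch and push it to your fork.
  git checkout -b add-event-content-type
  git commit -am "Add event content type"
  git push origin add-event-content-type

  # Open a pull request on GitHub. Once it's merged, a maintainer
  # pulls the changes and pushes them on to the Pantheon repo.
  git fetch upstream
  git checkout master
  git merge upstream/master
  git push pantheon master

The key idea is that everyone pushes to their own fork, and only a maintainer pushes the blessed history on to Pantheon.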

So now we seem to be ready for action. The Trello board is tracking issues, Github has a public repo anyone can pull from, and Pantheon has the production repo.

In coming posts I will explain the actual steps used to get code from a member into the Pantheon repo. I would say the live site, but we aren't that far along just yet. I will also provide updates on how the process and the site are coming together, including any more problems we encounter.

Mar 26 2015
Mar 26

Looking back on 2014, it was a great year of events and conversations with people in and around Acquia, open source, government, and business. I think I could happily repost at least 75% of the podcasts I published in 2014 as "greatest hits," but then we'd never get on to all the cool stuff I have been up to so far in 2015!

Nonetheless, here's one of my favorite recordings from 2014: a terrific session that will help you wrap your head around developing for Drupal 8, and a great conversation with Fredric Mitchell that covered the use of Drupal and open source in government, government versus corporate decision-making, designing Drupal 7 sites with Drupal 8 in mind, designing sites for their end users, where the maximum business value comes from in your organization, and more!

---Original post from October, 2014---

Drupal in Government

"We were part of the original Whitehouse.gov [Drupal] build, and the We the People petition system that they use for the democratization of ideas. I feel it opened up this conversation about what open source was, the security of open source, and what it really meant in terms of democratic principles. Of course, in the land of politics, perception is important."

"I was the lead developer at one point on the Energy.gov project; that was one of the first Drupal 7 sites. That checked all of the political checkboxes:

  1. it was going to save money
  2. it brought various offices under the Department of Energy under the same branding
  3. they didn't have the recurring licenses
  4. and because it is open source, because we can manipulate it, we can custom tailor those content authoring experiences and those tools to the needs of the various offices while still having that kind of super administrator and allowing that person to control what needed to be controlled."

"Being able to enable government officials to get their message ... It's a public service ... We're continuing to work with the Senate, with the House of Representatives; it's been absolutely great because everyone understands at this point in 2014, that open, transparency, open source, democratization – they all have a single, underlying thread."

Presenter Dossier: Fredric Mitchell

  • Senior Engineer, Phase 2
  • Drupal.org profile: fmitchell
  • Website: http://brightplum.com/
  • Twitter: fredricmitchell
  • 1st Drupal memory: Meeting Drupal while working with Larry Garfield at Palantir ... "My first Drupal memory was trying to absorb all of Larry's eagerness and enthusiasm about Drupal 5."

Session Description

Now that you know how to build sites, it's time to take the next step and jump into the Drupal 8 API. This session reviews the 30 API functions that you should know to begin your journey.

This is an updated version of my popular talk at Drupalcamp Chicago and Drupalcamp Costa Rica that now covers Drupal 8!

We'll walk through common API examples, some derived from the excellent Examples module and others that carry over from Drupal 7.

Attendees will learn the behind-the-scenes functions that power common UI elements, with the idea of being able to build or customize them for your projects; a short sketch follows the list below. Some of these include:

  • drupal_render()
  • entity_load_multiple() (node_load_multiple in D7)
  • entity_view_multiple()
  • menu_get_tree()
  • taxonomy_get_tree()
  • Field::fieldInfo()->getField() (field_info_field in D7)
  • QueryBase class (EntityFieldQuery in D7)
  • Request->attributes->get('entity') (menu_get_object in D7)
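
To ground a few of these, here's a minimal sketch of querying, loading, and rendering nodes with the procedural wrappers available in Drupal 8 at the time of writing; D8 is still in beta, so names may shift before release.

  // Find the five most recent published articles.
  $nids = \Drupal::entityQuery('node')
    ->condition('status', 1)
    ->condition('type', 'article')
    ->sort('created', 'DESC')
    ->range(0, 5)
    ->execute();

  // Load them and build teasers (node_load_multiple() /
  // node_view_multiple() in D7).
  $nodes = entity_load_multiple('node', $nids);
  $build = entity_view_multiple($nodes, 'teaser');

  // Turn the render array into markup.
  $html = drupal_render($build);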

Session Video

[embedded content]

Session slides

You can grab a copy of the slides Fredric updated for DrupalCon Austin from his GitHub repository.

Interview video

[embedded content]
