Mar 23 2020

With Drupal 9 coming this summer, my team at the Eberly College of Science at Penn State felt it necessary to get all of the custom modules we have written (along with any mission-critical contrib modules) updated and as future-ready as possible. A happy side effect of that process is that we've taken the time to modernize the code in the Drupal 8 version of Cosign. After joining the Cosign project, I'm happy to announce that we have released an official alpha version of the Drupal 8 module. 

There is still room for work to be done, primarily documentation and unit tests, but the functionality is there. Cosign is now correctly using dependency injection and has had all (Drupal 9) deprecated code replaced. We're also doing our best to follow Drupal coding standards across the board. 

If you are using Cosign and have some time to help with testing, writing tests, or documentation, please let me know! 


Jul 09 2019

This was a 90 minute session from DrupalCon Seattle 2019. The room was not recorded :( BUT, we recorded locally from Mike’s laptop! Enjoy! Our slides are also attached in the links below. The room was overflowing and we got great feedback on it, so I hope you enjoy it too.

Seems that it was pretty well received given this tweet of me floating around jumping up and down :)

Amazing and entertaining session about web components @btopro :thumb: pic.

– Abdelrahman Ibrahim (@IAboEyad) April 11, 2019
Apr 05 2019

It’s been about a year since Nikki and I HAX‘ed DrupalCon Nashville. Now, Mike and I are about to embark to DrupalCon Seattle to once again HAX all the things. HAXeditor has come a really long way since the last reveal. In the time since last year:

Web component opportunities at DrupalCon to connect with us and others

If you’d like to see what we’ve been up to and what we’re doing in the web components space (where we spend 99% of our day now), watch this video below that shows the theme layer in HAXcms and how it separates state from design.

May 24 2018

Photo by Patrick Fore on Unsplash

Nikki and I came, we saw, we talked quickly, and #haxtheweb has started to make sense to some people. But not enough, and that’s where you come in. There are some amazing parallels in two talks from DrupalCon: ours on HAX the web, and another by Pantheon about the WordPress 5.0 changes / Gutenberg. If you didn’t get a chance to see them, I recommend watching the WordPress one and then our HAX one as almost a “What if Drupal had its own Gutenberg project,” which the speakers talk about in the first one.

What’s possible with WordPress 5.0

Web components, Polymer and HAX

The unspoken crisis

There’s a great truth spoken towards the end of the What’s possible with WordPress 5.0 talk: how does one sell Drupal if WordPress has this amazing authoring experience? It’s exactly the prompting we gave at DrupalCon Baltimore in 2017: save Drupal through a superior authoring experience. If WordPress beats us to the ultimate, usable experience in content production (or hell, even just a half-decent one), how is your agency able to sell Drupal? It can’t.

I’m really glad that this conversation happened (I had a conflict or I would have been there) and that people were willing to be honest and have an open discussion about how we survive in the future if they do land this thing right.

State of the WP in bullet points

If you haven’t been following WordPress, there are effectively two communities in one right now when it comes to mindset about WP’s future direction (it’s very well outlined in the WP 5.0 post).

The biggest criticisms of Gutenberg are (at a high level):

  • You’re making a major change with no real input
  • I just want my shortcodes and other conventions I’ve always had
  • I don’t want to learn javascript just to work
  • Please don’t do this

These concerns are situated against reality of the web:

  • If PHP based projects are to survive they’ve gotta go all-in on a JS framework (most likely)
  • jQuery isn’t going to cut it
  • Shortcodes aren’t going to cut it
  • Javascript is where everyone is at and building amazing experiences on the front-end
  • A huge group of people saying “Please do this, it’s overdue”

Another way out

There’s always another way out, and it usually comes in ways we least expect. How can we possibly compete with the AX patterns expressed in Gutenberg? We can’t, right? It’s incredible. Even in its unfinished state, it’s in React and people love React. How could we possibly top what they are working on? Here’s where the WP approach is vulnerable:

  • Once you adopt it, you will never be able to get out of it. Its markup is all made-up tags and comments, and it is DEEPLY tied to WordPress.
  • To contribute, get ready to ramp up on React and the entire tool chain that comes with it
  • There is enough strife in the community (even from a distance I can see it) to give people reason to migrate away from WordPress. Great change (as we saw with D8) can cause great advancement, but when people review options, they really review all options
  • The body field is king and their world will devolve into body field only, even more as they seemingly position themselves to get out of the PHP business *cough*

Ok, so how can we turn these against them? My team reviewed what they are doing at a high level and asked how we can avoid these mistakes. Don’t get me wrong, I love Drupal; but I’m going to make the best decisions for my team and our projects when it comes to selecting it. So here are the intentional decisions we’re making with HAX the Web in order to beat them at their own game:

  • HAX is built on web components, and while we use Polymer to build ours, we’ve integrated “vanilla” JS elements. Think of Polymer like jQuery or Underscore: a helper library, not an entire reframing of the way you build
  • We also write to the body field, but we’re not using any Drupal-specific conventions, which means our content can be written anywhere and work anywhere provided the tag definitions are available.
  • We currently have integrations with Drupal 6, Drupal 7, BackdropCMS and GravCMS, meaning that you can unify the AX patterns between Drupal and anything
  • Our solution’s integration methodology is trivial, and writing new components to work with HAX is simple. Extend the web visually, then fire an event informing HAX (if it exists) about how to modify the element. This means our elements will work on any web site, and if HAX happens to be there, they’ll work with HAX too.
  • Our developer experience is second to none and incredibly simple. We have students with a few weeks of experience and junior developers making meaningful contributions to our design assets and HAX in weeks, not months. We have people who have never coded before making more impressive elements than many of the themes I’ve seen in Drupal for years.
  • Drupal will learn the design, Drupal will not be in charge of the design.
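To make that integration point from the list above concrete, here’s a rough sketch of the “element announces itself to HAX” pattern in plain JavaScript. The event name and property names are my illustrative stand-ins, not necessarily the exact HAX API:

```javascript
// Sketch of the pattern: an element describes itself, then fires an
// event that HAX listens for only if HAX happens to be on the page.
// Event name and payload shape are illustrative assumptions.
function buildHaxPayload(tagName) {
  return {
    tag: tagName,
    properties: {
      gizmo: { title: 'Example card', icon: 'image:photo' }, // hypothetical metadata
      settings: {
        quick: [{ property: 'title', title: 'Title', inputMethod: 'textfield' }],
      },
    },
  };
}

// In a browser this would be a CustomEvent dispatch; the guard lets
// the sketch also run outside the DOM.
function announceToHax(tagName, dispatch) {
  const payload = buildHaxPayload(tagName);
  if (dispatch) {
    dispatch('hax-register-properties', payload);
  }
  return payload;
}

const fired = [];
const payload = announceToHax('example-card', (name, detail) => fired.push(name));
console.log(fired[0], payload.tag);
```

The key design point is the guard: the element works on any site, and registration with HAX is purely opportunistic.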

An authoring experience where Drupal just happens to have really great integration, but that anyone can plug into and extend. Unite the tribes, join us. We’re already having discussions with Joomla, LMS vendors and others we never would have reached previously, because we’ve completely decoupled design from Drupal and then taught Drupal about our design.

Please. Join us in these efforts. Challenge us on them. See what it is and how we’re trying to accomplish it. Because if we don’t, if we keep applying the “Drupal-isms on Drupal-isms” methodology to layout and panel-esque development, we will lose long term. The door hasn’t slammed yet and no, we’re not the only path to truth; but Drupal has a real opportunity right now to start down a truly innovative path and say that we are here for one purpose: to build the authoring experience for the web.

Jan 04 2018

Over on the ELMS:LN team, we’ve been busy every place not Drupal (huh what?) lately. We have barely been touching Drupal and couldn’t be happier (…huh?). In fact, we’re busy these days encouraging people to not focus on Drupal, at all (so this is on planet then why…).

Drupal is fantastic at what it’s great at. It’s great at being a CMF; the crazy abstraction of a CMS that lets us build anything. It’s great for handling complex workflows and raw webservice / integration capabilities (there’s a but coming…).


Drupal is terrible from a theming and new user on-boarding perspective. In these areas it’s incredibly clunky and while Drupal 8 has made great strides, it’s still not going to save us all from the endless array of lightweight, fun to develop for, NodeJS and SailsJS backends seeking to give Drupal death by a thousand microservices.

What I’ve found from hunting around other CMSs, though, is that this isn’t a Drupal problem; it’s a workflow problem that plagues the larger web. GravCMS is reasonable to theme for, but it still sucks compared to just making something look great on the front-end. WordPress is the same (security methodology, or lack thereof, aside). This isn’t a uniquely Drupal problem, and it’s not meant to demean people like @MortenDK and the other core contributors who have done a great job getting Drupal to where it is in these areas.

So Headless then?

Headless isn’t a rallying cry against the great work they’ve done; it’s a response to the fact that no matter how hard we try, we’re putting a round peg into a square hole. Headless allows our front-end teams to focus on the front end and interface with the systems team conversationally, as in “I need a JSON array that looks like {} here,” and get it. It keeps needless markup out of the payload (and I know div-itis has been heavily killed in core). Headless isn’t everything, though; there’s still a HUGE advantage to hybrid methodologies where Drupal is delivering front-end style assets in a well-structured and very intentional way. This helps get around the issue of rebuilding Drupal on the front-end when it comes to forms, admin areas, user management, and other sticky areas. If we can at least make these things look great, then who cares if they come from the front-end or back-end so long as they leverage the same templates / rendered elements.
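A tiny sketch of that conversational contract, with made-up field names: the front-end only knows the agreed JSON shape, never the CMS behind it.

```javascript
// The "I need a JSON array that looks like {} here" contract in
// miniature. Field names are invented for illustration; any backend
// (Drupal or otherwise) that returns this shape will work unchanged.
const agreedShape = [
  { title: 'Hello world', created: '2018-01-04', author: 'btopro' },
];

// Purely presentational: swap Drupal for any other backend that
// honors the contract and this renderer never changes.
function renderTeasers(items) {
  return items.map((item) => `<article>${item.title} by ${item.author}</article>`);
}

console.log(renderTeasers(agreedShape));
```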

HAX and the way forward

HAX is short for Headless Authoring eXperience. HAX means that our authoring experience is completely disconnected from Drupal or any other CMS, and instead provides a simple way of integrating a uniform UX pattern into a backend. HAX is an idea that other members of the Penn State community and I have kicked around for years and built prototypes against, and now, in the last 2 months, we finally have something that looks and functions in the direction we’ve been seeking. I think this is another piece of the adoption puzzle in Drupal 8 if we want to stop seeing sites like the White House flip from Drupal to WordPress, and instead start seeing a mass exodus from WordPress and others to the most powerful platform.

In order to do this, it can’t just be a powerful platform. It has to be the most approachable one, able to quickly produce high-fidelity experiences for content authors. It has to be drop-dead simple to set up and gorgeous to use (yeah, super easy,… I know…). The authoring experience is what we’re working on: trying to make something highly pluggable on the front-end to match the Lego-esque pattern that Drupal has on the backend. We’re teaching Drupal to understand web components so that it can easily integrate with systems like HAX (and we’ve done so).

So in other words, I think the best way to increase Drupal 8 adoption is by making Drupal the most enjoyable platform for front-end development, bar none. I think one of the most important ways to do this is to get people off their addiction to tpl-file theming and into web component technology natively. Leaving the island and landing on Polymer has been the single biggest enhancement to the platforms we’re building in the history of the project. Teaching ELMS:LN (and Drupal by proxy) to understand web components has been completely game-changing for our developer experience, as well as the student experience in the kinds of solutions we can build.

I’m so sure that this is the solution that I’m now spending my free time integrating other platforms with web components so that they can be integrated with HAX. While others were celebrating, I stayed up into the wee hours writing web component and HAX modules for Backdrop, GravCMS and Drupal 6 (yes, Drupal 6). Included with this post are videos showing what HAX is and how to get it and web components integrated into all of these platforms.

Drupal 8

The Drupal 8 ports of both are in the queue, and I’m happy to help anyone take them on as a co-maintainer since I can only stretch so far. But I think web components are the answer to increasing Drupal 8 adoption long term. By getting off the addiction to TPL files, we can start progressively collapsing the module requirements of our existing Drupal 6 and 7 sites. When we reduce the requirements on the module side, we decrease the complexity of our sites and open up pathways to migration that are currently unattainable. People aren’t on Drupal 7 still because it’s more fun; they are there because of the realities of life :).

So if we enhance these platforms and get people developing in webcomponents to improve workflow, we’re getting them out of their Drupal 6/7 shell and into something that’s non-Drupal and WAAaaaaayyyyy more fun to work with as far as front-end is concerned. At the same time, we’re decreasing the complexity of their Drupal sites on the backend via these hybrid and headless sites which then can more easily transition to Drupal 8. Drupal 8’s “problem” then becomes understanding webcomponents and how to integrate with them and our front-end people won’t have to change their workflow. We’ll create a pipeline for awesome development experiences on the front-end via webcomponents and awesome content authoring experience via HAX (which is just leveraging webcomponents).

Jun 02 2017

As Twitter, other social networks, conference submissions and Drupal Planet posts would suggest, I’m a bit over the moon about Polymer for web components. I’ve avoided front-end development workflows for years because they made no sense to me. Lots of custom baling wire that would make a slick (but completely un-reusable) one-page app. Nay, said I! I will never do front-end development!

Then Polymer happened

Michael Potter (@hey__mp) finally wore me down that we should be looking into component architecture. I saw Pattern Lab and went “neat, that makes sense,” and then realized it was more than just a design paradigm and went “ugh, front end, this makes no sense.” He suggested we look at Polymer because “the next YouTube will be done in it.” So I did. And now, 4 months later, here I am writing about it non-stop. It completely changed the way I approach development with Drupal, and I think it is a key to sustaining and growing the Drupal community.

Drupal’s ability to be a headless, hybrid and template-driven system all at the same time makes it incredibly attractive for progressively decoupling and eventually moving towards entirely web-component-driven development practices. I’ve identified what I feel are five phases of web component integration:

  1. Design elements in templates
  2. Template-less elements
  3. “Smart” / one-page apps (where we are today)
  4. Headless, multi-page apps
  5. Web components driving information architecture

Phase 1, as I’ve covered before, is to just start making some simple design elements and wiring them into tpl / twig files. This reduces the complexity of Drupal theming in general and increases accessibility and reusability of elements, reducing development time down the road. It also allows you to start “theming” across platforms at a design layer, since you’ve unplugged design from Drupal. This still requires all traditional roles in your web shop in order to deliver a site.

Phase 2 is to start using the webcomponents module we wrote to start replacing template files entirely through views, display modes, and display suite style theming. This reduces the need for front end developers to understand or care about Drupal, which increases our potential target audience for the platform. We can start to hire pure front-end developers and then have pure site builders integrate those design elements with little effort, leaving the big lifts for developers to integrate.

Phase 3 is disconnected, one-page app development. These apps are prototyped by themselves and then wired up to Drupal after the fact. This allows for front end developers to replace the need for site builders by envisioning their development as single “screens”. It boils Developer integrations down to a single auto-loaded file (which the 3rd video below demonstrates). This is where we are currently in the phases of progressively decoupling via Web components.

Phase 4 is effectively a routing system for stitching together the one-page apps. Because one-page apps are just single element tags, it’s easy to stack those together into increasingly complex systems which, on click, autoload the next page without refreshing. This allows for smoother interactions and reduced server transactions (especially if you’ve worked in service workers, which I still can’t figure out, but it’s gotta be something stupid I’m missing). It also drastically improves the feel of your application / web site. This starts making site building about information architecture only, as more and more of the UIs are driven by single-tag / click-and-tweak route methods.
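One way to picture Phase 4, as a hedged sketch with hypothetical tag names: routing becomes little more than a map from paths to element tags, and navigation is just swapping which tag gets rendered.

```javascript
// Sketch of "routing as stitching one-page apps together": each
// route resolves to a single custom element tag. Tag names here are
// hypothetical, not from any real project.
const routes = {
  '/': 'site-homepage',
  '/courses': 'course-listing',
  '/courses/stuff-100': 'course-outline',
};

function resolveRoute(path) {
  // Fall back to a not-found element rather than a full page refresh.
  return routes[path] || 'page-not-found';
}

console.log(resolveRoute('/courses'));
```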

Phase 5 is a bit out there (but not too far): things like the Polymer CLI’s analyze function could actually write to an information-architecture.json style file, which could be ingested to start building out entities and bundles to match the potential of the front end. Effectively, design drives the entire process. The one-page app doesn’t care if a month from now you swap out Drupal for something else, though Drupal’s integrations and automation will make it more attractive to use than the competition.

Showing the Web component development process via Polymer

In our own development with the platform and the webcomponents module, we’ve quickly moved from Phase 1 (February) to demonstrating the capabilities of Phase 2 (March - April at DrupalCon) and now to auto-loading smart, one-page apps in Phase 3 (as of last week). Because of this, I feel that I’ve got to start unpacking some of the magic at play here and show you what our build process for new apps and elements is. I’ve put together four videos which go through different phases of planning and creating a new one-page app which simply displays comments from Drupal, and end with a block-based example for a simple drag-and-drop upload widget.

Video 1 starts from right after installing polymer cli (which I show you where to get). I discuss how to go about getting elements, dig into some elements to show their capabilities and start to make a very vanilla, hard coded element to display a card with buttons.

Video 2 starts to discuss properties of custom elements and how to do two-way and one-way data binding between the properties of the element and other elements, allowing you to make some slightly smarter elements. I then abstract the work of the first video and then reference it in a new element, showing the process of making stacking elements. Then I cover the (amazing) iron-ajax tag to wire an element up to a json data source and Polymer’s “template stamper” which can do conditional statements and for-each loops in your element.

Video 3 shows how we then get this disconnected app registering in Drupal and being fed data to display from Drupal using the new auto-load capability of the webcomponents module. I modify the manifest.json file, place the app in the correct part of the directory structure, and then have a one-page app that’s in the menu system, acts like the rest of Drupal, and securely gets data from Drupal, but has had almost zero “Drupal”-style code written.

Video 4 quickly shows another one-page app which uses the manifest.json ‘block’ function. This allows you to make a one-page app but then present it in Drupal as a block element to place wherever you want. The block element in question is an incredibly slick vaadin-upload element which allows for drag-and-drop uploads to the Drupal file entity system.

May 24 2017

On the ELMS:LN team, we’ve been working a lot with polymer and webcomponent based development this year. It’s our new workflow for all front-end development and we want Drupal to be the best platform for this type of development. At first, we made little elements and they were good. We stacked them together, and started integrating them into our user interfaces and polyfills made life happy.

But then we started doing data integrations. We wanted more than just static, pretty elements; we wanted data-driven applications that are (basically) headless, or hybrid solutions that embed components into Drupal to streamline and decouple their development without being fully headless. I started writing a few modules which seemed to have a TON of code that was the same between them. So, a few refactors later and a lot of whiteboarding, and now we’ve got the ability to autoload Polymer-based one-page apps by just enabling modules and dropping apps in discoverable directories!

This approach keeps the design team as disconnected as possible from Drupal while still being able to interface with it in a modular fashion. Think of being able to roll out a new dashboard module with all of its display contained in a single element; now you’ve got one-page-app development workflows. Your design / front-end team doesn’t need to know Drupal (let’s say this again in bigger text):


No, not by knowing twig, but by learning a technology closer to the browser (that works anywhere) in Web Components!

Check out the file for more details about this workflow as text or watch this video!

Apr 05 2017

*Text from youtube description of below video

I love this community. If you do too, throw up a video introducing yourself to the Drupal community. Maybe say why you are involved and why you care so much about it. At the end of my video I challenge others to donate to causes that make the world a better place, open source or otherwise. The organization I am donating to today is

Feb 21 2017

I’ve dreamed of a day when systems start to work like the home automation and listening (NSA Spying…) devices that people are inviting into their home. “Robots” that listen to trigger words and act on commands are very exciting. What’s most interesting to me in trying to build such systems is,… they really aren’t that hard any more. Why?

Well, the semantic web is what’s delivering the things for Siri, Google and Alexa to say on the other end. When you ask about something and it checks Wikipedia, THAT IS AMAZING… but not really that difficult. The accuracy of human voice recognition is being continuously mapped and improved daily as a result of people using things like Google Voice for years (where you basically give them your voice as data in order to improve their speech engines).

So I said, well, I’d like to play with these things. I’ve written about VoiceCommander in the past, but it was mostly proof of concept. Today I’d like to announce the release of VoiceCommander 2.0 with built-in support for “Ok Google”-style Wikipedia voice querying!

To do this, you’ll need a few things:

Enable the voicecommander_whatis module, tweak the VoiceCommander settings to your liking, and then you’ll be able to build things like in this demo. The first video is a quick one minute of a voice-based navigational system (this is how we do it in ELMSLN). The second is me talking through what’s involved and what’s actually happening, as well as A/B comparing different library configuration settings and how they relate to accuracy downstream.
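As a rough illustration of the idea (not the module’s actual code), turning a “what is ___” phrase into a Wikipedia REST summary lookup can be as small as:

```javascript
// Sketch of the "Ok Google"-style lookup: strip the trigger phrase,
// then build a Wikipedia REST API summary URL for the remaining topic.
// This mirrors the concept, not voicecommander_whatis internals.
function whatIsQuery(phrase) {
  const topic = phrase.replace(/^what is /i, '').trim();
  const title = encodeURIComponent(topic.replace(/\s+/g, '_'));
  return `https://en.wikipedia.org/api/rest_v1/page/summary/${title}`;
}

console.log(whatIsQuery('what is Drupal'));
```

From there it’s a fetch of that URL and handing the returned extract to a speech synthesis voice.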

Oct 27 2016

Last week I gave a small presentation on how I used QuickTabs module to organize some content on a Drupal site I'm working on; I've included this follow up video in case anyone missed the demonstration and/or wanted to see it again.

Prerequisite Modules:
QuickTabs (
Display Suite (

Block Visibility Settings (
Patch #40 applies cleanly.


Oct 14 2016

A few months ago I did a post showing ELMS:LN’s initial work on the A11y module which provides blocks for improving as well as simulating accessibility conditions. After lots of testing I’m marking a full release of the project we’ve been using in ELMS:LN for months (as dev).

This latest release includes many bug fixes and most notable new features of:

  • Simulating Field loss conditions including central field and peripheral field loss, applied via CSS/JS in real time
  • Colorblindness for many different forms of colorblindness, applied via SVG filters in real time
Aug 23 2016

If you are using the Special Menu Items module to add things like HRs and unlinked titles to your menus, you might run into the Breadcrumb issue.

If you have a non-linked menu item, it may show up in your breadcrumbs as a plain text title, which you may not want to see. There are a number of issues for this in the module's queue, but as far as I can tell it has not been fixed.

My quick fix is a function in template.php:

function ems_college_process_page(&$variables) {
    // Add support for the breadcrumb: <nolink> items would otherwise
    // remain in the trail, leading to 404s.
    $item = menu_get_item();
    $trail = menu_get_active_trail();
    foreach ($trail as $key => $value) {
        if ($trail[$key]['href'] == '<nolink>') {
            drupal_alter('menu_breadcrumb', $trail, $item);
            $variables['breadcrumb'] = theme('breadcrumb', array('breadcrumb' => menu_get_active_breadcrumb()));
        }
    }
}

Aug 07 2016

DrupalCampPA is July 30 and 31st in Pittsburgh, PA and yinz all should come! We just announced our full schedule with keynotes, giveaways, and more. Some quick highlights of why you should come! Submissions this year tried to take the stance of “getting off the island” by having more and more presentations about topics that plug into or are affiliated with Drupal without just being about Drupal. As a result we’ve got talks ranging from local environment building, Islandora (a Drupal-based library system), Drupal as an iPhone app backend, Angular / CDN leveraging, to unit testing, FAITH automated testing, team building, event organizing, CKEditor plugin development, building VR interfaces and voice / keyboard driven interfaces.

Day 2 also has a lot of open time dedicated to sprints, help swaps and BoFs, with a workshop for those needing more hands-on, “these are the building blocks of site building” style sessions. Some schedule highlights:


  • Mathew Radcliff (and anyone else that wants to) will be organizing Drupal 8 / contrib sprints.
  • The ELMS: Learning Network core team will be holding a global sprint all day Sunday, running late into the night and Monday with people remoting into slack, github and channels from State College, California, Canada, and the UK.

Raffle prizes

This year we’re expanding our “door prize” raffle to include some sweet tech. We’ll be raffling off a Raspberry Pi starter kit, Asus Google Chromebit, hockey jersey, and hoodies!


  • Megan Sanicki the Executive Director of the Drupal Association is Skyping in to kick off the event!
  • Scott Reeves Drupal 8 theme core contributor will be keynoting day 1 to talk about the journey from outside the community to doing core contributions!
  • Bryan Ollendyke (me) will be moderating another fun open panel discussion with voices of the community called 2020, The Panel:
    • Kirsten Burgard - Drupal Govcon organizer; Super voter never missed an election.
    • Katrina Wehr - Instructional Designer @ Penn State University. Former teacher, current #edtech smart design & pedagogy first enthusiast.
    • John P. Weiksnar - Drupal(tm) futurist * Active member of WNYDUG, Drupal Association, Society of Motion Picture and Television Engineers(r)
    • Fatima Sarah Khalid - I bring my own sunshine #CivicHacker & budding #Drupalista | coding for the people @TheCityofBoston | Fmr @MicrosoftNY @NYUengr

Other stuff to do in Pittsburgh

We’ll have activities planned for Saturday night, but beyond just DCPA, Pittsburgh is a really nice city to visit. We’ve got a great zoo, inclines with great views of the city, boat tours, stadiums, shopping, great places to eat (especially in the Strip District) and lots and lots of colleges and universities in the area, with lots of parks and places to walk not far from downtown (or downtown at Point State Park, among others). If you’re looking to spend extra time there or need recommendations on where to go (like my favorite coffee / co-working house north of the city), just ask!

We look forward to seeing everyone in three short weeks for a great day of community, code and learning!

Jul 11 2016

I know Dries caused a lot of smiles when he used Amazon Echo and Drupal 8 to be notified about “Awesome Sauce” going on sale. The future is bright and requires increasingly more ways of engaging with technology. But what if you wanted to start to have a conversation without an Echo to do it? What if we wanted to progressively enhance the browser experience to include lessons learned from conversational input technologies?

If you haven’t heard of it before, welcome Annyang and the Web Speech API, native in many browsers (in Chrome / Opera it’s enabled by default). Annyang is a simple wrapper library for the Web Speech API that lets you use simple phrases as the trigger for a JavaScript callback to fire. Using well-formed phrases, you can effectively wire voice-based commands into anything on the web.
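The pattern looks roughly like this. Annyang itself needs a browser microphone, so this sketch adds a tiny stand-in dispatcher purely to show the phrase-to-callback shape (real usage is along the lines of `annyang.addCommands(commands); annyang.start();`):

```javascript
// The Annyang pattern: a map of spoken phrases to callbacks, where
// '*name' captures the rest of the utterance as an argument.
const heard = [];
const commands = {
  'scroll down': () => heard.push('scroll-down'),
  'go to *page': (page) => heard.push(`navigate:${page}`),
};

// Stand-in for annyang matching a recognized utterance to a command,
// here only so the sketch runs outside a browser.
function simulateUtterance(utterance) {
  for (const [phrase, callback] of Object.entries(commands)) {
    if (phrase.includes('*')) {
      const prefix = phrase.slice(0, phrase.indexOf('*'));
      if (utterance.startsWith(prefix)) {
        callback(utterance.slice(prefix.length));
        return;
      }
    } else if (utterance === phrase) {
      callback();
      return;
    }
  }
}

simulateUtterance('scroll down');
simulateUtterance('go to courses');
console.log(heard);
```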

While looking for something that implements annyang, I stumbled on the Voice Commander module. It didn’t do everything I wanted, but it’s a great starting point since it integrated the library and processed menus. What I show in the video below is a fork of voice commander which is core in elmsln, allowing you to start speaking lots of options. I show this in the context of one of our production systems where I’ve done the authorization clicks. Browsers will ask if your microphone can be turned on (so no, this isn’t NSA stuff).

Right now we have support for going to systems, scrolling, going into edit mode, next / previous in a book, browser history forward / backward, and common menu items. In the future I’m looking to add a more conversational engine using the other side of Web Speech, which allows browsers to talk to you in their native “voice.” A proof of concept of that, called ELMSLN AI, is in the links section. Enjoy talking!

Jul 11 2016

I got a question in from twitter asking if we had a video showing what we were doing with Drupal, xAPI and H5P. I said sure! And I hurried off across youtube and my many blogs to find it. Just… gotta… find… that… post.. I mean, I know I did it I HAVE TO HAVE DONE IT ITS SO DAMN COOL.

An hour later. I have realized that I have never done a video about this. #facepalm

So, here’s this post then which shows and talks through the following:

  • What is an LRS - Learning Record Store (We use Learning Locker)
  • What is H5P - An HTML5 interactive widget creator that plugs into Drupal easily
  • What is xAPI - An experience API statement that tracks the action taken by a user

How is ELMS:LN using this? We’ve got native xAPI support thanks to H5P and the Tincan / xAPI Drupal modules. These allow you to start doing tracking within Drupal sites, effectively turning any Drupal site into a mini-LMS. This doesn’t mean it does everything for you. But considering the Quiz module has native support for emitting xAPI statements, and there are open source LRSs like Learning Locker built in PHP on a well-documented communications specification, the reality of every system being an LMS is not that far off.
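For reference, a minimal xAPI statement is just an actor, a verb, and an object. The IDs below are example values (the verb URI is from the ADL registry); this is the kind of JSON an H5P widget would emit to an LRS like Learning Locker:

```javascript
// A minimal xAPI statement: who did what to which activity.
// Actor and activity IDs are example values.
const statement = {
  actor: {
    name: 'Example Learner',
    mbox: 'mailto:learner@example.com',
  },
  verb: {
    id: 'http://adlnet.gov/expapi/verbs/completed',
    display: { 'en-US': 'completed' },
  },
  object: {
    id: 'http://example.com/activities/quiz-1',
    definition: { name: { 'en-US': 'Quiz 1' } },
  },
};

console.log(statement.verb.display['en-US']);
```

The LRS side is then just an HTTP POST of this JSON to the store’s statements endpoint.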

Obviously I’d ask you to explore this through ELMS:LN but we’re a drop in the infinite Drupal ocean, so let’s all learn and collect statements together shall we? :)

Check out the links and video below of it all in action!

Jun 06 2016
Jun 06

The title kind of explains it all. Check out the screencast for a quick demo on how to do it.

Jun 02 2016
Jun 02

This is a recording of a presentation I gave at Drupal Gov Day called Purespeed. There’s many posts on this site that are tagged with purespeed if you want to dig into deeper detail about anything mentioned in this talk. This talk consolidates a lot of lessons learned in optimizing drupal to power elms learning network. I talk through apache, php, mysql, front end, backend, module selection, cache bin management, and other optimizations in this comprehensive full-stack tuning talk. The focus of this is Drupal 7 but the majority of the talk applies to Drupal 8 and beyond and I mention some D8 modules to investigate as well.


May 09 2016
May 09

First off, hope you are all enjoying Drupalcon, super jealous.

It's been almost three months since I wrote about Creating secure, low-level bootstraps in D7, the gist of which is skipping index.php when you make webservice calls so that Drupal doesn't have to bootstrap as high. Now that I've been playing with this, we've started to work on a series of simplified calls that can propagate data across the network in different ways.

For example, if data that's in one system but cached in another is updated, I can tell all the other systems of that type to have their cache wiped. ELMS:LN is made up of a series of Services and Authority systems, and the calls between them form different patterns. The most common use-case, and the easiest to grasp, is updating a course name.

Let's say we have a course named 'Stuff' and we want to rename it 'Stuff 100'. Well, that simple node with the title 'Stuff' actually lives in anywhere from 10 to 15 systems depending on the structure of the network it's related to. Similarly, we don't know where the data is being updated (potentially) and we don't want to force people to only be allowed to update it in one place.


We could spider the call (and originally did). This means figuring out the 10 or so places you want to send data and then sending it to all of them at once (or one then the next, etc). This invokes N+1 load though: since we use non-blocking httprl based calls, 10 calls will open 10 simultaneous apache / php-fpm threads. That's a lot of network traffic when starting to scale up, creating self-imposed load spikes. We still need this call to go everywhere though, so we need to borrow a concept from Twitter.

Spider call


When you send a tweet, it doesn't go to 1 massive database which then everyone reads from; it goes to localized databases, and every X number of seconds messages are passed to neighbors in the Twitter network. You can try this out yourself: VPN to another part of the world, send a tweet, and see how long it takes to show up in the timeline of someone sitting next to you versus doing it on the same network.

Snaking the same call

For this call, we still send non-blocking and we still send to everyone in the network, but we use distributed recursion in order to accomplish it. Basically site A figures out "I have data that needs to go to these 9 systems". Then it sends the message, as well as the list of the next 8 systems, over to system B. B processes the request and JUST before finishing the connection says "wait, do I have others to pass this on to?" and sees the 8 systems in the list. It takes the first one as who to contact next, then sends along the stack of 7 systems who will also need the call. This process repeats recursively until there are no more calls (at which point I'm thinking of adding a "bite tail" function to call the originator and say it finished).

So that's cool but why do we care?

We care because this doesn't invoke N+1 load even though it still replicates the message across N+1 systems! The snake methodology of data propagation will keep at most 2 apache / php-fpm threads open at any one time.

System A calls B (non-blocking call, A hangs up and delivers you the end user the page)

System B calls C (non-block call, B hangs up, you have no idea this is happening)

System C calls D (non-block call, C hangs up, recursively til no more stack to call)

Snake will always invoke at most 2 execution threads instead of N+1, which is a huge performance and scale gain. The difference to the end user is marginal as everything is non-blocking either way. Just make sure that you correctly block recursive kick-offs on `hook_node_update` or you'll get something funny (and crippling) like this issue write up.
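The thread math is easy to see with a toy simulation (plain PHP, no HTTP involved; the function names here are made up): spider has the originator talk to every peer at once, while snake only ever has a caller and a callee active.

```php
// Toy model of the two propagation patterns. $peak tracks the most
// "threads" open at once; real calls are non-blocking httprl requests.
function spider_call(array $peers, &$peak) {
  // the originator opens a connection to every peer simultaneously
  $active = 1 + count($peers);
  $peak = max($peak, $active);
}

function snake_call(array $stack, &$peak) {
  if (empty($stack)) {
    return;
  }
  // only the caller and the next peer in the stack are ever active
  $peak = max($peak, 2);
  array_shift($stack);
  // the caller "hangs up" before the next hop starts, so we recurse
  // with just the remaining stack
  snake_call($stack, $peak);
}

$peers = array('B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J');
$spider_peak = 0;
spider_call($peers, $spider_peak);
$snake_peak = 0;
snake_call($peers, $snake_peak);
echo "spider peak: $spider_peak, snake peak: $snake_peak\n"; // 10 vs 2
```

Same 9 peers reached either way; the difference is whether the load lands all at once or is smeared across the chain.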

Code example for the function testing if it should ship off to the next member of the network:

// look for snake calls which we can perform on anything stupid enough
// to attempt it. This can destroy things if not done carefully
if (isset($_elmsln['args']['__snake_stack']) && !empty($_elmsln['args']['__snake_stack'])) {
  // tee up the next request as well as removing this call
  // from our queue of calls to ship off
  while (!isset($path)) {
    $bucket = array_pop($_elmsln['args']['__snake_stack']);
    $systype = _cis_connector_system_type($bucket);
    switch ($systype) {
      case 'service':
        $path = '/' . $_elmsln['args']['__course_context'] . '/';
        break;

      case 'authority':
        $path = '/';
        break;

      // support for skipping things outside authority / service scope
    }
  }
  // need to queue module / callback back up for the resigning of the request
  $_elmsln['args']['elmsln_module'] = $_elmsln['module'];
  $_elmsln['args']['elmsln_callback'] = $_elmsln['callback'];
  $version = str_replace('v', '', $_elmsln['args']['q']);
  // issue call now against the next item in the stack
  $request = array(
    'method' => strtoupper($method),
    'api' => $version,
    'bucket' => $bucket,
    'path' => $path,
    'data' => $_elmsln['args'],
  );
  // request the next item while indicating that recursive calls are allowed
  // so that it gets passed down the snake
  _elmsln_api_request($request, TRUE, TRUE);
}

Here's a screenshot of the code diff that it now takes for us to switch off of traditional non-blocking spidered calls to non-blocking snake calls: 8 lines of code to switch from Spider to Snake.

Apr 20 2016
Apr 20

Here is a screen cast of how to get started with Drupal 8 theme development.

In the video I cover:

  • using the drupal console to generate a theme from a base theme
  • creating a libraries yml file
  • adding global css to your theme
  • Using Kint with the devel module
  • debugging twig
  • adding your own twig file to your theme
Apr 20 2016
Apr 20

I admit that I haven't really looked at Drupal 8 too much yet. There are a variety of reasons why I haven't, and I surely don't want this to turn into a forum listing the pros and cons of D8. We can leave that for another post. 

The main reason for my delay is that, looking at the boilerplate for Symfony, I have always been a little turned off by how much code I have to write just to get a module up and running. However, the Drupal Console really alleviates a lot of the confusion and tedious work by helping you create all the files and directories that you need to get started with your own custom module.

Check out this video to see how easy it is to get a Hello World module up and running. It will take you less than 5 minutes to create your very own working Drupal 8 module. 

Apr 06 2016
Apr 06

A question on the PSU DUG Slack channel got me thinking. How is it that websites are still being constructed at Penn State without any thought being put into how they are going to be maintained? Or by whom?

To be clear, I am not talking about content creation or maintenance, but about maintaining the code/server/DB/etc. that supports or runs the site, and about developing new features and functionality beyond just updating code or applying security patches. Of course, this is not restricted to Drupal development - there are many other examples.

As web developers, I'm pretty sure we all understand the importance of this, but how has this not 'trickled up'? (To coin an awful phrase.)

So what can we do to truly address this problem? What has worked before? What strategies have you enacted? Are there channels of communication we have not tried? Have we missed something?

Perhaps I am incorrect in my observations...

Please add any comments that you might have...

Mar 24 2016
Mar 24

Accessibility is a big deal! Good, now that that’s in the teaser for this let’s dig in.

ACCESSIBILITY IS A BIG DEAL - But not everyone knows how to help or what to do

The biggest problem is raising awareness of the accessibility issues users of your site face, how they NEED the site structured, and what to do to meet them there. The a11y project exists to help raise that awareness as well as provide techniques and checklists for those building things for the web.

This is where the Accessibility Toolkit (a11y) hopes to help by providing some additional tools to Drupal for improving accessibility of your websites. The a11y module currently provides a block which allows users to:

  • Change your site’s fonts to one optimized for those with dyslexia
  • Change the contrast of your site
  • Change the size of your site’s interface
  • Invert the colors / use a darker interface with white-on-black text and yellow links

The goal of the module is to provide tools that can be applied to any Drupal site to improve options for your users that go beyond aria landmarks and correctly ordered headings. A11y is about far more than “just” blind users; it’s about improving access for all.

This video shows what the module provides in the context of ELMSLN. Backend support so that there isn’t a FOUC is planned but not yet implemented.

Mar 21 2016
Mar 21

I had a question the other day on twitter as to whether it’s possible to run drush site alias commands in parallel. Apparently I was asking the question the wrong way, as @greg_1_anderson (Drush co-maintainer) told me that Drush has supported --concurrency for several years. In fact, it’s been supported so long and is so underused that they might remove it from Drush as a result :).

So here’s my post doing some testing of running drush with concurrency to illustrate how and why you might want to use it. Criteria for using this:

  • You have multiple site aliases you want to run the same command against
  • This could be a multi-site or running against multiple remotes

Multisite is probably the best use-case for improving performance because you might just say “Use ansible” or some other provisioner to hit things in parallel. I live blogged via this issue in the ELMSLN issue queue while playing with this but I’m consolidating the findings here.

The server specs that these tests were run on: 2 CPU, 6 gig RHEL 6 based server. To test, I ran `drush @courses-all rr -y`, which in our setup will run registry rebuild against 58 sites in a multi-site configuration. RR is provided by the registry rebuild drush plugin. While drush defaults to a concurrency of 1, I set it explicitly just to make sure.

time drush @courses-all rr --concurrency=1 --strict=0 -v -y

Here’s what is produced if we did that command:

real 13m42.743s

Registry Rebuild is an aggressive command, so we kind of expect to see something like this. We’re at almost 14 minutes here. This is what kicked off my research: how to optimize this command.

I started with a concurrency value of 10, that proved to be WAY too high. It maxed out my CPU but I got MUCH better results:

real 5m11.379s

It brought things down to 5 minutes, and 5 minutes of CPU max vs 14 minutes of much less is a good trade. From there, I started stepping down from 10 to see if we could find the threshold. The visual below is runs at 10, 1, 6 and 7 concurrency. As you can see, 10, 6 and 7 look very similar; we slam the CPU and we max out. The times for 6 and 7 were around 4:51. A huge gain, but we’re still rocking 100% CPU, which isn’t a good thing.

visualizing CPU usage

So we kept moving down. This graph shows 5 vs 3 vs 2 for the CPU bumps.

5 vs 3 vs 2

Three and two are very promising because while we hit 100% CPU, it’s for a very short period of time, and at concurrency 2 we don’t hit 100, we just get close (90ish). The most impressive thing about only 2 or 3 concurrent calls is that we finish the job substantially faster than 1 at a time (and not much different from 10).

Concurrency 3 (C3)

real 4m56.193s

Concurrency 2 (C2)

real 5m36.804s


While C3 is 40 seconds faster than C2, it might be worth just using two threads at a time because the gain isn’t big enough to justify the extra load. C2 is almost 60% faster than the baseline. As a result, I adjusted our upgrade script to take advantage of the concurrency value so that we now do two threads at a time. I still need to run this in non-vagrant environments, but the calls against more commands than just registry rebuild should provide an impressive gain.

Mar 08 2016
Mar 08

It’s been a few months since I first mentioned the Git Book module here on DPE. I haven’t done much with it since but was able to scrape together a rather epic sprint today. Coupled with improvements to ELMSLN in general, this thing is getting close to a pretty killer workflow for book creation. The scenario we’re striving for:

  • Read in a git repo (or create a new one) to provide a git repository per book in Drupal
  • Read in any structured markdown repository (starting with support for Read the Docs)
  • Allow people to work either in a CMS or on github and successfully push “code” changes back and forth
  • Allow for working remote (github) and syncing local (CMS) to further democratize publishing
  • Git Book is the first area; we’ll be making an Open Curriculum YAML / markdown based spec that ELMSLN will be able to read and automatically setup… everything

As you’ll see in the video (using Drush docs as an example) this now plays nice with Outline Designer and the rest of Drupal. It now also has packaged support for EpicEditor; a sweet and simple markdown editor that WYSIWYG API has built in support for (though we patch it for settings reflected in the screencast).

Mar 07 2016
Mar 07

This is a Video post that shows how easy it is to whip up ELMSLN on an EC2 instance. Please at least use a t2.small instance as ELMSLN requires 2GB of memory for mysql during automation.


Mar 07 2016
Mar 07

A quick update on an issue I had with an Open Layers map and replacing the OL modules with Leaflet and IP Geolocation Views & Maps .

Mar 07 2016
Mar 07

In order to push education, we’ve needed to at times bend Drupal to our will.

ELMSLN holds Drupal 7 as its core foundation, with the idea that in order to innovate, we need to fragment functionality but not user experience. We can push the envelope in one area of education and design while still holding fast to the realities of today’s experiences in others. With ELMSLN, we’ve created an infrastructure for sustaining innovation while keeping your baseline of course experiences in check.

Because of this, we’ve got sites… boatloads and boatloads of sites. A course is not just 1 Drupal site; the bulk of its experience is potentially delivered by 4 or 5 sites, influenced by data from a 6th and provided with high quality media from a 7th. This overhead comes at obvious costs (like complexity, though thankfully Drush / bash keep life simple) and replication of database tables / files, for gains in flexibility.

In order to correctly keep the experience in sync for students and instructors so they never need to know what’s going on under the hood, we need data and experiences to be in sync. For experiences, that’s easy: we’ve built a solid, sub-themed UI that helps people quickly forget that it’s built in Drupal. For data though, we needed something more. We’ve leaned on two capabilities that were easy to come by: cron jobs with keys, and RESTful web services via the RESTws module. These each come with their own set of problems. Cron is slow, and destructive to caches and performance; while RESTws is well targeted to modification of objects. The problem is, not everything is an object.

And so, we built an API that can contextually bootstrap Drupal based on the kind of command being run. I based this off of a cool module I found called the JS Module. The JS module provides a drop in, alternate index.php which bootstraps just the database and the JS module itself. I didn’t like that it was making too many assumptions about JavaScript so I took the concept and ran with it.

This screen cast details a low-bootstrap API that I wrote to allow ELMSLN to cheat on D7 bootstraps. We needed this because of the inherent structure of our system (lots of sites that need to keep very specific data in sync), but it’s an example that I think others could fork for their own purposes. A few API capabilities we currently support for lower-bootstrap:

  • remote cache bin clearing - DB bootstrap only
  • remote vset - variables bootstrap only
  • remote theme settings sync - variables bootstrap only
  • sync roster - a complex backend process that uses a full bootstrap but without page execution


Mar 07 2016
Mar 07

Using drupal_static() on your helper functions is a great habit to get into when developing your modules. Drupal static ensures that your function will only run through its logic once during the bootstrap process. Depending on your function's logic, this can save a lot of processing time if the function is being called multiple times during bootstrap or page load.

Implementing drupal_static

Let's take the following function for example. Here is a helper function that takes a URL string from the ELMSLN network and breaks it up into identifiable parts.

/**
 * Helper function to break an elmsln url into
 * identifiable parts.
 *
 * @param  string $url
 * @return array
 *         - protocol
 *         - path
 *         - subdomain
 *         - domain
 */
function _cis_connector_url_get_properties($url) {
  // split the url at the protocol
  $url_array = preg_split('/(:\/\/)/', $url);
  $properties['protocol'] = $url_array[0];
  // drop the protocol and split the rest at the first slash
  $url_array = preg_split('/\//', $url_array[1], 2);
  $properties['path'] = $url_array[1];
  // split the host into subdomain and domain
  $url_array = preg_split('/\./', $url_array[0], 2);
  $properties['subdomain'] = $url_array[0];
  $properties['domain'] = $url_array[1];

  return $properties;
}
By itself the function doesn't invest a ton of processing time in breaking the URL string into an array, but consider the function that is utilizing it. In our module we are calling it from the very dangerous `hook_url_outbound_alter()`. Depending on what page we are rendering, this function has the potential of being called a ton of times during the bootstrap process. It would be great if this function could just instantly return the value from the last time I called it. This is where drupal_static comes into play. Drupal static will check to see if the function has run already, and if it has, it will immediately return its value, saving you a lot of processing time. Let's look at how to implement drupal_static on this function.

function _cis_connector_url_get_properties($url) {
  $properties = &drupal_static(__FUNCTION__);
  if (!isset($properties)) {
    // split the url at the protocol
    $url_array = preg_split('/(:\/\/)/', $url);
    $properties['protocol'] = $url_array[0];
    // drop the protocol and split the rest at the first slash
    $url_array = preg_split('/\//', $url_array[1], 2);
    $properties['path'] = $url_array[1];
    // split the host into subdomain and domain
    $url_array = preg_split('/\./', $url_array[0], 2);
    $properties['subdomain'] = $url_array[0];
    $properties['domain'] = $url_array[1];
  }

  return $properties;
}
It's as easy as that! We first assign `$properties` by reference from the drupal_static function. Then we check to see if drupal_static already has a value for it. If it doesn't, the function runs through its logic. If it does, we immediately return the cached `$properties` value. Pretty cool. Let's test to make sure this is working.

function _cis_connector_url_get_properties($url) {
  $properties = &drupal_static(__FUNCTION__);
  if (!isset($properties)) {
    // split the url at the protocol
    $url_array = preg_split('/(:\/\/)/', $url);
    $properties['protocol'] = $url_array[0];
    // drop the protocol and split the rest at the first slash
    $url_array = preg_split('/\//', $url_array[1], 2);
    $properties['path'] = $url_array[1];
    // split the host into subdomain and domain
    $url_array = preg_split('/\./', $url_array[0], 2);
    $properties['subdomain'] = $url_array[0];
    $properties['domain'] = $url_array[1];

    drush_print_r('Processed the following url: ' . $url);
  }

  return $properties;
}

Then we'll use Drush to execute some example calls:


# Result
Processed the following url: http://courses.elmsln.local/sing100

It works! It only processed the url one time!

Implementing drupal_static on a function with dynamic variables

But what happens if we run a bunch of different urls through the function? Let's try:


# Result
Processed the following url: http://courses.elmsln.local/sing100

Uh-oh. It only processed the first url. That's because of our drupal_static name. Drupal static calls need to have unique names passed to them so that the caching layer knows which cached values belong to which functions. An easy way of doing this is simply using the name of the parent function via `&drupal_static(__FUNCTION__)`. However, this only works if the arguments being passed into the parent function are static, meaning they will always be the same during the bootstrap process. In our case, the variable $url has multiple values.

The simple fix is to create a unique name based on the function name AND the incoming arguments. To do this, I like to run them through `drupal_hash_base64()`. This makes a really nice hash value even if you have long strings or arrays coming in as arguments. Let's take a look at what this will look like:
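For reference, Drupal 7's drupal_hash_base64() is essentially a URL-safe base64 encoding of a raw sha256 digest, so the id stays short no matter how big the arguments are. A plain-PHP sketch of the same idea (the function name here is made up):

```php
// Plain-PHP approximation of what D7's drupal_hash_base64() does:
// raw sha256 digest, base64-encoded, with URL-unsafe characters
// swapped out and the '=' padding stripped.
function hash_base64_sketch($data) {
  $hash = base64_encode(hash('sha256', $data, TRUE));
  return strtr($hash, array('+' => '-', '/' => '_', '=' => ''));
}

// a unique static id per function + argument combination
$id = hash_base64_sketch('_cis_connector_url_get_properties' . 'http://courses.elmsln.local/sing100');
echo $id;
```

The result is always 43 characters, so the static cache keys stay uniform even when the inputs are long serialized strings.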

function _cis_connector_url_get_properties($url) {
  $function_id = drupal_hash_base64(__FUNCTION__ . $url);
  $properties = &drupal_static($function_id);
  if (!isset($properties)) {
    // split the url at the protocol
    $url_array = preg_split('/(:\/\/)/', $url);
    $properties['protocol'] = $url_array[0];
    // drop the protocol and split the rest at the first slash
    $url_array = preg_split('/\//', $url_array[1], 2);
    $properties['path'] = $url_array[1];
    // split the host into subdomain and domain
    $url_array = preg_split('/\./', $url_array[0], 2);
    $properties['subdomain'] = $url_array[0];
    $properties['domain'] = $url_array[1];

    drush_print_r('Processed the following url: ' . $url);
  }

  return $properties;
}

Now let's test it:


# Result
Processed the following url: http://courses.elmsln.local/sing100
Processed the following url: http://blog.elmsln.local/sing100
Processed the following url: http://studio.elmsln.local/sing100

As you can see, even though we sent the function seven urls, it only ran through the process three times. Success!


Drupal static is an extremely easy way of making your functions more performant. Just make sure that if your function's arguments are going to vary, you create a unique name to pass to drupal_static().


In the comments below, Panagiotis Moutsopoulos shows an alternative way to handle this which I like better. Instead of putting the argument inside of the drupal_static function id, you can simply segment your properties under the value of the function argument. In your if statement you then don't just check for `$properties`; you check for `$properties[$url]`.

function _cis_connector_url_get_properties($url) {
  $properties = &drupal_static(__FUNCTION__);
  if (!isset($properties[$url])) {
    // split the url at the protocol
    $url_array = preg_split('/(:\/\/)/', $url);
    $properties[$url]['protocol'] = $url_array[0];
    // drop the protocol and split the rest at the first slash
    $url_array = preg_split('/\//', $url_array[1], 2);
    $properties[$url]['path'] = $url_array[1];
    // split the host into subdomain and domain
    $url_array = preg_split('/\./', $url_array[0], 2);
    $properties[$url]['subdomain'] = $url_array[0];
    $properties[$url]['domain'] = $url_array[1];

    drush_print_r('Processed the following url: ' . $url);
  }

  return $properties[$url];
}
Dec 08 2015
Dec 08

The ELMS Learning Network team is seeking to not only transform education, but also the concept of content and how you can interact with a CMS. By taking our network based approach to educational delivery to Drupal, and viewing Drupal as more of an engine connecting people rather than a product out of the box, we can craft innovative solutions (like this one).

We’ve created the ability for Drupal to take a git repo, ingest it, look for Read The Docs style outline structures, and import all the content. It does this by a new module called Git Book that implemented the Git.php Library.

In this video, I show how we’re able to ingest the Drush Ops docs. It also offers a sneak peek at the interface work we’re undertaking to unify our multi-distro approach to tackling big systems.

This isn’t just about pulling in content, it’s about user empowerment. Faculty that understand the power of markdown and version control will be able to craft their courses in whatever location they want to (like ELMSLN faculty @_mike_collins) and then share it over to ELMSLN for use in courses. This will encourage remixing and open up the possibility for PRs against content.

In the future, we’ll be making a fork of the ReadTheDocs spec that (should) be compatible with it so that we can have simple git based files that drive the setup of complex course networks, automatically creating drupal sites, networking them, producing content outlines, assignments, discussion topics, media, etc.

Nov 18 2015
Nov 18

I recently defended my Master of Science thesis with a central theme of open source, activism and Drupal. You can grab a copy here or in the article links. I started this research project in 2007; in part life got in the way, but I also didn’t know how to tell the story.

It’s the story of our community. My love for our community, my love for the world we are all creating, together. One that’s more open, more connected, and more equal. I went several years without knowing what to even call what I was witnessing. I used to refer to this as Structured Anarchy.

Structured Anarchy was describing the structure of the community that organized around Drupal: there seemed to be chaos and disorder, yet somehow everyone was able to build these amazing systems with limited effort and resources. As I started to go through the interview data though, I happened upon a different concept. I wanted to analyze where this community came from, because everyone talked about community but never identified who that entailed. What caused this vibrant community to emerge that keeps this chaos manageable?

I landed upon a phrase that seemed to resonate with others as well: Information Altruism. Information Altruism is the concept identified in my research that it isn’t Free Open Source Software that changes the nature of work, not in itself anyway. It’s the application of a community that utilizes that FOSS and, more specifically, applies the notions of free and time-banks to traditionally vendor- and pay-based ecosystems. Information Altruism provides a case study in how the intentional donation of effort can alter the concept of effort and work within an organization.

The research showcases that a community can emerge out of the sustained donation of effort. Thank you to everyone in our local community for support over the years and to the Drupal community at large. Nothing I do would be possible without you.

This isn’t the end, it’s just the beginning of a new chapter as our community matures and gains momentum. Ex Uno Plures.

Nov 17 2015
Nov 17

This video talks through how XMLRPC Page Load and HTTPRL Spider can be used to warm caches on private / authenticated sites. XMLRPC Page Load provides a callback that tricks Drupal into thinking that it’s delivering a page to a certain user account. It does this by simulating page delivery but never actually writing the output anywhere. What it does instead is just return whether it completed.

This allows you to hook it up to crontab via httprl spider and simulate a certain user account accessing every page on your site. This can be used to warm the caches on sites that are locked away behind authentication systems so that you can rebuild caches on whatever interval you want.

There’s also a (crazy) command in HTTPRL called huss (not shown) that will simulate every user hitting every page; effectively getting full cache coverage on something like an authcache enabled site.

Nov 09 2015
Nov 09

For those of you who haven't worked with Entity Metadata Wrapper in Drupal before, drop the burrito and visit the documentation page on drupal.org.

Entity Metadata Wrapper takes working with render arrays from blah, to "yeah!" in seconds.  For instance, remember how painful it was to get the file url out of an image field before:

$image_uri = $variables['field_svg'][0]['uri'];
$image_url = file_create_url($image_uri);

Ok... not excruciating, but not fun either. What would be fun is if our object were smart enough to just give us the url without us having to generate it each time. Something like this:

$image_url = $node_wrapper->field_svg->file->url->value();

And remember when you wanted to access information about an article that is referenced by another article you had to write this:

$referenced_article = node_load($node->field_article_ref[0]['target_id']);
$referenced_article_title = $referenced_article->title;

Wouldn't it make more sense to just write:

$referenced_article_title = $node_wrapper->field_article_ref->title->value();

Well that's the magic of Entity Metadata Wrapper. And I'm not even scratching the surface. The real power surfaces when you start updating data within those objects!

But one problem arises; how do we know what information we have access to when using Entity Metadata Wrapper on our objects? In this video I'll show you how to use the entity method 'getPropertyInfo()' to inspect those objects within Entity Metadata Wrapper.


Oct 21 2015
Oct 21

This video shows the automation involved in creating a new tool in ELMSLN. A tool in ELMSLN = new install profile = new domain = new drupal distribution.

New Idea + New Distribution + New Domain = New Tool

We do this by running a bash script against the Ulmus Sub-distro which is shown in the video. This allows us to utilize a common base of modules for a “core above core” approach to distribution development. It also allows us to rapid prototype new ideas in a vagrant environment, to produce well-positioned applications for ELMSLN while potentially being in a classroom talking to a faculty member about their ideas and pedagogical needs.

Oct 20 2015
Oct 20

This is a quick video I shot showing how you can use the YouTube Uploader widget to streamline your workflows of interacting with Drupal and Youtube. I’m demonstrating this in the context of ELMS Learning Network as we’re looking at utilizing this module as part of our ELMSmedia distribution. It’s pretty impressive what the 7.x-2.x version is able to do and without further ado; enjoy. Props to my student Mark (@mmilutinovic13) for experimenting with this first.

Oct 08 2015

One of the options in Nittany Vagrant is to build a local, development version of an existing Drupal site - copying the files and database, then downloading it to the Vagrant VM. It is pretty straightforward, but there is the occasional trouble spot.

Here is a short video of how to do it.

Oct 08 2015

There are many dirty little secrets in Drupal 7 core’s API when it comes to inconsistencies and oversights. It’s a big part of why so much care is being placed in D8 and why it’s taking so long, because people realize this is a platform that’s used for the long haul and core decisions today will have lasting impacts a decade from now.

That said, I discovered one a year or so ago and kept putting it off, hoping it would go away on its own. Well, it hasn’t, and here comes a potential scenario that I detail in an ELMSLN issue queue thread, which I like to call Role-mageddon. While this doesn’t just affect distributions, install profiles, and features, you are a lot more likely to run into the problem with them; and so here we go.

Example Scenario

Site 1 (Profile A)

  • Developer Adds a Feature X that adds 2 roles
  • Then creates Views, Rules, and blocks and associates roles to access / visibility
  • Then they create Feature Y with 1 role and do the same as before

Site 2 (Profile A + the additions above)

  • Developer Enables Feature Y
  • Developer Enables Feature X
  • All access / visibility criteria of Roles / functionality supplied in Y is flipped with X
  • Oh Sh….

So What happened?

Roles in Drupal are stored as id, name, and weight. The id is generated by the database auto-increment, so the anonymous role always gets rid 1 and the authenticated role always gets rid 2. After that, it’s the wild west: whichever role is created first gets the next id.

Well, if Roles 1 and 2 are created and then Role 3, they’ll get rids of 3, 4, and 5.

If instead Role 3 is created first and then Roles 1 and 2, they’ll still get rids of 3, 4, and 5, but all Views, Rules, blocks, and anything else associated with the rid identifier is now associated with the wrong role!

Without this knowledge you could have, oh, I don’t know, made all your admin blocks visible to the ‘bosswhopays’ role on production and not understood why. This would also happen if you’re in dev and have a role that doesn’t move up to production but was created prior to the others that are about to. You move the features up, and none of the settings are kept.

So how do we avoid Role-mageddon?

Role Export adds a column called machine_name to the role table, and then uses the machine_name value to generate an md5 hash, which is used to create the rid. Then, so long as machine_names are unique, it effectively guarantees that rids are unique and won’t collide with other roles that you import / migrate.
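To picture the idea, here’s an illustrative PHP sketch of deriving a stable rid from a machine name. This is not necessarily Role Export’s exact algorithm; check the module’s source for the real implementation:

```php
<?php
// Illustration only -- not Role Export's literal code.
function example_machine_name_to_rid($machine_name) {
  // Hash the machine name and turn the first 8 hex chars into an integer,
  // masked to stay within a positive 32-bit range.
  $rid = hexdec(substr(md5($machine_name), 0, 8)) & 0x7FFFFFFF;
  // Never collide with the reserved rids (1 = anonymous, 2 = authenticated).
  return max(3, $rid);
}

// The same machine name always yields the same rid, no matter which
// feature is enabled first.
echo example_machine_name_to_rid('course_editor');
```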

The import / export order no longer matters because a given machine name will always map to the same rid.

Great for the future, but what about my existing site?

Role Export already had support for automatically remapping the updated rid so your users’ roles don’t get lost, as well as the admin role variable and the permissions associated with the role. That’s great; without those, this would have been basically worthless for existing sites.

What my patch of infinite lack of sleep provides is the same exact thing, but for Views, Rules, Blocks, and Masquerade settings (since that has security implications and is popular), as well as a hook that can be invoked to fix your other variables like IMCE, Piwik, and LTI.

Oct 08 2015

Yesterday something significant occurred: the Brandywine campus web site launched on the new Polaris 2 Drupal platform. And soon the Abington Campus web site will move to the same platform. And perhaps many more.

Ten years ago this would not have been possible. Not because of the technology but because of the landscape and attitude here at Penn State. Words like 'portal' and 'content management system' were perceived as negatives, as things to avoid, as poorly implemented technologies.

That has changed.

One could argue that moving the Penn State home page and new site to Drupal was the significant event, but I was not convinced. That change could have been an anomaly, a lack of other, better options, or just pure luck. Not that a number of people in the Penn State Drupal community did not put a great deal of time and effort into presenting Drupal as a viable option, but once that argument was presented and accepted, the process to actually create the site was... let's say byzantine. So in my mind moving to Drupal, while radical and important, did not 'count'.

So yesterday's launch of the Brandywine web site confirms not only the success of Drupal at Penn State, but also a change in mindset and attitudes at a much higher and broader level at the University.  Additional possibilities for the use of the Polaris 2 platform may be in the works, hopefully we will learn more about those soon.

Perhaps there will also be a Polaris 3....

Oct 08 2015

I was made aware that it’s been close to a year since I actually did a demo of the reason that I contribute so many modules to drupal.org. For those that don’t know, the reason I exist is a project called ELMS Learning Network. It is a Drupal 7 based deployment methodology that takes the many parts of an LMS and fragments them across a series of Drupal distributions.

We then take those distributions and using RestWS, single-sign-on, and a common module and design meta-distro, stitch the experience together. The goal is to create more engaging, customized learning ecosystems by envisioning each educational experience as a snow-flake. The project’s iconography of a snow-flake comes both from the networked nature of the different distributions that make up the system as well as that mindset that we need to treat courses as individual art forms that can be beautiful in their own unique way.

Anyway, below is a video showing everything about the project in its current state. If you have any questions about what modules make it up, they can all be found in the links below along with the repo.

Upcoming camp presentations about ELMSLN:

Oct 08 2015

It was a nice little Saturday in Happy Valley. Since my son is forcing us to watch the Sponge Bob Square Pants movie over and over, I decided to multi-task. Bryan Ollendyke has been talking about PHP 7 a tad bit lately, so I decided to whip up an instance.

Granted, this is not ready for prime time just yet; however, it is extremely fast and everything that I have tested so far works fine. Turn on authcache and it is REALLY fast!

There is a session_destroy Warning when logging out of drupal but that can be suppressed and probably is being addressed. If you run this check out the status page and you will see green!

Here is the link to the github repo; make a pull request if you have any patches.

I am currently working on a vagrantfile for it, so if you don't have access to a linode or a digital ocean account hang tight.

Here is the one-liner to run on your newly created VM.

yum install git -y;git clone; cd php7-centos7-mysql5.6; chmod 700; ./

For some quick stats using this setup, here is what devel is telling me. :)

Executed 54 queries in 12.52 ms. Queries exceeding 5 ms are highlighted. Page execution time was 34.56 ms. Memory used at: devel_boot()=0.56 MB, devel_shutdown()=2.86 MB, PHP peak=2 MB.


This should only be used for testing the future.

Oct 08 2015

I’m in the middle of several Drupal Camps / Cons (any event over 1000 people is no longer a “Camp” but that’s for another time) and it’s occurred to me: I can no longer learn by going. Now, this is learning in the traditional sense of what I used to go to Camps for (I’ve been coming to camps for 8 years now).

It’s not that I’m advanced beyond younger me, it’s that this ecosystem is an endless rabbit hole of buzz words, frameworks, CLIs, VMs, architectural components, issue queues, PRs, and other words that must make most anyone new to anything web immediately freak out and hide. And so, in coming to events I feel I rarely get to learn about implementing BEM or SASS or DrupalConsole or Composer or Dependency Injection because 1 hour or 50 minutes for a talk isn’t close to enough to scratch the surface of these concepts.

What IS important though at Camp sessions:

  1. Community; hearing from peers and discovering together how much all of us don’t know everything
  2. Learning new buzz words, which is critical because I don’t know what to Google!

For example, I assumed everyone already knew what Drupal Console was, only to find out that most people I talk to go “Duh-huh-waaaa?”. And so, to heap some more buzz words on you all from the Camp circuit, and explain why I’m so excited for them :)


Drush

Drush is the original command-line Drupal (you’ll see why I frame it that way soon). It allows you to communicate with your Drupal site via the command line, issue commands, routinize boring tasks, and do complex, scary tasks as single commands. Other modules can supply plugins for it, and you can also write standalone plugins that aren’t tied to a module.
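A few everyday Drush commands, for anyone who hasn’t used it (the module name is just an example):

```shell
drush dl views                  # download the Views module from drupal.org
drush en views -y               # enable it, skipping the confirmation prompt
drush cc all                    # clear all caches
drush sql-dump > backup.sql     # dump the site database to a file
drush vset site_name "My Site"  # set a Drupal variable
```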

Drupal Console

Drupal Console is a bit of the new kid on the block. It was originally met with resistance, as it’s another CLI for Drupal, but has since started to find its place in the early D8 CLI communities that are developing. What’s cool about Drupal Console is that it’s starting to find a happy middle ground of coverage for things that Drush was kind of weak at. What else is cool is that these communities are working together to reduce overlap in capability AND (more importantly) allow each to call the other natively. This means you’ll be able to start execution threads like `drush drupal-console ….` or `drupal drush en views`.

Console doesn’t have support for running off and grabbing Views. It’s much more about scaffolding out the things that you need to work with your Drupal site, but without writing all the code yourself. Think of it more as a utility to help you work with Drupal on the development side of the house. While Drush has plugins for things like module building, code review, and entity scaffolding, those were never its strong suit. This is where Drupal Console is focusing its efforts.

Drupal Console also has Symfony-based plugins, since it’s building against Symfony Console. Drush, on the other hand, is your traditional Drupal architecture, supporting multiple versions of Drupal, etc.

Why you should care

Because if this PR / thread in Drush-ops moves forward, it would mean the two could call each other. This would give CLI devs two different ways of developing: pulling in Symfony components, or using traditional Drupal guts. It also gets you a bigger community working on things at the low level so that stuff at the high level (site building, theming, etc.) is easier and more scriptable. With Drush you’ll (still) be able to build make files and get all the dependencies for your site, and with Console you’ll be able to write new modules and sub-themes a lot faster because of the templating / file trees that it will let you build.

As we get towards a stable D8, Drush and Drupal Console, we’ll have so much raw HP under the hood that you’ll be able to automate all kinds of things. It also means these communities can tackle different sides of the under the hood problem space for getting Drupal going.

For example, I maintain Drush Recipes, which allows for tokenized Drush chaining. This lets Drush call itself, and via arguments you get a lot of stuff done without much effort. Jesus Olivas just made me aware that there’s some Command Chaining being worked on for Drupal Console. This way you could string together commands in Console (allowing Console to call itself, basically) to get to something more functional without having to manually type the same kinds of commands over and over (he gives an example in the blog post).

The future Drupal Development environment / workflow

Here’s a few projects that, if they merge efforts even a little bit in capabilities over the next year or so, will make some insane things start to get automated, and working with Drupal will make anything else look painful by comparison (again, looking ahead; right now… yowza!).

What this would give you workflow wise:

  • An Ansible / YML based provisioning of a server, netting all dependencies for D8, Console, Drush, etc
  • You could login and be presented w/ a prompting like Nittany Vagrant provides, asking what kind of site you want to build
  • With even minimal logic to the script (yes I’d like a Drupal site that’s for commerce for example), we could issue a drush recipe that…
  • Runs a make file, runs `drush si minimal` on itself, grabs dependencies if they weren’t in the make file, sets variables, and imports default configuration from features to make the OOTB “distribution” a rock solid starting point for anyone to build off of.
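Strung together as plain Drush commands, the steps above might look roughly like this. Everything here is hypothetical: the make file name, database credentials, and site name would come from your own project:

```shell
drush make distro.make mysite        # fetch core + contrib per the make file
cd mysite
drush si minimal -y \
  --db-url=mysql://user:pass@localhost/mysite \
  --account-name=admin               # install the minimal profile
drush en features -y                 # enable Features
drush fra -y                         # revert/import all feature defaults
drush vset site_name "Rock Solid Starting Point"
```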

Then we’d ask other questions, like “What’s the name of this client?”. Answering something like “bigbank” would allow…

  • Drush Recipes to tokenize the input of Drush to call Drupal Console
  • Console would then be told “Hey, we need to build new modules called bigbank_custom_paygateway, bigbank_helper, and bigbank_theme” and would create all the components for the types of custom modules that we all use on every deployment with anyone
  • Then enable these modules in the new site
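With Drupal Console, that generation step might look roughly like this. The module and theme names come from the “bigbank” example above, and option names vary between Console releases, so treat the flags as approximate and check `drupal generate:module --help` for your version:

```shell
drupal generate:module \
  --module="Bigbank helper" \
  --machine-name="bigbank_helper" \
  --package="Bigbank"
drupal generate:theme \
  --theme="Bigbank theme" \
  --machine-name="bigbank_theme"
```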

Eventually we can get into automatic sub-theme creation, asking what kind of theme you want to base off of (zurb, bootstrap, mothership, custom, etc.) and automating a lot of the setup in that area too. We could probably get to the point with theming where you can ask Drupal Console for the (default) template files that you want so that it generates those too.

The future is going to get so insane (buzz-word wise) that we’ll need to keep investing in automation just to keep up. We’ll tell each other to just download XYZ and run through the workflow; there will be no more “hey, go get all these dependencies and…”. No, things will just be awesome!

Now, create a PuPHPet-style interface that builds the YML customizations (or maybe like this thing), asks the crappy bash questions, and tokenizes everything downstream… stuff that system into a simple Drupal site and hand it to a Project Manager to kick off a new build… and THEY’VE started building the site. True empowerment and liberation of workflows is upon us. Now let’s all go drink coffee!

