Jul 09 2019

This was a 90-minute session from DrupalCon Seattle 2019. The room was not recorded :( BUT we recorded locally from Mike’s laptop! Enjoy! Our slides are also attached in the links below. The room was overflowing and we got great feedback on it, so I hope you enjoy it too.

Seems that it was pretty well received given this tweet of me floating around jumping up and down :)

Amazing and entertaining session about web components @btopro :thumb: pic.twitter.com/kvzyl7SANw

– Abdelrahman Ibrahim (@IAboEyad) April 11, 2019
Apr 05 2019

It’s been about a year since Nikki and I HAX‘ed DrupalCon Nashville. Now, Mike and I are about to embark to DrupalCon Seattle to, once again, HAX all the things. HAXeditor has come a really long way since the last reveal. In the time since last year:

Web component opportunities at DrupalCon to connect with us and others

If you’d like to see what we’ve been up to and what we’re doing in the web components space (where we spend 99% of our day now), watch this video below that shows the theme layer in HAXcms and how it separates state from design.

May 24 2018

Photo by Patrick Fore on Unsplash

Nikki and I came, we saw, we talked quickly, and #haxtheweb has started to make sense to some people. But not enough, and that’s where you come in. There are some amazing parallels in two talks from DrupalCon: ours on HAX the web, and another by Pantheon about WordPress 5.0 changes / Gutenberg. If you didn’t get a chance to see them, I recommend watching the WordPress one and then our HAX one as almost a “What if Drupal had its own Gutenberg project,” which the speakers talk about in the first one.

What’s possible with WordPress 5.0

Web components, Polymer and HAX

The unspoken crisis

There’s a great truth spoken towards the end of the What’s possible with WordPress 5.0 talk. How does one sell Drupal if WordPress has this amazing authoring experience? It’s exactly the prompting we gave at DrupalCon Baltimore in 2017: save Drupal through a superior authoring experience. If WordPress beats us to the ultimate, usable experience in content production (or hell, even just a half-decent one), how is your agency able to sell Drupal? It can’t.

I’m really glad that this conversation happened (I had a conflict or would have been there) and that people were willing to be honest and have an open discussion of how we survive in the future if they do land this thing right.

State of the WP in bullet points

If you haven’t been following WordPress, there are effectively two communities in one right now when it comes to mindset about WP’s future direction (it’s well outlined in the WP 5.0 post).

The biggest criticisms of Gutenberg are (at a high level):

  • You’re making a major change with no real input
  • I just want my shortcodes and other conventions I’ve always had
  • I don’t want to learn JavaScript just to work
  • Please don’t do this

These concerns are situated against the reality of the web:

  • If PHP based projects are to survive they’ve gotta go all-in on a JS framework (most likely)
  • jQuery isn’t going to cut it
  • Shortcodes aren’t going to cut it
  • JavaScript is where everyone is, building amazing experiences on the front end
  • A huge group of people saying “Please do this, it’s overdue”

Another way out

There’s always another way out, and it usually comes in manners we least expect. How can we possibly compete with the AX patterns expressed in Gutenberg? We can’t, right? It’s incredible. I mean even in its unfinished state, it’s in React and people love React. How could we possibly top what they are working on? Here’s where the WP approach is vulnerable:

  • Once you adopt it, you will never be able to get out of it. Its markup is all made-up tags and comments and DEEPLY tied to WordPress.
  • To contribute, get ready to ramp up on React and all the tooling chain that comes with it
  • There is enough strife in said community (even just from a distance I can see it) to push people to look at options for migrating away from WordPress. Great change (as we saw from D8) can cause great advancement, but when people review options, they really review all options
  • The body field is king and their world will devolve into body field only, even more as they seemingly position themselves to get out of the PHP business *cough*

Ok, so how can we turn these against them? My team reviewed what they are doing at a high level and asked: how can we avoid these mistakes? Don’t get me wrong, I love Drupal; but I’m going to make the best decisions for my team and our projects when it comes to selecting it. So here are the intentional decisions we’re making with HAX the Web in order to beat them at their own game:

  • HAX is built on web components, and while we use Polymer to build ours, we’ve integrated “Vanilla” JS elements. Think of Polymer like jQuery or Underscore as a helper library, not an entire reframing of the way you build
  • We also write to the body field, but we’re not using any Drupal-specific conventions, which means our content could live anywhere and work anywhere provided the tag definitions are available.
  • We currently have integrations with Drupal 6, Drupal 7, BackdropCMS and GravCMS, meaning that you can unify the AX patterns between Drupal and anything
  • Our solution’s integration methodology is trivial, and writing new components to work with HAX is simple. Extend the web visually, then fire an event informing HAX (if it exists) about how to modify yourself. This means our elements will work on any web site, and if HAX happens to be there, they’ll work with it.
  • Our developer experience is second to none and incredibly simple. We have students with a few weeks of experience and junior developers making meaningful contributions to our design assets and HAX in weeks, not months. We have people who have never coded before making more impressive elements than many of the themes I’ve seen in Drupal for years.
  • Drupal will learn the design, Drupal will not be in charge of the design.
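The event-driven integration described above can be sketched roughly like this; the tag name, event name, and schema fields are my own illustrative assumptions, not the exact HAX wiring:

```javascript
// A minimal sketch of "fire an event informing HAX about how to modify
// yourself." The tag name, event name, and schema shape here are
// illustrative assumptions, not the exact HAX API.
function haxSchema() {
  return {
    gizmo: { title: 'Team Card', icon: 'account-circle' },
    settings: [
      { property: 'name', title: 'Name', inputMethod: 'textfield' },
    ],
  };
}

// Fall back to a plain class so this sketch can be exercised outside a browser.
const BaseElement = globalThis.HTMLElement || class {};

class TeamCard extends BaseElement {
  connectedCallback() {
    // If HAX is on the page, tell it how this tag can be edited.
    this.dispatchEvent(new CustomEvent('hax-register-element', {
      bubbles: true,
      detail: { tag: 'team-card', schema: haxSchema() },
    }));
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('team-card', TeamCard);
}
```

The key point of the pattern: if HAX isn’t listening, the event falls on deaf ears and the element still just works as a normal tag.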

An authoring experience where Drupal just happens to have really great integration, but that anyone can plug into and extend. Unite the tribes, join us. We’re already having discussions with Joomla, LMS, and other vendors we never would have been able to previously, because we’ve completely decoupled design from Drupal and then taught Drupal about our design.

Please. Join us in these efforts. Challenge us on them. See what it is and how we’re trying to accomplish it. Because if we don’t, if we keep going with the “Drupal-isms on Drupal-isms” methodology of layout and panel-esque development, we will lose long term. The door hasn’t slammed yet and no, we’re not the only path to truth; but Drupal has a real opportunity right now to start down a truly innovative path and say that we are here for one purpose: to build the authoring experience for the web.

Jan 04 2018

Over on the ELMS:LN team, we’ve been busy every place not Drupal (huh what?) lately. We have barely been touching Drupal and couldn’t be happier (…huh?). In fact, we’re busy these days encouraging people to not focus on Drupal, at all (so this is on planet then why…).

Drupal is fantastic at what it’s great at. It’s great at being a CMF; the crazy abstraction of a CMS that lets us build anything. It’s great for handling complex workflows and raw webservice / integration capabilities (there’s a but coming…).


Drupal is terrible from a theming and new user on-boarding perspective. In these areas it’s incredibly clunky and while Drupal 8 has made great strides, it’s still not going to save us all from the endless array of lightweight, fun to develop for, NodeJS and SailsJS backends seeking to give Drupal death by a thousand microservices.

What I’ve found from hunting around other CMSs, though, is that this isn’t a Drupal problem; it’s a workflow problem that plagues the larger web. GravCMS is reasonable to theme for, but it still sucks compared to just making something look great on the front end. WordPress is the same (security methodology, or lack thereof, aside). This isn’t a uniquely Drupal problem, and it’s not to demean people like @MortenDK and the other core contributors who have done a great job getting Drupal to where it is in these areas.

So Headless then?

Headless isn’t a rallying cry against the great work they’ve done; it’s a response to the fact that no matter how hard we try, we’re putting a round peg into a square hole. Headless allows our front-end teams to focus on the front end and interface with the systems team conversationally as “I need a JSON array that looks like {} here” and get it. It keeps needless markup out of the payload (and I know div-itis has been heavily killed in core). Headless isn’t everything though; there’s still a HUGE advantage to hybrid methodologies where Drupal is delivering front-end style assets in a well-structured and very intentional way. This helps get around issues of rebuilding Drupal, but on the front end, when it comes to forms, admin areas, user management, and other sticky areas. If we can at least make these things look great, then who cares if they come from front end or back end so long as they leverage the same templates / rendered elements.

HAX and the way forward

HAX is short for Headless Authoring eXperience. HAX means that our authoring experience is completely disconnected from Drupal or any other CMS, and instead provides a simple way of integrating a uniform UX pattern into a backend. HAX is an idea that other members of the Penn State community and I have kicked around for years and built prototypes against, and now, in the last 2 months, we finally have something that looks and functions in the direction we’ve been seeking. I think this is another piece of the adoption puzzle in Drupal 8 if we want to stop seeing sites like the White House flip from Drupal to WordPress and instead start seeing a mass exodus from WordPress / others to the most powerful platform.

In order to do this, it can’t just be a powerful platform. It has to be the most approachable and able to quickly make high-fidelity experiences for content authors. It has to be drop-dead simple to set up and gorgeous to use (yeah, super easy,… I know…). The authoring experience is what we’re working on: trying to make something highly pluggable on the front end to match the LEGO-esque pattern that Drupal has on the backend. We’re teaching Drupal to understand web components so that it can easily integrate with systems like HAX (and we’ve done so).

So in other words, I think the best way to increase Drupal 8 adoption is by making Drupal the most enjoyable platform for front-end development, bar none. I think one of the most important ways to do this is to get people off their addiction to tpl file theming and into web component technology natively. Leaving the island and landing on Polymer has been the single biggest enhancement to our own platforms in the history of the project. Teaching ELMS:LN (and Drupal by proxy) to understand web components has been completely game changing for our developer experience, as well as the student experience in what kinds of solutions we can build.
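To make the tpl-vs-web-component contrast concrete, here’s the kind of tiny, framework-free design element we mean; the tag name and markup are invented for illustration:

```javascript
// A made-up "design element" of the sort that replaces a tpl file: the
// design lives in the tag, and any backend (Drupal or not) just prints it.
const BaseElement = globalThis.HTMLElement || class {};

class LessonBanner extends BaseElement {
  static get observedAttributes() { return ['title']; }
  connectedCallback() { this.render(); }
  attributeChangedCallback() { this.render(); }
  render() {
    // Re-render whenever the title attribute changes.
    this.innerHTML = '<h2>' + (this.getAttribute('title') || '') + '</h2>';
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('lesson-banner', LessonBanner);
}
// A theme layer then only needs to emit:
// <lesson-banner title="Welcome"></lesson-banner>
```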

I’m so sure that this is the solution that I’m now spending my free time integrating other platforms with web components so that they can be integrated with HAX. While others were celebrating, I stayed up into the wee hours writing webcomponents and HAX modules for Backdrop, GravCMS and Drupal 6 (yes, Drupal 6). Included with this post are videos showing what HAX is and how to get it and web components integrated into all of these platforms.

Drupal 8

The Drupal 8 ports of both are in the queue, and I’m happy to help anyone take them on as a co-maintainer as I can only stretch so far. But I think web components are the answer to increasing Drupal 8 adoption long term. By getting off the addiction to TPL files, we can start progressively collapsing the module requirements of our existing Drupal 6 and 7 sites. When we reduce the requirements on the module side, we decrease the complexity of our sites and open up pathways to migration currently unattainable. People aren’t on Drupal 7 still because it’s more fun; they are there out of the realities of life :).

So if we enhance these platforms and get people developing in webcomponents to improve workflow, we’re getting them out of their Drupal 6/7 shell and into something that’s non-Drupal and WAAaaaaayyyyy more fun to work with as far as front-end is concerned. At the same time, we’re decreasing the complexity of their Drupal sites on the backend via these hybrid and headless sites which then can more easily transition to Drupal 8. Drupal 8’s “problem” then becomes understanding webcomponents and how to integrate with them and our front-end people won’t have to change their workflow. We’ll create a pipeline for awesome development experiences on the front-end via webcomponents and awesome content authoring experience via HAX (which is just leveraging webcomponents).

Jun 02 2017

As Twitter, other social networks, conference submissions and Drupal Planet posts would suggest, I’m a bit over the moon about Polymer for Web Components. I’ve avoided front-end development workflows for years because they made no sense to me. Lots of custom baling wire that would make a slick (but completely un-reusable) one-page app. Nay said I! I will never do front-end development!

Then Polymer happened

Michael Potter (@hey__mp) finally wore me down that we should be looking into component architecture. I saw Pattern Lab and went “neat, that makes sense,” and then realized it was more than just a design paradigm and went “ugh, front end, this makes no sense.” He suggested we look at Polymer because “the next YouTube will be done in it.” So I did. And now, 4 months later, here I am writing about it non-stop. It completely changed the way I approach development with Drupal, and I think it is a key to sustaining and growing the Drupal community.

Drupal’s ability to be a headless, hybrid and template-driven system all at the same time makes it incredibly attractive for progressively decoupling and eventually moving towards entirely web-component-driven development practices. I’ve identified what I feel are five phases of web component integration:

  1. Design elements in templates
  2. Template-less elements
  3. “Smart” / one-page apps (where we are today)
  4. Headless, multi-page apps
  5. Web components driving information architecture

Phase 1 as I’ve covered before is to just start making some simple design elements and wiring them into tpl / twig files. This reduces the complexity of Drupal theming in general and increases accessibility and reusability of elements, reducing the time to develop down the road. It also allows you to start “theming” across platform at a design layer since you’ve unplugged design from Drupal. This still requires all traditional roles in your web shop in order to deliver a site.

Phase 2 is to start using the webcomponents module we wrote to start replacing template files entirely through views, display modes, and display suite style theming. This reduces the need for front end developers to understand or care about Drupal, which increases our potential target audience for the platform. We can start to hire pure front-end developers and then have pure site builders integrate those design elements with little effort, leaving the big lifts for developers to integrate.

Phase 3 is disconnected, one-page app development. These apps are prototyped by themselves and then wired up to Drupal after the fact. This allows for front end developers to replace the need for site builders by envisioning their development as single “screens”. It boils Developer integrations down to a single auto-loaded file (which the 3rd video below demonstrates). This is where we are currently in the phases of progressively decoupling via Web components.

Phase 4 is effectively a routing system for stitching together the one-page apps. Because one-page apps are just single element tags, it’s easy to stack those together into increasingly complex systems which click and autoload the next page without refreshing. This allows for smoother interactions and reduced server transactions (especially if you’ve worked in service workers, which I still can’t figure out, but it’s gotta be something stupid I’m missing). It also drastically improves the feel of your application / web site. This starts making site building about information architecture only, as more and more of the UIs are driven by single tag / click-and-tweak route methods.
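The Phase 4 idea can be sketched as a tiny route table (the app tag names are invented): each route resolves to a one-page-app tag, and navigation just swaps one tag for another instead of triggering a full page load.

```javascript
// Hypothetical Phase 4 router: each route resolves to a single one-page-app
// tag, which gets stamped into an outlet element instead of a page refresh.
const routes = {
  '/courses': 'course-list-app',
  '/dashboard': 'dashboard-app',
};

function elementForRoute(path) {
  // Unknown paths fall back to a catch-all app tag.
  return routes[path] || 'not-found-app';
}

function navigate(path, outlet) {
  const tag = elementForRoute(path);
  // Swapping a single tag is the whole "page transition."
  outlet.innerHTML = '<' + tag + '></' + tag + '>';
}
```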

Phase 5 is a bit out there (but not too far), which would have things like polymer cli’s analyze function actually write to an information-architecture.json-style file which could be ingested to start building out entities and bundles to match the potential of the front end. Effectively, design drives the entire process. The one-page app doesn’t care if a month from now you swap out Drupal for something else, though Drupal’s integrations and automation will make it more attractive to use than the competition.

Showing the Web component development process via Polymer

In our own development with the platform and webcomponents module, we’ve quickly moved from Phase 1 (February) to demonstrating the capabilities of Phase 2 (March - April at DrupalCon) and now to auto-loading smart, one-page apps in Phase 3 (as of last week). Because of this, I feel that I’ve got to start unpacking some of the magic at play here and show you what our build process for new apps and elements is. I’ve put together four videos which go through different phases of planning and creating a new one-page app which just displays comments from Drupal, ending with a block-based example for a simple drag-and-drop upload widget.

Video 1 starts from right after installing polymer cli (which I show you where to get). I discuss how to go about getting elements, dig into some elements to show their capabilities and start to make a very vanilla, hard coded element to display a card with buttons.

Video 2 starts to discuss properties of custom elements and how to do two-way and one-way data binding between the properties of the element and other elements, allowing you to make some slightly smarter elements. I then abstract the work of the first video and reference it in a new element, showing the process of stacking elements. Then I cover the (amazing) iron-ajax tag to wire an element up to a JSON data source, and Polymer’s “template stamper,” which can do conditional statements and for-each loops in your element.
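For a rough sense of what those property definitions look like in Polymer 1.x-era syntax (the element name and fields here are invented; iron-ajax is the real element mentioned above):

```javascript
// Sketch of a Polymer 1.x-era element definition with properties of the
// kind the video covers. The element name and fields are invented.
const commentCardSpec = {
  is: 'comment-card',
  properties: {
    title: { type: String, value: '' },
    // An iron-ajax tag in the template would fill this from a JSON endpoint.
    comments: { type: Array, value: function () { return []; } },
  },
  // Helper usable from the template stamper, e.g. [[_countLabel(comments)]]
  _countLabel: function (comments) {
    return comments.length + ' comment' + (comments.length === 1 ? '' : 's');
  },
};

if (typeof Polymer !== 'undefined') {
  Polymer(commentCardSpec); // registers <comment-card> in a browser context
}
```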

Video 3 shows how we then get this disconnected app registering in Drupal and being fed data to display from Drupal using the new auto-load capability of the webcomponents module. I modify the manifest.json file, place the app in the correct part of the directory structure, and then have a one-page app that’s in the menu system, acts like the rest of Drupal, and securely gets data from Drupal, but has had almost zero “Drupal”-style code written.

Video 4 quickly shows another one-page app which is using the manifest.json ‘block’ function. This allows you to make a one-page app but then present it in Drupal as a block element to place wherever you want. The block element in question is an incredibly slick vaadin-upload element which allows for drag-and-drop uploads to the Drupal file entity system.

May 24 2017

On the ELMS:LN team, we’ve been working a lot with polymer and webcomponent based development this year. It’s our new workflow for all front-end development and we want Drupal to be the best platform for this type of development. At first, we made little elements and they were good. We stacked them together, and started integrating them into our user interfaces and polyfills made life happy.

But then, we started doing data integrations. We wanted more than just static, pretty elements; we wanted data-driven applications that are (basically) headless, or hybrid solutions that embed components into Drupal to streamline and decouple their development without being fully headless. I started writing a few modules which seemed to have a TON of code that was the same between them. So, a few refactors and a lot of whiteboarding later, we’ve now got the ability to autoload Polymer-based one-page apps by just enabling modules and dropping apps in discoverable directories!

This approach keeps the design team as disconnected as possible from Drupal while still being able to interface with it in a modular fashion. Think of being able to roll out a new dashboard module with all of its display contained in a single element; now you’ve got one-page-app development workflows. Your design / front-end team doesn’t need to know Drupal (let’s say this again in bigger text):


No, not by knowing twig, but by learning a technology closer to the browser (that works anywhere) in Web Components!

Check out the Readme.md file for more details about this workflow as text or watch this video!

Apr 05 2017

*Text from youtube description of below video

I love this community. If you do too, throw up a video introducing yourself to the Drupal community. Maybe say why you are involved and why you care so much about it. At the end of my video I challenge others to donate to causes that make the world a better place, open source or otherwise. The organization I am donating to today is http://ourrescue.org/

Feb 21 2017

I’ve dreamed of a day when systems start to work like the home automation and listening (NSA spying…) devices that people are inviting into their homes. “Robots” that listen for trigger words and act on commands are very exciting. What’s most interesting to me in trying to build such systems is… they really aren’t that hard anymore. Why?

Well, the semantic web is what’s delivering the things for Siri, Google and Alexa to say on the other end. When you ask about something and it checks Wikipedia, THAT IS AMAZING… but not really that difficult. The human voice is being continuously mapped and its recognition accuracy improved daily as a result of people using things like Google Voice for years (where you basically give them your voice as data pieces in order to improve their speech engines).

So I said, well, I’d like to play with these things. I’ve written about VoiceCommander in the past but it was mostly proof of concept. Today I’d like to announce the release of VoiceCommander 2.0 with built-in support to do “Ok Google”-style Wikipedia voice querying!

To do this, you’ll need a few things:

Enable the voicecommander_whatis module, tweak the VoiceCommander settings to your liking, and then you’ll be able to build things like in this demo. The first video is a quick one minute of a voice-based navigational system (this is how we do it in ELMSLN). The second is me talking through what’s involved and what’s actually happening, as well as A/B comparing different library configuration settings and how they relate to accuracy downstream.

Oct 14 2016

A few months ago I did a post showing ELMS:LN’s initial work on the A11y module, which provides blocks for improving as well as simulating accessibility conditions. After lots of testing, I’m marking a full release of the project we’ve been using in ELMS:LN for months (as dev).

This latest release includes many bug fixes, and the most notable new features are:

  • Simulating field loss conditions, including central and peripheral field loss, applied via CSS/JS in real time
  • Colorblindness simulation for many different forms of colorblindness, applied via SVG filters in real time

Aug 07 2016

DrupalcampPA is July 30 and 31 in Pittsburgh, PA, and yinz all should come! We just announced our full schedule with keynotes, giveaways, and more. Some quick highlights of why you should come! Submissions this year tried to take the stance of “getting off the island” by having more and more presentations about topics that plug into or are affiliated with Drupal without just being about Drupal. As a result we’ve got talks ranging from local environment building, Islandora (a Drupal-based library system), Drupal as an iPhone app backend, and Angular / CDN leveraging, to unit testing, FAITH automated testing, team building, event organizing, CKEditor plugin development, building VR interfaces, and voice / keyboard driven interfaces.

Day 2 also has a lot of open time dedicated to sprints, help swaps, and BoFs, with a workshop for those needing more hands-on, “these are the building blocks of site building”-style sessions. Some schedule highlights:


  • Mathew Radcliff (and anyone else that wants to) will be organizing Drupal 8 / contrib sprints.
  • The ELMS: Learning Network core team will be holding a global sprint all day Sunday, running late into the night and Monday with people remoting into slack, github and appear.in channels from State College, California, Canada, and the UK.

Raffle prizes

This year we’re expanding our “door prize” raffle to include some sweet tech. We’ll be raffling off a Raspberry Pi starter kit, Asus Google Chromebit, hockey jersey, and hoodies!


  • Megan Sanicki the Executive Director of the Drupal Association is Skyping in to kick off the event!
  • Scott Reeves Drupal 8 theme core contributor will be keynoting day 1 to talk about the journey from outside the community to doing core contributions!
  • Bryan Ollendyke (me) will be moderating another fun open panel discussion with voices of the community called 2020, The Panel:
    • Kirsten Burgard - Drupal Govcon organizer; Super voter never missed an election.
    • Katrina Wehr - Instructional Designer @ Penn State University. Former teacher, current #edtech smart design & pedagogy first enthusiast.
    • John P. Weiksnar - Drupal(tm) futurist * Active member of WNYDUG, Drupal Association, Society of Motion Picture and Television Engineers(r)
    • Fatima Sarah Khalid - I bring my own sunshine #CivicHacker & budding #Drupalista | coding for the people @TheCityofBoston | Fmr @MicrosoftNY @NYUengr

Other stuff to do in Pittsburgh

We’ll have activities planned for Saturday night, but beyond just DCPA, Pittsburgh is a really nice city to visit. We’ve got a great zoo, inclines with great views of the city, boat tours, stadiums, shopping, great places to eat (especially in the Strip District), and lots and lots of colleges and universities in the area, with lots of parks and places to walk not far from downtown (or downtown at Point State Park, among others). If you’re looking to spend extra time here or need recommendations on where to go (like my favorite coffee / co-working house north of the city), just ask!

We look forward to seeing everyone in three short weeks for a great day of community, code and learning!

Jul 11 2016

I know Dries caused a lot of smiles when he used Amazon Echo and Drupal 8 to be notified about “Awesome Sauce” going on sale. The future is bright and requires increasingly more ways of engaging with technology. But what if you wanted to start to have a conversation without an Echo to do it? What if we wanted to progressively enhance the browser experience to include lessons learned from conversational input technologies?

If you haven’t heard of it before, welcome annyang and the Web Speech API, native in many browsers (in Chrome / Opera it’s enabled by default). annyang is a simple wrapper library for the Web Speech API. It lets you use simple phrases as the trigger for a JavaScript callback to fire. Using well-formed phrases, you can effectively wire voice-based commands into anything on the web.
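The wiring annyang provides really is that small; addCommands and start are its real API, while the phrases and the jumpTo() helper below are made-up examples:

```javascript
// annyang maps spoken phrases to callbacks; ":section" is a named
// placeholder filled in from whatever the user says.
function jumpTo(section) {
  // Hypothetical helper: turn a spoken section name into a path.
  return '/section/' + section.toLowerCase().replace(/\s+/g, '-');
}

const commands = {
  'go to :section': function (section) { window.location.pathname = jumpTo(section); },
  'scroll down': function () { window.scrollBy(0, 400); },
};

// Only wire in if the annyang library is actually loaded on the page.
if (typeof annyang !== 'undefined') {
  annyang.addCommands(commands);
  annyang.start(); // the browser prompts for microphone access here
}
```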

While looking for something that implements annyang, I stumbled on the Voice Commander module. It didn’t do everything I wanted, but it’s a great starting point since it integrates the library and processes menus. What I show in the video below is a fork of Voice Commander which is core in ELMSLN, allowing you to start speaking lots of options. I show this in the context of one of our production systems where I’ve done the authorization clicks. Browsers will ask if your microphone can be turned on (so no, this isn’t NSA stuff).

Right now we have support for going to systems, scrolling, going into edit mode, next / previous in a book, browser history forward / backward, and common menu items. In the future I’m looking to add a more conversational engine using the other side of Web Speech, which allows browsers to talk to you in their native “voice.” A proof of concept of that, called ELMSLN AI, is in the links section. Enjoy talking!

Jul 11 2016

I got a question in from Twitter asking if we had a video showing what we were doing with Drupal, xAPI and H5P. I said sure! And I hurried off across YouTube and my many blogs to find it. Just… gotta… find… that… post… I mean, I know I did it, I HAVE TO HAVE DONE IT, IT’S SO DAMN COOL.

An hour later. I have realized that I have never done a video about this. #facepalm

So, here’s this post then which shows and talks through the following:

  • What is an LRS - Learning Record Store (We use Learning Locker)
  • What is H5P - An HTML5 interactive widget creator that plugs into Drupal easily
  • What is xAPI - An experience API statement that tracks the action taken by a user

How is ELMS:LN using this? We’ve got native xAPI support thanks to H5P and the Tincan / xAPI Drupal modules. These allow you to start doing tracking within Drupal sites, effectively helping turn any Drupal site into a mini-LMS. This doesn’t mean it does everything for you. But considering the Quiz module has native support for emitting xAPI statements, and there are open source LRSs like Learning Locker built in PHP on a well-documented communications specification, the reality of every system being an LMS is not that far off.
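For a sense of what’s actually flowing into the LRS: an xAPI statement is a small actor / verb / object JSON document. The shape below follows the spec, while the email, verb, and activity ID are made-up examples:

```javascript
// Build a minimal xAPI statement: who (actor), did what (verb), to what
// (object). The email, verb, and activity ID here are illustrative.
function buildStatement(email, verb, activityId) {
  return {
    actor: { objectType: 'Agent', mbox: 'mailto:' + email },
    verb: {
      id: 'http://adlnet.gov/expapi/verbs/' + verb,
      display: { 'en-US': verb },
    },
    object: { objectType: 'Activity', id: activityId },
  };
}

const stmt = buildStatement(
  'student@example.com', 'completed', 'https://example.com/quiz/1'
);
// An LRS such as Learning Locker accepts statements like this over its
// xAPI HTTP endpoint and stores them for later reporting.
```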

Obviously I’d ask you to explore this through ELMS:LN but we’re a drop in the infinite Drupal ocean, so let’s all learn and collect statements together shall we? :)

Check out the links and video below of it all in action!

Jun 02 2016

This is a recording of a presentation I gave at Drupal Gov Day called Purespeed. There are many posts on this site tagged with purespeed if you want to dig into deeper detail about anything mentioned in this talk. This talk consolidates a lot of lessons learned in optimizing Drupal to power ELMS Learning Network. I talk through Apache, PHP, MySQL, front end, back end, module selection, cache bin management, and other optimizations in this comprehensive full-stack tuning talk. The focus is Drupal 7, but the majority of the talk applies to Drupal 8 and beyond, and I mention some D8 modules to investigate as well.


May 09 2016

First off, hope you are all enjoying Drupalcon, super jealous.

It's been almost three months since I wrote about creating secure, low-level bootstraps in D7, the gist of which is skipping index.php when you make webservice calls so that Drupal doesn't have to bootstrap as high. Now that I've been playing with this, we've been starting to work on a series of simplified calls that can propagate data across the network in different ways.

For example, if data that lives in one system but is cached in another is updated, I can tell all the other systems of that type to have their cache wiped. ELMS:LN is made up of a series of Services and Authority systems, and they form different patterns. The most common use-case, though, and the easiest to grasp, is updating a course name.

Let's say we have a course named 'Stuff' and we want to make it  now called 'Stuff 100'. Well, that simple node with title 'Stuff', actually lives in anywhere from 10 to 15 systems depending on the structure of the network it's related to. Similarly, we don't know where the data is being updated (potenitally) and we don't want to force people to only be allowed to update it in one place.


We could spider the call (and originally did). This assumes figuring out the 10 or so places you want to send data and then sending it to all of them at once (or one then the next, etc.). This invokes (N+1) load though, as we use non-blocking httprl-based calls, meaning that 10 calls will open 10 simultaneous Apache / php-fpm threads. That's a lot of network traffic when starting to scale up, invoking self-imposed spikes. We still need this call to go everywhere though, so we need to borrow a concept from Twitter.

Spider call


When you send a tweet, it doesn't go to one massive database which everyone then reads from; it goes to localized databases, and every X seconds messages are passed to neighbors in the Twitter network. You can try this out yourself: VPN to another part of the world, send a tweet, and see how long it takes to show up in the timeline of someone sitting next to you versus doing it on the same network.

Snaking the same call

For this call, we still send non-blocking and we still send to everyone in the network, but we use distributed recursion to accomplish it. Basically, site A figures out "I have data that needs to go to these 9 systems." It then sends the message, along with the list of the next 8 systems, over to system B. B processes the request and, just before finishing the connection, asks "wait, do I have others to pass this on to?" and sees the 8 systems in the list. It contacts the first one next, passing along the stack of 7 systems that still need the call. This process repeats recursively until there are no more calls (at which point I'm thinking of adding a "bite tail" function to call the originator and say it finished).

So that's cool but why do we care?

We care because this doesn't invoke N+1 load even though it still replicates the message across N+1 systems! The snake methodology of data propagation will keep at most 2 Apache / php-fpm threads open at any one time.

System A calls B (non-blocking call; A hangs up and delivers the end user the page)

System B calls C (non-blocking call; B hangs up, you have no idea this is happening)

System C calls D (non-blocking call; C hangs up, recursing until there is no more stack to call)

Snake will always invoke at most 2 execution threads instead of N+1, which is a huge performance and scale gain. The difference to the end user is marginal since everything is non-blocking either way. Just make sure that you correctly block recursive kick-offs on `hook_node_update` or you'll get something funny (and crippling) like this issue write-up.
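The hand-off shape can be sketched in a few lines of shell. Everything here is illustrative (the hostnames, and the echo standing in for the real non-blocking HTTP call); it is not ELMSLN's actual code:

```shell
# Toy model of the snake: pop the head of the stack, "call" it, and let the
# recursion stand in for the remote system continuing the chain.
snake_call() {
  [ "$#" -eq 0 ] && return 0      # empty stack: the snake has finished
  next="$1"; shift                # pop the next system off the stack
  echo "calling $next with $# systems left in the stack"
  snake_call "$@"                 # in ELMSLN the *remote* system does this step
}
snake_call b.example.com c.example.com d.example.com
```

The key difference in the real implementation is that each recursive step happens on a different server, so no single box ever holds more than one in-flight call.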

Code example for the function testing if it should ship off to the next member of the network:

// look for snake calls which we can perform on anything stupid enough
// to attempt it. This can destroy things if not done carefully
if (isset($_elmsln['args']['__snake_stack']) && !empty($_elmsln['args']['__snake_stack'])) {
  // tee up the next request as well as removing this call
  // from our queue of calls to ship off
  while (!isset($path)) {
    $bucket = array_pop($_elmsln['args']['__snake_stack']);
    $systype = _cis_connector_system_type($bucket);
    switch ($systype) {
      case 'service':
        $path = '/' . $_elmsln['args']['__course_context'] . '/';
        break;

      // support for skipping things outside authority / service scope
      case 'authority':
        $path = '/';
        break;
    }
  }
  // need to queue module / callback back up for the re-signing of the request
  $_elmsln['args']['elmsln_module'] = $_elmsln['module'];
  $_elmsln['args']['elmsln_callback'] = $_elmsln['callback'];
  $version = str_replace('v', '', $_elmsln['args']['q']);
  // issue call now against the next item in the stack
  $request = array(
    'method' => strtoupper($method),
    'api' => $version,
    'bucket' => $bucket,
    'path' => $path,
    'data' => $_elmsln['args'],
  );
  // request the next item while indicating that recursive calls are allowed
  // so that it gets passed down the snake
  _elmsln_api_request($request, TRUE, TRUE);
}

Here's a screenshot of the code diff that it now takes for us to switch from traditional non-blocking spidered calls to non-blocking snake calls. 8 lines of code for me to switch from Spider to Snake.

Mar 24 2016
Mar 24

Accessibility is a big deal! Good, now that that’s in the teaser for this let’s dig in.

ACCESSIBILITY IS A BIG DEAL - But not everyone knows how to help or what to do

The biggest problem is raising awareness of the accessibility issues users of your site face, how they NEED the site structured, and what to do to meet them there. As a result, the a11y project exists to help raise awareness as well as provide techniques and checklists for those building things for the web.

This is where the Accessibility Toolkit (a11y) module hopes to help by providing some additional tools to Drupal for improving the accessibility of your websites. The a11y module currently provides a block which allows users to:

  • Change your site’s fonts to one optimized for those with dyslexia
  • Change the contrast of your site
  • Change the size of your site’s interface
  • Invert the colors: use a darker interface with white-on-black text and yellow links

The goal of the module is to provide tools that can be applied to any Drupal site to improve options for your users, going beyond aria landmarks and correctly ordered headings. A11y is about far more than “just” blind users; it’s about improving access for all.

This video shows what the module provides in the context of ELMSLN. Backend support so that there isn’t a FOUC is planned but not yet implemented.

Mar 21 2016
Mar 21

I had a question the other day on Twitter as to whether it’s possible to run drush site aliases in parallel. Apparently I was asking the question the wrong way, as @greg_1_anderson (Drush co-maintainer) told me that Drush supports --concurrency and apparently has for several years. In fact, it’s supported it so long and it’s so underused that they might remove it from Drush as a result :).

So here’s my post doing some testing of running drush with concurrency to illustrate how and why you might want to use it. Criteria for using this:

  • You have multiple site aliases you want to run the same command against
  • This could be a multi-site or running against multiple remotes

Multisite is probably the best use-case for improving performance, because for multiple remotes you might just say “use Ansible” or some other provisioner to hit things in parallel. I live-blogged via this issue in the ELMSLN issue queue while playing with this, but I’m consolidating the findings here.

The server specs that these tests were run on: a 2-CPU, 6 GB RHEL 6 based server. To test, I ran drush @courses-all rr -y, which in our setup runs registry rebuild against 58 sites in a multi-site configuration. RR is provided by the registry rebuild drush plugin. While drush defaults to a concurrency of 1, I set it explicitly just to make sure.

time drush @courses-all rr --concurrency=1 --strict=0 -v -y

Here’s what is produced if we did that command:

real 13m42.743s

Registry Rebuild is an aggressive command, so we kind of expect to see something like this. We’re at almost 14 minutes here. This is what kicked off my research: how to optimize this command.

I started with a concurrency value of 10, which proved to be WAY too high. It maxed out my CPU, but I got MUCH better results:

real 5m11.379s

It brought things down to 5 minutes, and 5 minutes of maxed CPU vs 14 minutes of much less is a good trade. From there, I started trailing down from 10 to find the threshold. The visual below shows runs at 10, 1, 6 and 7 concurrency. As you can see, 10, 6 and 7 look very similar; we slam the CPU and max out. The times for 6 and 7 were around 4:51. A huge gain, but we’re still rocking 100% CPU, which isn’t a good thing.

visualizing CPU usage

So we kept moving down. This graph shows 5 vs 3 vs 2 for the CPU bumps.

5 vs 3 vs 2

Three and two are very promising because while we hit 100% CPU, it’s for a very short period of time, and at a concurrency of two we don’t hit 100, we just get close (90ish). The most impressive thing about running only 2 or 3 concurrent calls is that we finish the job substantially faster than one at a time (and not much slower than 10).

Concurrency 3 (C3)

real 4m56.193s

Concurrency 2 (C2)

real 5m36.804s


While C3 is 40 seconds faster than C2, it may be worth using just two threads at a time because the speedup isn’t big enough to justify the extra load. C2 is almost 60% faster than the baseline. As a result, I adjusted our upgrade script to take advantage of the concurrency value so that we now run two threads at a time. I still need to run this in non-vagrant environments, but applying it to more commands than just registry rebuild should provide an impressive gain.
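The bounded-concurrency idea itself is easy to demo with plain xargs (the site names below are made up, and drush manages its own processes internally rather than shelling out to xargs):

```shell
# At most 2 "rebuilds" run at once, mirroring what drush --concurrency=2 buys
# you: bounded parallelism instead of one long serial run.
printf '%s\n' site1 site2 site3 site4 |
  xargs -P 2 -I {} sh -c 'echo "registry rebuild: {}"'
```

With -P 2, two workers run at a time and a new one starts as soon as a slot frees up, which is exactly the CPU profile seen in the graphs above.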

Mar 08 2016
Mar 08

It’s been a few months since I first mentioned the Git Book module here on DPE. I haven’t done much with it since but was able to scrape together a rather epic sprint today. Coupled with improvements to ELMSLN in general, this thing is getting close to a pretty killer workflow for book creation. The scenario we’re striving for:

  • Read in a git repo (or create a new one) to provide a git repository per book in Drupal
  • Read in any structured markdown repository (starting with support for Read the Docs)
  • Allow people to work either in a CMS or on github and successfully push “code” changes back and forth
  • Allow for working remote (github) and syncing local (CMS) to further democratize publishing
  • Git Book is the first area; we’ll be making an Open Curriculum YAML / markdown based spec that ELMSLN will be able to read and automatically setup… everything

As you’ll see in the video (using Drush docs as an example) this now plays nice with Outline Designer and the rest of Drupal. It now also has packaged support for EpicEditor; a sweet and simple markdown editor that WYSIWYG API has built in support for (though we patch it for settings reflected in the screencast).

Mar 07 2016
Mar 07

In order to push education, we’ve needed to at times bend Drupal to our will.

ELMSLN holds Drupal 7 at its core foundation, with the idea that in order to innovate, we need to fragment functionality but not user experience. We can push the envelope in one area of education and design while still holding fast to the realities of today’s experiences in others. With ELMSLN, we’ve created an infrastructure for sustaining innovation while keeping your baseline of course experiences in check.

Because of this, we’ve got sites… boatloads and boatloads of sites. A course is not just one Drupal site; the bulk of its experience is potentially delivered by 4 or 5 sites, influenced by data from a 6th, and provided with high quality media from a 7th. This overhead comes at obvious costs (like complexity, though thankfully Drush / bash keep life simple) and replication of database tables / files, in exchange for gains in flexibility.

In order to correctly keep the experience in sync for students and instructors, so they never need to know what’s going on under the hood, we need data and experiences to be in sync. For experiences, that’s easy: we’ve built a solid, sub-themed UI that helps people quickly forget that it’s built in Drupal. For data, though, we needed something more. We’ve leaned on two capabilities that were easy to come by: cron jobs with keys, and RESTful web services via the RESTws module. Each comes with its own set of problems: cron is slow and destructive to caches and performance, while RESTws is targeted at modifying objects. The problem is, not everything is an object.

And so, we built an API that can contextually bootstrap Drupal based on the kind of command being run. I based this off a cool module I found called the JS module, which provides a drop-in alternate index.php that bootstraps just the database and the JS module itself. I didn’t like that it made too many assumptions about JavaScript, so I took the concept and ran with it.

This screencast details a low-bootstrap API that I wrote to allow ELMSLN to cheat on D7 bootstraps. We needed this because of the inherent structure of our system (lots of sites that need to keep very specific data in sync), but it’s an example that I think others could fork for their own purposes. A few API capabilities we currently support at lower bootstrap levels:

  • remote cache bin clearing - DB bootstrap only
  • remote vset - variables bootstrap only
  • remote theme settings sync - variables bootstrap only
  • sync roster - a complex backend process that uses a full bootstrap but without page execution
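Stripped of all the Drupal machinery, the dispatch idea is just a lookup from call type to the cheapest bootstrap level that can serve it. A sketch of that mapping follows; the call names are paraphrased from the list above and are not the real ELMSLN API routes:

```shell
# Map an API call type to the minimum Drupal bootstrap level it needs.
# Call names are illustrative; the actual ELMSLN callbacks differ.
bootstrap_level() {
  case "$1" in
    cache-clear)          echo "database" ;;
    vset|theme-settings)  echo "variables" ;;
    sync-roster)          echo "full, no page execution" ;;
    *)                    echo "full" ;;
  esac
}
bootstrap_level cache-clear   # prints: database
```

The win is that the cheap calls never pay for a full bootstrap, which is most of the cost of hitting index.php.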


Dec 08 2015
Dec 08

The ELMS Learning Network team is seeking to transform not only education, but also the concept of content and how you can interact with a CMS. By taking our network based approach to educational delivery to Drupal, and viewing Drupal as more of an engine connecting people rather than a product out of the box, we can craft innovative solutions (like this one).

We’ve created the ability for Drupal to take a git repo, ingest it, look for Read the Docs style outline structures, and import all the content. It does this via a new module called Git Book that implements the Git.php library.

In this video, I show how we’re able to ingest the Drush Ops docs. It also offers a sneak peek at the interface work we’re undertaking to unify our multi-distro approach to tackling big systems.

This isn’t just about pulling in content, it’s about user empowerment. Faculty that understand the power of markdown and version control will be able to craft their courses in whatever location they want to (like ELMSLN faculty @_mike_collins) and then share it over to ELMSLN for use in courses. This will encourage remixing and open up the possibility for PRs against content.

In the future, we’ll be making a fork of the ReadTheDocs spec that (should) be compatible with it so that we can have simple git based files that drive the setup of complex course networks, automatically creating drupal sites, networking them, producing content outlines, assignments, discussion topics, media, etc.

Nov 18 2015
Nov 18

I recently defended my Master of Science thesis with a central theme of open source, activism and Drupal. You can grab a copy here or in the article links. I started this research project in 2007; in part life got in the way, but also, I didn’t know how to tell the story.

It’s the story of our community. My love for our community, my love for the world we are all creating, together. One that’s more open, more connected, and more equal. I went several years without knowing what to even call what I was witnessing. I used to refer to this as Structured Anarchy.

Structured Anarchy was describing the structure of the community that organized around Drupal. There seemed to be chaos and disorder, yet somehow everyone was able to build these amazing systems with limited effort and resources. As I started to go through the interview data, though, I happened upon a different concept. I wanted to analyze where this community came from, because everyone talked about community but never identified who that entailed. What caused this vibrant community to emerge that keeps this chaos manageable?

I landed upon a phrase that seemed to resonate with others as well: Information Altruism. Information Altruism is the concept identified in my research that it isn’t Free Open Source Software that changes the nature of work, not in itself anyway. It’s the application of a community that utilizes that FOSS and more specifically applies the notions of free and time-banks to traditionally vendor and pay based ecosystems. Information Altruism provides a case study in how the intentional donation of efforts can alter the concept of effort and work within an organization.

The research showcases that a community can emerge out of the sustained donation of effort. Thank you to everyone in our local community for support over the years and to the Drupal community at large. Nothing I do would be possible without you.

This isn’t the end, it’s just the beginning of a new chapter as our community matures and gains momentum. Ex Uno Plures.

Nov 17 2015
Nov 17

This video talks through how XMLRPC Page Load and HTTPRL Spider can be used to warm caches on private / authenticated sites. XMLRPC Page Load provides a callback that tricks Drupal into thinking that it’s delivering a page to a certain user account. It does this by simulating page delivery but never actually writing the output anywhere; instead, it just returns whether it completed.

This allows you to hook it up to crontab via HTTPRL Spider and simulate a certain user account accessing every page on your site. This can be used to warm the caches on sites that are locked away behind authentication systems so that you can rebuild caches on whatever interval you want.

There’s also a (crazy) command in HTTPRL called huss (not shown) that will simulate every user hitting every page; effectively getting full cache coverage on something like an authcache enabled site.
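Wired to cron, a warming run might look something like the fragment below. Note that the drush command name and the @site alias are placeholders for illustration; check the HTTPRL project for the actual spider invocation in your setup:

```
# hypothetical crontab entry: warm the authenticated caches nightly at 3am
0 3 * * * drush @site httprl-spider > /dev/null 2>&1
```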

Oct 21 2015
Oct 21

This video shows the automation involved in creating a new tool in ELMSLN. A tool in ELMSLN = new install profile = new domain = new drupal distribution.

New Idea + New Distribution + New Domain = New Tool

We do this by running a bash script against the Ulmus sub-distro, which is shown in the video. This allows us to utilize a common base of modules for a “core above core” approach to distribution development. It also allows us to rapidly prototype new ideas in a vagrant environment, producing well-positioned applications for ELMSLN while potentially sitting in a classroom talking to a faculty member about their ideas and pedagogical needs.

Oct 20 2015
Oct 20

This is a quick video I shot showing how you can use the YouTube Uploader widget to streamline your workflows of interacting with Drupal and Youtube. I’m demonstrating this in the context of ELMS Learning Network as we’re looking at utilizing this module as part of our ELMSmedia distribution. It’s pretty impressive what the 7.x-2.x version is able to do and without further ado; enjoy. Props to my student Mark (@mmilutinovic13) for experimenting with this first.

Oct 08 2015
Oct 08

There are many dirty little secrets in Drupal 7 core’s API when it comes to inconsistencies and oversights. It’s a big part of why so much care is being placed in D8 and why it’s taking so long: people realize this is a platform that’s used for the long haul, and core decisions today will have lasting impacts a decade from now.

That said, I discovered one a year or so ago and kept putting it off, hoping it would go away on its own. Well, it hasn’t, and here comes a potential scenario that I detail in an ELMSLN issue queue thread I like to call Role-mageddon. While this doesn’t only affect distributions, install profiles, and features, you’re a lot more likely to run into the problem with them; and so here we go.

Example Scenario

Site 1 (Profile A)

  • Developer Adds a Feature X that adds 2 roles
  • Then creates Views, Rules, and blocks and associates roles to access / visibility
  • Then they create Feature Y with 1 role and do the same as before

Site 2 (Profile A + the additions above)

  • Developer Enables Feature Y
  • Developer Enables Feature X
  • All access / visibility criteria of Roles / functionality supplied in Y is flipped with X
  • Oh Sh….

So What happened?

Roles in Drupal are stored as id, name, weight. The id is generated by incrementing in the database, so anonymous is always rid 1 and authenticated is always rid 2. After that, it’s the wild west: whoever comes first gets the next id.

Well, if Roles 1 and 2 are created, then Role 3, they’ll get ids of 3, 4, 5.

If Role 3 is created first, then Roles 1 and 2, they’ll still get ids of 3, 4, 5, but all views, rules, blocks, anything associated to the rid identifier, is now associated with the wrong role!

Without this knowledge you could have, oh, I don’t know, made all your admin blocks visible to the ‘bosswhopays’ role on production and not understood why. This would also happen if you’re in dev and have a role that doesn’t move up to production that was created prior to the others that are about to. You move the features up, and none of the settings are kept.

So how do we avoid Role-mageddon?

Role Export adds a column called machine_name to the role table, and then uses the machine_name value to generate an md5 hash which is used to create the rid. So long as machine names are unique, it effectively guarantees that rids are unique and won’t collide with other roles that you import / migrate.

The import / export order no longer matters because machine names will always map to the same rids.
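The core idea, a stable id derived from a hash of the machine name, can be illustrated in shell. This mirrors the concept only; Role Export's actual hashing and truncation logic differ:

```shell
# Derive a deterministic numeric id from a role machine name so that creation
# order can never change the id. The 8-hex-char truncation is illustrative.
machine_name="course_instructor"
hash=$(printf '%s' "$machine_name" | md5sum | cut -c1-8)
rid=$(( 0x$hash ))
echo "$machine_name -> rid $rid"
```

Run it twice, or on two different servers, and you get the same rid; that determinism is what makes import / export order irrelevant.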

Great for the future, but what about my existing site?

Role Export has support for automatically remapping the updated rid so your users’ roles don’t get lost, as well as the admin role variable and the permissions associated with the role. That’s great; without those this would have been basically worthless for existing sites.

What my patch of infinite lack of sleep provides, is the same exact thing but for Views, Rules, Blocks, Masquerade settings (since that has security implications and is popular) as well as a hook that can be invoked to fix your other variables like IMCE, Piwik, and LTI.

Oct 08 2015
Oct 08

I was made aware that it’s been close to a year since I actually did a demo of the reason that I contribute so many modules to drupal.org. For those that don’t know, the reason I exist is a project called ELMS Learning Network. It is a Drupal 7 based deployment methodology that takes the many parts of an LMS and fragments them across a series of Drupal distributions.

We then take those distributions and using RestWS, single-sign-on, and a common module and design meta-distro, stitch the experience together. The goal is to create more engaging, customized learning ecosystems by envisioning each educational experience as a snow-flake. The project’s iconography of a snow-flake comes both from the networked nature of the different distributions that make up the system as well as that mindset that we need to treat courses as individual art forms that can be beautiful in their own unique way.

Anyway, below is a video showing everything about the project in its current state. If you have any questions about what modules make it up, they can all be found in the links below along with the repo.

Upcoming camp presentations about ELMSLN:

Oct 08 2015
Oct 08

I’m in the middle of several Drupal Camps / Cons (any event over 1000 people is no longer a “Camp,” but that’s for another time) and it’s occurred to me: I can no longer learn by going. Now, this is “learn” in the traditional sense of what I used to go to camps for (I’ve been coming to camps for 8 years now).

It’s not that I’m advanced beyond my younger self; it’s that this ecosystem is an endless rabbit hole of buzzwords, frameworks, CLIs, VMs, architectural components, issue queues, PRs, and other words that must make almost anyone new to the web immediately freak out and hide. And so, in coming to events, I feel I rarely get to learn about implementing BEM or SASS or Drupal Console or Composer or Dependency Injection, because 50 minutes or an hour for a talk isn’t close to enough to scratch the surface of these concepts.

What IS important though at Camp sessions:

  1. Community; hearing from peers and discovering together that none of us knows everything
  2. Learning new buzz words, which is critical because I don’t know what to Google!

For example, I assumed everyone already knew what Drupal Console was, only to find out that most people I talk to go “Duh-huh-waaaa?”. And so, to heap some more buzzwords on you all from the Camp circuit and why I’m so excited for them :)


Drush

Drush is the original command-line tool for Drupal (you’ll see why I frame it that way soon). It allows you to communicate with your Drupal site via the command line, issue commands, routinize boring tasks, and do complex scary tasks as single commands. Other modules can supply plugins for it, as can anything you’d like to just use as a plugin without a module.

Drupal Console

Drupal Console is a bit of the new kid on the block. It was originally met with resistance, as it’s another CLI for Drupal, but it has since started to find its place in the early D8 CLI communities that are developing. What’s cool about Drupal Console is that it’s finding a happy middle ground of coverage for things that Drush was weak at. What else is cool is that these communities are working together to reduce overlap in capability AND (more importantly) allow each to call the other natively. This means you’ll be able to start execution threads like `drush drupal-console ….` or `drupal drush en views`.

Console doesn’t have support for running off and grabbing Views. It’s much more about building out the things you need to work with your Drupal site without writing all the code yourself (for example). Think of it as a utility for the development side of the house. While Drush has plugins for things like module building, code review and entity scaffolding, they were never its strong suit. This is where Drupal Console is focusing its efforts.

Drupal Console also has Symfony-based plugins, since it’s built against Symfony Console. Drush, on the other hand, is your traditional Drupal architecture, supporting multiple versions of Drupal, etc.

Why you should care

Because if this PR / thread in Drush-ops moves forward, it would mean the two could call each other. This allows two different ways of developing for CLI devs: pulling in Symfony components, or traditional Drupal guts. It also gets a bigger community working on things at the low level so that stuff at the high level (site building, theming, etc.) is easier and more scriptable. With Drush you’ll (still) be able to build make files and get all the dependencies for your site, and with Console you’ll be able to write new modules and sub-themes a lot faster because of the templating / file trees it lets you build.

As we get towards a stable D8, Drush and Drupal Console, we’ll have so much raw horsepower under the hood that you’ll be able to automate all kinds of things. It also means these communities can tackle different sides of the under-the-hood problem space for getting Drupal going.

For example, I maintain Drush Recipes, which allows for tokenized drush chaining. This lets drush call itself, and via arguments you get a lot of stuff done without much effort. Jesus Olivas just made me aware that Command Chaining is being worked on for Drupal Console. That way you could string together commands in Console (allowing Console to call itself, basically) to get to something more functional without having to manually type the same kinds of commands over and over (he gives an example in the blog post).

The future Drupal Development environment / workflow

Here are a few projects that, if they merge efforts even a little bit in capabilities over the next year or so, will produce some insane automation, and working with Drupal will make anything else look painful by comparison (again, that’s looking ahead; right now… yowza!).

What this would give you workflow wise:

  • An Ansible / YML based provisioning of a server, netting all dependencies for D8, Console, Drush, etc
  • You could login and be presented w/ a prompting like Nittany Vagrant provides, asking what kind of site you want to build
  • With even minimal logic to the script (yes I’d like a Drupal site that’s for commerce for example), we could issue a drush recipe that…
  • Runs a make file, site-installs itself with the minimal profile, grabs dependencies that weren’t in the make file, sets variables, and imports default configuration from features to make the OOTB “distribution” a rock solid starting point for anyone to build off of.

Then we’d ask other questions, like “What’s the name of this client?”. Answering something like “bigbank” would allow…

  • Drush Recipes to tokenize the input of Drush to call Drupal Console
  • Console would then be told “Hey, we need to build new modules called bigbank_custom_paygateway, bigbank_helper, and bigbank_theme” and create all the components for the types of custom modules that we all use on every deployment
  • Then enable these modules in the new site

Eventually we can get into automatic sub-theme creation, asking what kind of theme you want to base off of (zurb, bootstrap, mothership, custom, etc.) and automating a lot of the setup in that area too. We could probably get to the point with theming where you can ask Drupal Console for the (default) template files you want so that it generates those too.

The future is going to get so insane (buzzword-wise) that we’ll need to keep investing in automation just to keep up. We’ll tell each other to just download XYZ and run through the workflow; there will be no more “hey, go get all these dependencies and…”. No, things will just be awesome!

Now, create a PuPHPet-style interface that builds the YML customizations (or maybe like this thing), asks the crappy bash questions, and tokenizes everything downstream… stuff that system in a simple Drupal site and hand it to a Project Manager to kick off a new build, and THEY’VE started building the site. True empowerment and liberation of workflows is upon us. Now let’s all go drink coffee!

Oct 08 2015
Oct 08

Brad Fisher (@bradallenfisher) started a #purespeed channel in our Slack a few months back. Since then, we’ve both been doing a lot of work to tune every aspect of Drupal page delivery. What will follow is a series of blog posts about tuning every part of the stack. We’ll cover:

In this first post, we’ll look at the patched core that ELMS Learning Network runs as part of its Drupal setup.

The core can be found here: ELMSLN Drupal 7 Core.

The patches can all be found here but we’ll walk through each one below:
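Mechanically, carrying a patched core is just the standard patch(1) workflow. Here is a self-contained toy run of that workflow; the file name and diff contents are made up purely for demonstration:

```shell
# Create a file, write a unified diff against it, verify with --dry-run,
# then apply it. Everything here is illustrative, not a real core patch.
printf 'function module_implements() {}\n' > example.inc
cat > perf.patch <<'EOF'
--- example.inc
+++ example.inc
@@ -1 +1 @@
-function module_implements() {}
+function module_implements() { /* static cache added */ }
EOF
patch --dry-run example.inc perf.patch && patch example.inc perf.patch
cat example.inc
```

The --dry-run guard is the useful habit: it confirms the patch still applies cleanly before touching core, which matters when you re-apply a patch set on every core update.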


Second loop in module_implements() being repeated for no reason

This essentially corrects a logical mistake in a big part of core, module_implements, preventing repeated calls to module_implements from having to re-process all the projects involved. module_implements is used in a ton of places in Drupal core and contrib, so optimizing it is a decent gain, as can be seen in this comment.


Improve theme registry build performance by 85%

This is pretty insane; in fact, I couldn’t believe it until I applied it and did some analysis, which you can see in the thread. This one patch cut page load times on theme rebuild and clear-all-caches down about 400ms!

UPDATE: Quicksketch has suggested an alternative patch to this function that goes even further than the one referenced here!


Add static cache to module_load_include()

This is another decent gain from just utilizing static caching correctly, and it gives significant gains in how long full cache clears take. This is important because, as everyone knows, rebuilding Drupal is painful from a performance perspective, and this can help make things happier for the unfortunate souls who kick these off or get the first hit after your site’s caches are empty. As my findings and others suggest, this took 1.7 seconds off of a full cache clear!!!


Waiting for table metadata lock on cache_field table

Captain of amazing performance optimization Mike Carper realized that cache tables were locking transactions when there were calls to truncate them. By renaming the table, truncating it, and renaming it back, he was able to reduce deadlocks. I don’t understand this one to be totally honest, but it works :)

Oct 08 2015
Oct 08

The buzz is all about PHP 7, and rightly so; some of the performance metrics it's pulling are pretty incredible, even relative to Facebook's HHVM. Brad Fisher wrote about a one-line installer to get up and going with PHP 7 if you want to play around with it. But then reality sets in...

It's still a bit too early to run PHP 7 in production, and anecdotally from testing, Drupal 7 has some issues with it. Nothing deal breaking, but nothing we're going to drop everything we're doing today to fix either (a few months from now, yeah, maybe). So, what can you do today to crank up PHP? Well, that's where we pick things up. The following assumes CentOS/RHEL 6.x, but similar commands will work for all the others (the locations change).

PHP.ini tuning

The first place to start is the thing that's pretty global no matter what version of PHP you are running. The settings I recommend are drawn from reading forums, feedback from modules like apdqc (which makes recommendations), and other sources.

You can find the full file here but the key settings are:
max_execution_time = 90
max_input_time = 60

memory_limit = 256M

upload_max_filesize = 50M
post_max_size = 50M

; Recommendation from @mcarper
realpath_cache_size = 1M
realpath_cache_ttl = 3600

Max execution time is useful because you might have a cron hit or drush command that takes a long time to process and you don't want things to time out. Memory limit I always set really high, again in case there's a cache rebuild that takes a lot of memory. I don't see things get anywhere near this, but it never hurts.

Upload and post sizes are useful if you have files that people upload to the site that go above the default, which is tiny (2 megs for uploads). The last 2 settings are recommendations from mcarper via the apdqc module (which we'll cover in a later posting).


If you run PHP 5.3, you'll want to get APC up and going. APC caches compiled PHP code and file locations in memory, which speeds things up a hell of a lot since PHP doesn't have to re-read and re-compile all the files on every request. We also recommend exploring APC's user bin as it can provide a drop dead simple way to make Drupal fly on limited resources. If you're running PHP 5.3...


sudo yum install php-pecl-apc


Apply the following settings via sudo vi /etc/php.d/apc.ini
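The referenced settings file isn't reproduced in this post, but as a rough sketch, a typical APC configuration for a Drupal 7 box looks something like this (the values below are illustrative starting points of mine, not the exact file referenced above):

```ini
; /etc/php.d/apc.ini -- illustrative starting points, tune to your RAM
extension = apc.so
apc.enabled = 1
; shared memory set aside for the opcode + user caches
apc.shm_size = 128M
; roughly the number of PHP files a Drupal site loads
apc.num_files_hint = 4000
; lifetime of opcode / user cache entries in seconds
apc.ttl = 7200
apc.user_ttl = 7200
; check file modification times; set to 0 on production for extra speed
apc.stat = 1
```

The big lever is apc.shm_size: too small and the cache fragments constantly, too large and you starve Apache of RAM.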

Upgrading off of 5.3

If you can, there are a lot of reasons to upgrade from 5.3, both performance and security. If you're currently on 5.3, you can run something like the following to get up to PHP 5.5. The biggest change in upgrading is that APC will stop working (if you have it currently applied) because Zend OPcache ships with PHP 5.5 (so it's basically core)! We'll upgrade using Remi, though Brad often recommends Webtatic.

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm
yes | yum -y --enablerepo=remi,remi-php55 install php-opcache php-pecl-apc php-cli php-pear php-pdo php-mysqlnd php-pgsql php-pecl-mongo php-sqlite php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml

This sets the repo to download from to Remi (a popular source of compiled packages) and then grabs a lot of the dependencies you'll want for running Drupal, like pear, apc, opcache, mcrypt, mbstring, memcached, gd and others.

Tuning Opcache

OPcache is a big deal as it's basically a better supported and FASTER version of APC. I read a few articles on tuning OPcache and, while there are different ways to do it, I'm running these settings in production with success.

Add these settings to /etc/php.d/opcache.ini

# optimize opcodecache for php 5.5

The key settings here are the max_accelerated_files, memory_consumption and the fact that it's enabled :)
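The full file isn't shown here, but a sketch of the kind of settings being described (illustrative values of mine, not the exact production file) would be:

```ini
; /etc/php.d/opcache.ini -- illustrative starting points
zend_extension = opcache.so
opcache.enable = 1
; memory (in MB) set aside for compiled scripts
opcache.memory_consumption = 128
; must exceed the number of PHP files Drupal core + contrib will load
opcache.max_accelerated_files = 8000
opcache.interned_strings_buffer = 8
; how often (seconds) to check scripts for changes
opcache.revalidate_freq = 60
```

If max_accelerated_files is lower than the number of files your site actually loads, OPcache silently stops caching the overflow, so err on the high side for a module-heavy Drupal install.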


Because APC is no longer the preferred opcode cache module and Zend OPcache doesn't support the user bin side of APC, if you still want that then you'll need a package for it. The command above to upgrade PHP to 5.5 includes apcu. If you want to add to this file by enabling progressive uploads then you can use this file.

Oct 08 2015

Before I say anything… DRUPAL 8 IS RC1!!! Great work to all the contributors that poured themselves into it to get it “done when it’s ready”. It’s looking like a really solid platform to build on down the road. Now..

I spent much of July traveling to Drupalcamps and it was so fun I’m doing it again! ELMS Learning Network will have presentations in the following locations over the next few weeks so if you are looking to learn about ELMSLN or how to optimize your existing Drupal sites and servers they run on, meet up with me at one of these locations!


Baltimore, MD (Oct 9th)

Tuning Drupal out the Wazzooo

We’ll look at how to improve the performance of Drupal by optimizing the stack it runs on. This talk covers all aspects of Drupal optimization including: Tools for identifying issues, Apache, MySQL, PHP, Varnish / Pound, cache bin management, and contributed modules you can get to make D7 fly. While these concepts will be shown within Drupal 7, many of the same techniques can be applied to Drupal 8 (or Drupal 6 for that matter).

We’ll point to multiple purespeed and other optimization posts on this site, as well as the ELMS Learning Network repository where we keep many of these recommended defaults. I’ll then take a bare-bones Virtual Machine that’s juuuuust powerful enough (at a baseline) to run Open Atrium 7.x and apply the techniques and changes I’m recommending so we can see the performance gains in real time. This is effectively the process I undergo when I come across any inherited server; I recently used it to save a Drupal site that was effectively dead, with calls missing its reverse proxy, prior to applying these settings.

This talk draws on a lot of documentation both in and out of the Drupal community and consolidates many of the recommendations by mikeytown2 (@mcarper) of advagg, boost and httprl fame.

DrupalCamp Atlanta

Kennesaw, GA (Oct 16th - 17th)

ELMSLN: Rethinking Systems Design

In this talk I’ll quickly outline the technology architecture powering ELMS Learning Network and how you can utilize this concept in the design of systems going forward. ELMSLN uses lots of smaller scoped sites that are all REST capable in order to achieve a unified user experience for end-users, while allowing for continuous and distributed innovation in educational technology. We don’t know what we’ll have to build to match the pedagogical needs of faculty, we just know that we have to.

It is also important to note that I said we; because ELMSLN is now a distributed group of developers and instructional designers at multiple institutions and organizations collaborating to build a better learning ecosystem. There are now multiple full time developers donated to this effort with adopters outside of Penn State now utilizing the platform.

Come learn about ELMSLN’s architecture which you can apply to any project as well as the state of the project and where we’re going next (hint: http://codepen.io/michael-collins/pen/memxpe ).

DrupalCamp Ohio

Columbus, OH (Oct 23rd - 24th)

Tuning Drupal out the Wazzooo

Same talk as Baltimore above though this session submission is more detailed.

Ignite talk: Drupal, a Thesis

This is the story of the secret life btopro lived the last 8 years in pursuit of a Master of Science in Information Sciences & Technology… boiled down into 5 minutes. I recently earned this degree for my work using Drupal to facilitate social change within an organization. The thesis outlines a concept I’ve coined Information Altruism (the new title of my personal blog on the topic), which states that donations of effort and work devalue and eliminate market potential in existing spaces. This socio-technical theory can be applied to existing information markets to fundamentally alter the way they operate.

Sep 15 2015

Drupal is fantastic at caching itself (I mean, you've gotta be with a code base this hefty).  Drupal 8 will see the dawn of render cache in core, which is why most metrics you see of D8 are uncached (it's basically cheating, it's so fast when cached). If you want to play around with render cache in D7, it can be done but still needs some work (Drupal.org actually runs the Render Cache backport to handle comments in issue queues, among other things).

What we'll be focusing on here, though, is not the render engine but the storage of that data. Drupal's extensive and flexible cache backend system allows you to modify where it draws cached data from so that it doesn't have to hit MySQL / the database. The idea is that if you can push commonly requested cache bins to locations other than MySQL (like RAM), then you can hit higher throughput and also reduce the time it takes to deliver a page.

Cache bin modification sounds scary, but it's actually pretty easy (though I wouldn't just go doing it blindly on production). You modify your settings.php file to include some additional configuration settings and Drupal uses the supplied project to handle the cache bin for that data. It's also the best way to confuse your Drupal friends as to why your sites are "stupid fast" on admin pages (and all pages for that matter).

Here are some different cache bin / backends you can implement (there are others):

For this post we'll focus on APC since it's the easiest to get hooked up and can be run on limited resources (I use it on everything). We'll use the APC module which uses the APC User bin (or apcu project in PHP 5.5+) to swap out the location of some cache bins. We're drawing from parts of the default ELMSLN shared_settings.php file that's included with all systems setup in the network.

To define a different cache backend drupal should know about, add something like this to your settings.php file:

$conf['cache_backends'][] = 'sites/all/modules/apc/drupal_apc_cache.inc';

This tells drupal that apc is supplying a cache backend for it to care about so that it loads this file early on in its bootstrap process.

From there, if you open up a mysql database explorer you can see all the different cache tables (all prefixed with cache_WHATEVER). To push one of these cache bins to draw from memory / APCu instead of mysql, you can add a line like the following:

$conf['cache_class_cache_WHATEVER'] = 'DrupalAPCCache';

Replacing WHATEVER with admin_menu, for example, will push the cache_admin_menu bin to be delivered from memory instead of touching MySQL. This bin might only get hit once, but that's one less time MySQL is being asked to do anything. Here are some bins that I generally push into RAM / a memory based cache bin system.

$conf['cache_class_cache_admin_menu'] = 'DrupalAPCCache';
$conf['cache_class_cache_block'] = 'DrupalAPCCache';
$conf['cache_class_cache_bootstrap'] = 'DrupalAPCCache';
$conf['cache_class_cache_entity_file'] = 'DrupalAPCCache';
$conf['cache_class_cache_entity_og_membership'] = 'DrupalAPCCache';
$conf['cache_class_cache_entity_og_membership_type'] = 'DrupalAPCCache';
$conf['cache_class_cache_field'] = 'DrupalAPCCache';
$conf['cache_class_cache_menu'] = 'DrupalAPCCache';
$conf['cache_class_cache_libraries'] = 'DrupalAPCCache';
$conf['cache_class_cache_token'] = 'DrupalAPCCache';
$conf['cache_class_cache_views'] = 'DrupalAPCCache';
$conf['cache_class_cache_path_breadcrumbs'] = 'DrupalAPCCache';
$conf['cache_class_cache_path'] = 'DrupalAPCCache';
$conf['cache_class_cache_book'] = 'DrupalAPCCache';

Generally speaking, you can push anything to APC cache bins and you'll start skipping MySQL requests all over the place. You could just issue $conf['cache_default_class'] = 'DrupalAPCCache'; and push everything to APC, but I wouldn't do this for a few reasons, including:

  1. You'll fill up available APC memory quickly for sites that change often (if they don't change frequently and aren't very big then I guess you could but I still wouldn't)
  2. APC doesn't clean up after itself, so when a cache expires it just creates a new one and fragments your memory (I issue an Apache restart during maintenance windows to overcome this, especially on smaller projects)
  3. Some things you can't serve from memory, like the form cache, or your form submissions will constantly be marked invalid (like every stinking page submitting data)

To solve this, I always put something like this at the end of my settings files to avoid hating myself later.

# Default DB for the ones that change too frequently and are small
$conf['cache_default_class'] = 'DrupalDatabaseCache';
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';

Some ending notes:

  • If you are doing high scale sites, memcache / redis are the way to go. These will give you dedicated services for delivering cache from memory and you can push a lot of the load off to these via distributed instances. They are a lot more complex to setup though which is why I didn't cover them here.
  • Other projects like Authcache can define specific cache backends (and you'll notice both authcache and apdqc in my example settings file). These will be covered in a later post as they both offer their own complexity and take Drupal even further than cache bin management alone.
  • If you're running off of a newer cloud hosting cluster like Digital Ocean, Linode or others that run on solid state drives (SSD), then you might want to look into using filecache as a storage backend. The reason is that SSD and RAM are effectively the same as far as seek time, but SSD and filecache can expire and clean up after themselves correctly. This allows you to push all bins (except form) to disk with no real downside. I've experimented with this in the past with positive results. The only caveat I'll give is to be careful setting up filecache, as it's the only cache backend I've experienced WSODs with, and it can be tricky to recover from unless you know what you're doing.
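As a sketch of what the filecache approach could look like in settings.php, following the same pattern as the APC example earlier in this post (the include path and class name here are assumptions; verify them against the filecache module you actually install):

```php
<?php
// Illustrative only -- check the filecache module's README for the
// actual include path and cache class name.
$conf['cache_backends'][] = 'sites/all/modules/filecache/filecache.inc';
// Push a bin to disk (cheap on SSD)...
$conf['cache_class_cache_page'] = 'DrupalFileCache';
// ...but keep the form cache in the database, as noted above.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
```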
Apr 25 2015

The year is 2020. We’ve managed, through automated testing, Travis CI, Jenkins CI and crontabs; to completely eliminate downtime and maintenance windows while at the same time increasing security and reducing strain on sys admins. There are very few “sys admins” in the traditional sense. We are all sys admins through Vagrant, docker, and virtualization. We increasingly care less about how scripts work and more about the fact that they do, and that machines have given us feedback ensuring their security. They don’t hope our code is insecure, they assume that it is and instead treat every line authored by a human as insecure.

We’ve managed to overcome humanity’s mountains of proprietary vendors, replacing their code and control with our own big ideas, acted upon by a community that builds the tools we need for us. We have begun to bridge the digital divide on the internet, not through training, but by refusing to be solely driven by financial gains.

We are open source. And we are a driving force that will bring about the Singularity (if you believe in such a thing). So we did it, gang; it took a while, but wow, we do almost nothing and we’ll always be employed because we know how the button or the one-line script works. Congrats! Time to play Legos and chew bubble gum, right?

Or is this just some far off, insane utopia that developers talk about over wine in “The valley”.

This vision of the future, 5 years out, isn’t as crazy as it might sound if you see the arch of humanity. In fact, I actually believe we’ll start to get to that point closer to 2018 in my own work and at a scale 1000s of times beyond what was previously thought possible. This is because of the convergence of several technologies as well as the stabilization of many platforms we’ve been building for some time; yes, all that work we’ve been doing, releasing to Github, drupal.org, and beyond… it’s now just infrastructure.

Today’s innovation is tomorrow’s infrastructure, and this becomes the assumed playing field that new ideas stand on. Take Drupal’s old slogan, for instance: Community Plumbing. That thing we all paid for significantly at one time, but now just take for granted; that thing is becoming even more powerful than any of us could have imagined.

So, enough platitudes. (ok maybe just one more)

“Roads? Where we’re going we don’t need roads.”

This next series of posts that will start to roll out here are things I’d normally save for presentations, war gaming sessions and late night ramblings with trusted colleagues. I’m done talking about the future in whispers; it’s time to share where we’re going by looking at all the roads and how they’ll converge. After all, the future of humanity demands it.

If you haven’t seen the video “Humans need not apply” then I suggest you watch it now and think. What can I do to help further bring about the end of humanity… Ok wait, not that, that’s too dark. Let’s try again…

What can I do, and what knowledge do I invest in, to help be on the side developing the machines instead of the side automated into mass extinction (career wise)?

Hm… still pretty dark? No no, that’s pretty spot on. What job are you working towards today so that 5 years from now you are still relevant? I hope that the videos to be released in the coming days provide a vision of where we’re heading (collectively) as well as some things to start to think about in your own job.

What are you doing right now that can change the world if only you spoke of it in those terms?

Apr 25 2015

I recently had to switch profiles for this website. In the process of doing that, I immediately afterwards said “wow, I feel like other people have had this issue”. Sure enough they have… on this blog early last year by our own @aitala in the post How to Remove Drupal Install Profile.

So, when I went to convert my own blog to use the Publicize distribution I said “NOT AGAIN!”, and read his post. At the bottom, he pointed to a project called Profile Switcher, and then this issue to add drush integration piqued my interest. Because I’m really cool, I decided this is how I’d spend my Friday night downtime… patching a project I had no need for until 5 minutes prior to starting to work on it :).

Below is a video showing how you can use the patch to run the profile switch drush command, as well as (of course) a drush recipe that automates everything in the migration process that can be done via drush.

Apr 21 2015

This screencast shows how you can use a cloud provider like Digital Ocean to install a working copy of ELMSLN by copying and pasting the following line into the terminal:

yes | yum -y install git && git clone https://github.com/btopro/elmsln.git /var/www/elmsln && bash /var/www/elmsln/scripts/install/handsfree/centos/centos-install.sh elmsln ln elmsln.dev http [email protected] yes

Obviously, this is for development purposes only at this time, but I hope it shows you a glimpse of the level of automation we are getting to and where you could easily take this in the future. We already have a Jenkins instance on campus that can perform these same operations against any new server after doing some manual (ew) SSH hand shakes and user account creation.

I pause in the video to let the installer run through, but still, that's all it takes to get up and going: copy and paste into a new CentOS 6.5 box and hang out for a few minutes. While you wait you can modify your local /etc/hosts file to point at the address (the command will print what to copy and paste where at the end).

This will eventually replace the current Vagrant install routine so that we're using the same exact one in what we recommend for blank servers, for travis builds, for vagrant and beyond. If you need more help with setting up ssh keys and using the digital ocean interface, some tutorials are linked to below.

Note: This is not an endorsement of Digital Ocean and they are simply included in this article because I've been messing around with their service for testing purposes.

Apr 21 2015

Welcome to the new Drupal @ PSU!

We hope you enjoy the site so much that we want you to have it. No really, go ahead, take it. Steal this site. We did, and we’re proud of that fact. This site is actually a fork of the Office of Digital Learning’s new site that just launched recently.

How did we steal this? Using backup and migrate and having access to the source for the theme, we were able to spin the ODL site up at a new location. From there, we used Drush Recipes to further optimize the site for SEO and performance. Then using Node Export to get a few default pieces of content migrated, Default Config + Features for all the exportables and profiler builder to author the install routine, we have the same site as before but sanitized and fully released to the public (including ourselves)!

This post isn’t just to brag about open sourcing a site though, it’s about how you can now spin this site up really fast (or any other) in PSU DUG’s Nittany Vagrant project. Nittany Vagrant has a few goals that make it unique compared to other Vagrant offerings:

We want to boil our jobs down to answering questions

We all run almost the same series of commands to get started: drush dl views, ctools, features, admin_menu, jquery_update, etc. So instead of all that, we wanted to come up with a method of standardizing site builds that puts newbies on the same playing field as long time developers. This has always been a driving force behind the Nittany distribution, but now we’re taking it a step further. Instead of having a traditional install profile, we have a server and site build process that tries to get at what you are looking to accomplish.

Right now it asks if you are doing front end development, if you need SEO tools, if you want the base line we recommend, and it even gives you the option of starting your new site build out from the Publicize distribution, drupal from a repo (which could be an install profile at that point) or by ssh binding to a server and pulling it down to work on locally!

We want to mirror a RHEL based environment

Most setups you see are Ubuntu / Debian based. That’s great except most of higher education has standardized development on RHEL for long term stability and support (RHEL supports packages for like 20 years or something). Nittany Vagrant is built on top of CentOS 6.5, which is about as similar as you’ll get without paying money.

We wanted a small, clean Vagrant routine

We’ve been burned before by Chef and provisioning scripts in Vagrant. We end up using Chef for Vagrant management and then spend more time managing Chef and its cookbooks than just fixing the small issues that crop up. This time around we wanted to use something as simple as possible so everyone in our community could jump in and help, so we stuck with bash, drush and drush recipes to do all the heavy lifting.

It’s already helping get other people involved as we have 6 members of the local PSU DUG community now with commits to the Nittany-Vagrant repo where in the past it was always 1 or 2.

Show me

Here’s a video I recorded showing the current state of Nittany Vagrant as of this writing and what you can do with it now just by answering questions. This is only the beginning though, as we want to get into site provisioning and replication, as well as the ability to move this entire process from Vagrant to being applied to remote servers (which has already been shown to work on Digital Ocean).

Mar 03 2015

To start..

This tutorial involves hacking core. If you aren't comfortable with doing that, you're probably in the wrong place :).  I created a dropbucket.org drop called Memory profiling in hooks which has all the code details in case you want to dig in yourself.  You'll need to modify includes/module.inc and also have devel module enabled for this to work.


I've been trying to do some low-level tuning while developing a course site in ELMSLN. As part of the project I'm constantly trying to push as many users into one system as I can prior to scaling resources. You can read about past efforts to push drupal core in this and other linked posts: Drupal speed tuning: analyzing and further optimizing Pressflow

Today, I wanted to get a better sense of why a page was taking 17.75 megs of RAM to render. The better we understand what runs to deliver a page, the better we can dig in and drop that number. The lower that number per page load, the more users we can support on the same resources, and the more we can keep costs down while increasing service offerings to our users.

Identifying Bottlenecks

(Image: the memory usage of each hook fired in this example Drupal page load.)

After applying the code from dropbucket, you'll see something like this whenever you load any page (see right). The hook modification monitors the memory in use at the start and end of every call to module_invoke_all. This tells you how much memory was required to fire off that hook.

It does the same thing for each function invoking that hook and then repeats the same thing for drupal_alter statements as well. This gives you a ton of data to sort through but as you do, you can start to get a clearer picture of both what it takes to deliver the page as well as where bottlenecks in RAM usage might be.
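The dropbucket snippet itself isn't reproduced here, but the idea behind the core hack can be sketched: wrap each hook implementation in memory_get_usage() calls and report the deltas via devel's dpm(). This is a simplified illustration of the approach, not the exact dropbucket code (variable names and the 50KB cutoff mentioned below are mine):

```php
<?php
// Simplified sketch of the module_invoke_all() change in
// includes/module.inc -- not the exact dropbucket drop.
function module_invoke_all($hook) {
  $args = func_get_args();
  unset($args[0]);
  $return = array();
  $hook_start = memory_get_usage();
  foreach (module_implements($hook) as $module) {
    $function = $module . '_' . $hook;
    $before = memory_get_usage();
    $result = call_user_func_array($function, $args);
    // Per-implementation delta, e.g. "book_node_view (584 KB)".
    $delta = memory_get_usage() - $before;
    if ($delta > 50 * 1024 && function_exists('dpm')) {
      dpm(format_size($delta), $function);
    }
    if (isset($result) && is_array($result)) {
      $return = array_merge_recursive($return, $result);
    }
    elseif (isset($result)) {
      $return[] = $result;
    }
  }
  // Total memory this hook cost across all implementations.
  if (function_exists('dpm')) {
    dpm(format_size(memory_get_usage() - $hook_start), "hook_$hook");
  }
  return $return;
}
```

The same wrapper pattern applies to drupal_alter(), which is how the alter-statement numbers in the screenshot are produced.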

Xdebug is a PHP extension that can also be useful for finding bottlenecks if you are interested in digging even deeper. I wanted per page load stats, broken down per role, which is why I went the modify-drupal-core route (this time).

We know what's bloated, now what?

In my testing I was looking for a few things specifically: what was slow, but more importantly, was it anything custom I had written. Because a major library that I use in ELMSLN has a lot of potential performance implications, I was looking for memory usage by functions starting with "cis".

As it turns out, memory usage was pretty low because I followed this great Lullabot article by Jeff Eaton while developing the calls that happen quite often. I put in the artificial cutoff that if a function was under 50KB I didn't care about it, so that I got less data (otherwise you get things that are like 48 bytes all over the place).

I did find a few functions that I need to do some additional tuning on / make decisions about as a result of this test. The functions in question were:

  • book_node_view (584 KB)
  • special_menu_items_init (514 KB)
  • og_context_init (475 KB)
  • context_init (448 KB)
  • token_module_implements_alter (262 KB )
  • feeds_forms (145 KB)
  • regions_page_alter (116 KB)
  • stream_wrappers (115 KB)
  • boxes_page_alter (110 KB)

In total, this is telling me that of my 17.75 MB page load, around 2.7 MBs can be attributed to 9 functions. There are hundreds / thousands of functions that build any given page and now I know that 9 of those account for 15% of the memory required to deliver any given page.

Decisions, decisions

From here, I have some decisions to make now that I have this information. First, I have to look at the low hanging fruit: functions I can get rid of.  Feeds (in the MOOC distribution) is only used to initially import content; after that it's there for convenience, but that convenience (feeds_forms) is apparently adding 145 KB to every page for every user regardless of whether they are using Feeds. That's a bit ridiculous, right?  So, when in production, turn off Feeds (or at least I can). We also only use the Boxes module for one block at the footer of the page; is it really worth having Boxes load on every page just so one block can be exportable? special_menu_items is used to prettify one menu system, which definitely can't be justified given its half-a-meg footprint, so it will most likely be dropped from the core supported package.

What can't I really change?

The second decision. Well, all systems in ELMSLN are built on top of Context, OG, Tokens, and (MOOC at least) the Book module. This makes it rather difficult to get rid of og_context, context_init, book_node_view, and token_module_implements_alter.  stream_wrappers is also something built into core which just takes time to run, so I don't have much of a choice there.

I may have to live with book_node_view, or look and see if I'm invoking it in such a way that it's called when it doesn't have to be. If you look at the API docs, it's basically just adding previous / next navigation to the bottom of pages and styling it. Maybe we can write a custom replacement that caches this call in memory, so that after it initially builds previous / next links it just has access to that data.  It might seem like a lot of effort, but to save half a meg per page load it could be a big win.
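As a sketch of what that replacement could look like (a hypothetical module named book_cache here; the cache bin and IDs are my own assumptions, not necessarily what an eventual implementation would do), hook_module_implements_alter() can take over Book's implementation and cache the built navigation:

```php
<?php
/**
 * Implements hook_module_implements_alter().
 *
 * Swap Book's hook_node_view() for our cached version.
 */
function book_cache_module_implements_alter(&$implementations, $hook) {
  if ($hook == 'node_view') {
    unset($implementations['book']);
  }
}

/**
 * Implements hook_node_view().
 *
 * Serve previous / next navigation from cache instead of rebuilding it.
 */
function book_cache_node_view($node, $view_mode) {
  $cid = 'book_nav:' . $node->nid . ':' . $view_mode;
  if ($cache = cache_get($cid, 'cache_book')) {
    $node->content['book_navigation'] = $cache->data;
    return;
  }
  // Let core build it once, then stash the render array.
  book_node_view($node, $view_mode);
  if (isset($node->content['book_navigation'])) {
    cache_set($cid, $node->content['book_navigation'], 'cache_book');
  }
}
```

Combine this with pushing the cache_book bin into APC (as shown earlier in the cache bin post) and the navigation costs almost nothing after the first build.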

What can I fix without much effort?

regions_page_alter is provided as part of the Regions module, one of my own. I can either fix this or, as we're starting to do in MOOC / the new ELMSLN UX; transition away from it (for the most part). This is something I have total control over though so it might be worth investigating if I can fix it quickly or at least reduce its footprint.  I may also just have to remove some modules like boxes, regions, and feeds which are all sure to drop the page load.

What should I explore "fixing" if its even possible?

Well, my top four calls account for almost 2 megs on every page load, so those are obviously where I have to start. The problem I may find, though, is that Context and OG are heavily used by a ton of people; I doubt I'll find inefficiencies there, but it's possible.  Special Menu Items is worth investigating, though it might be better served to just remove it from the core package. For book links I already outlined a work-around, which would be to cache the call to the data needed from it. As Book is core and we're this late in the game, I'm not going to see a patch accepted to core to fix this in my lifetime :).

What's 2 megs in the grand scheme of life?

Let's say I have 4 GB of ram on my server, 2 GB set aside for Apache transactions. From there we can do some simple math to see how many concurrent connections we can support. Again, the goal here is do as much as possible with as little as possible (yes pushing this into the cloud / NGINX / advanced caching techniques for auth'ed traffic can drop it even more).

Me logged in

2048 Megs / 17.75 MB/user = 115 users

I'm not the target audience though, so me logged in as a student (when giving the role ability to view devel output)

2048 Megs / 16.75 MB/user = 122 users

Right there, by switching roles and analyzing I was able to drop 1 meg which translated to 7 additional users on the same resources. Now let's factor in a theoretical reduction of that bloated 2.7 MB.

2048 Megs / 14.05 MB/user = 146 users

This is where low-level performance tuning can have big impact. If I can clean up just 2.7 megs from the page load, this would increase our load potential by 24 users, a 16% gain in scale without actually scaling up our systems.

By tuning for the least common denominator, when we start to scale up and get into distributed memcache servers or add a more realistic amount of RAM, these percentages all add up to a lot more users on a lot less resources. With a theoretical 16 GB of RAM allocated to Apache we could serve 1,166 concurrent users.  As past numbers have shown though, even with 2,200 - 3,500 students a semester, we're never anywhere close to this number of concurrent users.


Since performing this audit I've been able to decouple our system (in dev testing) from og_context, regions and boxes modules. Then, using hook_module_implements_alter I was able to further optimize the book_node_view by creating a new module called Book Cache and completely removed special_menu_items_init which wasn't serving the purpose we have for the module. I also discovered a performance enhancement for the og_ui module which will increase the performance of every node load once accepted! https://github.com/Gizra/og/pull/38

After applying these techniques and disabling feeds I was able to get things down to the following numbers when logged in as me:

14.5 MB on the test page, which is 3.25 MB smaller than when I started!

As the target audience role, the drop was from 16.75 MB to 14.75 MB, a 2 MB gain for typical users!

2048 Megs / 14.75 MB/user = 139 users which is a 12.25% gain over the original configuration

Page delivery times I've optimized in the past via the APC user bin, so authenticated page delivery times are usually around 250ms for a site running around 145 modules.

Feb 14 2015


Something that inspired me recently to write about DUG are the efforts of Mediacurrent, which has been pushing out a series of posts talking about how they give back, being a lot more open about the use of time to contribute (which is awesome). As a result, I wanted to describe the culture of Drupal @ PSU and how we’re contributing back to Drupal and our members.

Who are we?

Recently, the Penn State Drupal User's Group has been getting a lot of interest from people who want us to develop things for them. The funny thing is that PSU DUG is kind of like Fight Club; it doesn't actually exist in the way traditional organizations do. Yes, we meet monthly; yes, we all do cool things, sometimes in collaboration. But there is no official "I'm going to take this project to the _____ Drupal team" (yet).

How do I join?

With that in mind, I wanted to let you know what we're looking for in community members, because there isn't a "go apply to ___ group" path to becoming part of DUG; it's a distributed community of people who love Drupal and love higher education / Penn State as well. For starters, the easiest way to get into our community is to search the job board for jobs with the word "Drupal". We have a lot of them, some part time, many full time. You can find them at http://psu.jobs/ .

What’s in it for me?

So why would you want to work here? Because We Are, a community. A community of experts and novices, humble in whatever role, without labels and designations. We are a community of equals, seeking to better ourselves, our units, our university, and ultimately the web as a whole through open source contributions.

We also are a community that invests in itself. PSU DUG pushes its members to present at DrupalCamps, learn more about Drupal and each other’s workflows through open conversations, and provides free training events on campus to build up the knowledge base and empower others to do the same. We are building the tools by investing in our own people to be a powerhouse of innovation in the drupal community. We Are contributing to the ecosystem you all use every day and we love doing it.

So what kind of contributions?

Those contributions go far beyond traditional code publishing. Yes, many PSU DUG members submit patches to drupal.org issues, write their own code in-house, and publish code on drupal.org in the form of modules, themes, distributions, and drush plugins. We write code, a lot of code, and contribute a lot of it directly back to drupal.org. But we also help run the education-focused drupalcamppa.org.

We get involved in issue queues, and we answer questions on Slack and IRC channels. We (internally) answer questions on company intranets. We build bridges to other camps, especially DC Ohio, DrupalDelphia, and DC New Jersey, through presentations, word of mouth, and sharing ideas about how we run camps and what has and has not worked well before.

We write documentation and blog on this website, which aggregates to Drupal Planet, to showcase what we're working on and how we've learned, succeeded, and failed in various development efforts.

What will I learn?

We push industry standards and norms, keep tabs on them, and help influence them. We recommend and try to steer people toward Sublime / Vagrant / Git for local development and virtualized infrastructure on a powerhouse of a VMware cluster, which is cheap, fast, and supported. We push the use of automated deployment and testing engines in the form of Behat, Travis CI, and Jenkins CI. We push provisioning tools like Chef, Puppet, and Ansible. Simply put, we push skills that will make you shine, because in doing so you'll be more likely to build up our community and become immersed in it.

We Are what the Drupal community was founded on: small, distributed teams and individuals, wanting to make a living meeting goals but also wanting to make the world a better place through the adoption of open source technologies.

Sign me up

If this sounds like a fun place to work, I highly encourage you to engage with our community. I feel like I woke up from a dream of where I’d want to work, and found myself in the middle of the coolest place I could imagine.

Job Board

Jan 27 2015

Let’s face it, we have a problem. It doesn’t matter how powerful what you build is if no one wants to use it. How can the modern CMS hope to compete with simple drag and drop, one-click authoring services that can produce the majority of websites people want to build? Do you really think the answer to this can still be found on Drupal’s edit form or with the panels ecosystem? Will open source CMSs fade from popularity because we continue to ask human beings to design content with a WYSIWYG? Well, we aren’t waiting to find out…

CKEditor in core is not going to be enough to make authorship shine in D8 and beyond. We need to focus on the thing that everyone seems to work on last, and that’s the authoring experience. Members of the Penn State Drupal community are organizing a sprint at DrupalCamp NJ which will focus on a radical overhaul of the “page” authoring experience in Drupal. If you have ideas, you should seriously consider joining us in this endeavor.

Our starting point can be seen in this InvisionApp prototype. We are searching for that sweet spot between simplicity and flexibility, and we think we might be close.


  • Create an authoring experience that rivals the “web-site-tonight” platforms

  • Build against JSON endpoints from a nearly static HTML interface; making it “headless”

  • Create a flexible yet simple data model via Drupal entities

  • As part of the push towards “headless” development, we want to be able to support Drupal 7, 8, Backdrop and theoretically anything that would supply the IA and RESTful endpoints

  • Not be locked into any specific modules for production (though we will support RESTWS first)

  • Build upon the ideas of Panels and Omega (3.x) while sticking to responsive design principles in line with Zurb, Bootstrap, and other popular frameworks

It is a moonshot born out of a three-plus-hour, wall-wide whiteboard jam session between me, Michael Collins (@_mike_collins), and Michael Potter (@hey__mp). We are spearheading the development of a prototype built in either AngularJS or ReactJS, talking to custom D7 "element" entities exposed via RestWS endpoints.

This is the kick off development sprint and we hope to see you there with your feedback, ideas and code :)

Dec 24 2014

This is a review of everything that happened in ELMS Learning Network through 2014. In 2014, we picked up an additional member institution that has been using ELMSLN for online courses. The Wisconsin Law School's Center for Patient Partnerships was an avid supporter of ELMSLN even prior to adopting it in the past year, and they are very happy with the flexibility and control it gives them over learning environment development.

They were also instrumental in landing the Wisconsin Law School an innovation grant to pilot ELMSLN for the entire school. The bulk of the planning and figuring out what exactly that means happened in 2014, and deployment and course development will really pick up in February 2015. We are going to collaborate on getting ELMSLN integrated with their single sign-on system as well as fleshing out notions of shared content resources and discussion forums. As a result of our relationship, they have also dedicated a junior developer to the project at least through the summer.

Deployment and hires have also picked up locally as we now have the following groups at Penn State working actively on their own ELMSLN instances:

  • College of Arts & Architecture (A&A)

  • Eberly College of Science (Eberly)

  • The Rock Institute

  • College of Agricultural Sciences (AG)

  • College of Health and Human Development (HHD)

Prior to 2014, Arts & Architecture (where I work) was the only group on board. In addition to being on board, A&A hired two part-time junior developers to help with development efforts. Eberly has also contributed a large portion of a full-time user experience lead's time to the project to improve usability of the platform across its many systems.

A&A is still the only production instance delivering courses to students, but the Rock Institute is actively developing courses to launch mid-spring. Eberly and AG are currently engaged in course migrations / clean-up, and HHD will begin the process in the spring, as well as contributing part of a full-time site builder's time to the work.

Much of 2014 was spent working on automation, build quality, and accuracy of builds. This focus on automation and accuracy can be seen in increased adoption locally, as well as a group in the UK joining the project actively in 2015. Vagrant is another technology we've adopted to increase the quality of our builds, as we now have completely destructible environments in which to test and build new functionality.

We now employ two robots dedicated to improving the quality and scalability of work without increasing demand on people. Travis CI is a quality assurance robot that ensures every code push for ELMSLN is working. This ensures that our Vagrant instance, as well as anyone who gets the project at that point in time, will be able to successfully install it.

Our other robot, Jenkins CI, just started helping recently. Right now he helps maintain the accuracy and quality of builds and upgrades internal to Penn State. For example, nightly Jenkins runs a script against all deployments to ensure that the file system is in place accurately and being managed securely. Not that systems magically become insecure, but people with access to systems can make mistakes; Jenkins now ensures those mistakes would only be temporary.
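A minimal sketch of what such a nightly file-system check might look like (illustrative only, not the actual Jenkins job; the policy of "no world-writable files, read-only settings.php" is my assumption here):

```shell
# Illustrative audit in the spirit of the nightly Jenkins run described above:
# report world-writable files and settings.php files that still have the
# owner write bit set (Drupal best practice is a read-only settings.php).
check_perms() {
  root="$1"
  # Any file writable by "other" is a red flag on a shared deployment.
  find "$root" -type f -perm -0002 -print | sed 's/^/world-writable: /'
  # settings.php should typically be mode 444 once a site is installed.
  find "$root" -type f -name 'settings.php' -perm -0200 -print \
    | sed 's/^/writable settings.php: /'
}
check_perms "${1:-.}"
```

Jenkins would run something like this against each deployment's docroot and fail the job (or fix the modes) whenever the report is non-empty.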

Another thing that the team did is talk about ELMSLN. The following cities had presentations about ELMSLN, most by team members:

1. Austin, TX - DrupalCon

2. Atlanta, GA - DrupalCamp ATL

3. Boston, MA - Campus Tech

4. Columbus, OH - DrupalCamp Ohio

5. Madison, WI - DrupalCamp Wisconsin

6. Philadelphia, PA - Drupaldelphia

7. Pittsburgh, PA - DrupalCamp PA

8. Silicon Valley, CA - BADCamp

9. State College, PA - Web Developers Conference

10. Washington, DC - Open Ed

Data also has the ability to overwhelm. I constructed a plugin / website to monitor just how much of an impact ELMSLN is having outside of “elms” outright. http://dd1.btopro.net/drupal_org_user_data/btopro

As of this writing, 64 projects contributed to drupal.org to improve Drupal as a whole (and be used for ELMS) have been downloaded 775,050 times and are currently reporting 11,589 installations. That doesn't include ELMSLN proper, as its download information is on github.com and not connected to these numbers. That's ~1,200 systems managed by developers whose success hinges (in part) on parts of the ELMS ecosystem. This means that they'll jump into contrib and expand and help build upon ELMSLN without even realizing it.

We also saw commits to ELMSLN-contributed modules from Acquia, Blink Reaction, and Bryce Jordan Center employees. These are contributions that no other educational technology platform would be able to pick up, and a testament to the years of investment in abstracting the platform built on top of Drupal. It's awesome and incredibly humbling to have field experts downloading and contributing back to your code, and I'm so happy to be a part of the larger Drupal community / ecosystem.

It’s been a whirlwind kind of year, with more big news hopefully to be made in early 2015. Both the successes and the new problems to tackle as a result are incredibly overwhelming for me. Every positive tweet, comment, kind word, email, or other communication is not forgotten; they keep me going. They keep me knowing that we are making an impact and that we can achieve my ultimate goal for the project: make the world a better place through better educational experiences.

In the next 20 years, 2+ billion people will “come online”. What kind of world will we provide them? One riddled with the educational infrastructure problems we have today? Or an even more open, more transparent world, where people can set up their own learning ecosystems with the click of a button, at no cost, to help educate their family and friends?

The Singularity is forcing us toward a hub-less society, one in which everyone is able to produce knowledge and we can all learn from each other. We need learning technology that is free and incredibly flexible, to allow for the myriad ways people learn and will want to structure information. No more silos. No more boxed solutions.

Happy Holidays / New Year. Let’s make 2015 even awesomer than 2014.

Oct 20 2014

TL;DR: I've created a fork of Pressflow for the purposes of conversation and analysis -- https://github.com/btopro/Presser-Flow-FORK

History lesson

Pressflow is a very popular fork of Drupal 6 that led to many improvements in Drupal 7 from a scale and speed standpoint. It required some tuning, but you could start to make it fly and be viable at high scale. Pressflow for Drupal 7, while maintained, really only has a handful of improvements; the most notable I've found is turning to APC to read in copies of JS/CSS files instead of expecting only the file system. This minor change can really improve speed (5% or so) for sites with lots of JS/CSS files (uhh, any Drupal site).

The other improvements are mostly in hook flexibility, where it allows developers to tap in and do crazier stuff. It makes sense, then, that many Pantheon engineers are involved in its development / maintenance, given the crazy scale of their infrastructure / deployments.

Forking Pressflow

If you've read my past posts on here, you'll probably notice a trend: I'm obsessed with performance tuning. Any time you can get more responsiveness without more hardware, I'm very, very happy. I've been running a modified Drupal core for ELMSLN for some time now and decided that instead of keeping these changes / tuning to myself, I'd try to document and test them and see if any of the changes make sense for Pressflow.

In https://github.com/btopro/Presser-Flow-FORK you can see 3 folders:

  • _PATCHES - all the patches (from drupal.org) utilized in the metrics
  • _RECIPES - a drush recipe that auto-optimizes to the level used in testing; there are also recipes for each of the sites in the test so you can see exactly what was used for testing.
  • _METRICS - XLS file with detailed metrics of how testing was performed, where, and what combinations

Table of performance metrics associated with the testing

The full metrics are included in the file; this is just part of the summary. All values in green were improvements over the baseline (the stock version of core for that site). Red means a regression. Bold marks the best value seen in that category.

The fork is called Presserflow (fork) in the image above, and you can see it performs pretty well vs. Pressflow (it should; it's built on top of it). It performs relatively similarly, with the exception of noteworthy improvements on systems dd1 and dd2 around front page load. The other values are pretty similar, Pressflow being slightly better on some and Presser on others, suggesting the difference is probably statistically insignificant.

Then, to make things funny for the "you can't scale that bloated CMS" crowd, I ran it against settings similar to what I run in production on all of our ELMSLN instances (kitchen sink). As you can see, kitchen sink plus an APC / memory-based cache bin system absolutely slaughters stock Drupal, Pressflow, and Presser instances. In all categories except memory usage (at times) it destroys the competition.

If you can use APC / memcache or any other advanced caching system, I highly recommend it. We use APCu bins to make Drupal insanely fast even on merely acceptable resource allocations.

Past posts in the ELMSLN performance series:

Sep 30 2014

Drush Recipes has come a long way since the project was first announced on planet a month ago.

Highlights of Beta

  • Testing, a lot a lot a lot of testing
  • Travis CI - we’ve passed our own builds in our latest testing of beta4 against PHP 5.3, 5.4, 5.5, and HHVM
  • Multiple contributors, including a junior developer currently dedicated to QA / Travis
  • Multiple camp presentations to elicit feedback, all of which was incorporated into the project
  • Production usage - ELMSLN now includes drush recipes and implements the dr_safe_upgrade script
  • Documentation - below is a playlist that documents each command and the options you can use with it
  • Additional recipes added from real-world usage and community feedback

Work is currently underway to build a Drupal site development wizard (based on PuPHPet) that will use drush recipes to ask a developer a series of questions and start building a site with all the functionality they request. This is a fundamentally different way of building a Drupal site, one that can easily be captured, transferred to another developer, and deployed more easily.

No, we haven’t solved Features entirely, but we’re getting there in many of the sticky areas (like permissions and roles). Using the included ddt function, you can take a finished site and turn it into a recipe to give to someone else. For example, the command below will cook a recipe hosted on the http://drush.recipes/ web service (alpha) against a Drupal site.

drush cook http://drush.recipes/sites/default/files/recipes/drecipe_uploads/drush_r...


This playlist illustrates all the commands that are now linked to on the drush recipes project page.

ELMSLN / Production usage

My work on ELMSLN is what inspired drush recipes. I started writing a lot of drush commands in succession against different, well-structured aliases. As I started to write YABS (yet another bash script), I noticed that I was doing things I imagined everyone was doing: running a series of commands in a pattern to get stuff to happen. These patterns exist everywhere, and when used together, they have meaning. dr_safe_upgrade.drecipe was the first recipe I made by hand; it does a registry rebuild, takes the site offline, runs updates, brings it back up, clears caches, runs cron, and then seeds entity caches.
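As raw Drush 7-era calls, that sequence looks roughly like the sketch below (a dry-run illustration, not the recipe file itself; `drush rr` assumes the registry_rebuild extension is installed, `@site` is a placeholder alias, and the final entity-cache seeding step is omitted since it has no single stock command). Drop the `echo` to actually execute each step.

```shell
# Dry-run sketch of the dr_safe_upgrade sequence described above.
safe_upgrade() {
  site="$1"
  for step in \
    "rr" \
    "vset maintenance_mode 1" \
    "updb -y" \
    "vset maintenance_mode 0" \
    "cc all" \
    "cron"; do
    echo "drush $site $step"  # remove echo to run for real
  done
}
safe_upgrade "@site"
```

Wrapped in a recipe, this whole routine collapses to a single drush call against any alias.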

It was a simple series of commands that originally appeared in a script I was writing for ELMS Learning Network to spider the network and upgrade all systems. As you can see, that script was making a lot of drush calls against the different parts of ELMSLN it was self-aware of. Now we have one drush call instead of several spread across multiple lines. This led to a 100-line reduction in bash scripting while providing a stable drush upgrade routine that can be applied to any Drupal site ever created!

Just look at the commit that pushed drush recipes into usage in this script. The red lines are code that was removed, while the green lines were additions. This removed 97 lines from the file while adding only 4!

