Jan 06 2020

In my last blog, I talked a bunch about some of the basics of our development efforts around Acquia ContentHub 2.x. There's a lot more I'd like to discuss about that module and the effort our team has put into it, but one of the comments I got on Twitter specifically asked about the Drush commands we packaged with 2.x: how they're used, and what you can do with them. So, in an effort to keep showcasing both the good work of the team and the capabilities of the product, I'm going to dedicate this blog to that topic.

A Word about Data Models

I could certainly just document how to use the commands, but as with anything, I think understanding the theory involved is helpful, both for a greater understanding of the product itself and for the principles of good Drupal development. In the previous blog, we talked about the 5 basic steps of data flow, but I didn't get into the specifics much. So I want to talk first and foremost about how ContentHub finds and serializes data to be syndicated (since this applies directly to one of our Drush commands).

ContentHub 2.x relies heavily on a new contrib module called depcalc. Depcalc has a rather simple API that allows a DependentEntityWrapper instance (a super lightweight pointer used to load a particular entity) to be parsed for other dependencies. This API then recursively calls itself, finding all dependencies of all entities it encounters along the way, until it runs out of stuff to look at. Many data models will be processed pretty quickly, but plenty of others process slowly. Deeply nested dependency trees take time to calculate. I've seen trees of 1200-1400 entities take 6-8 minutes to process, but I've also seen trees of 300-400 process in just a few seconds. I've also seen data models that didn't come back with results after over 30 minutes of processing. The difference is HOW they're structured, so it's critically important to understand your data model. If you don't, you might not get the results you want or expect. ContentHub has a number of APIs dedicated to streamlining this functionality, and I intend to discuss them at length in other blog posts, but for the sake of this blog, it's just important that we establish a baseline understanding: different data models have different processing characteristics. YMMV.
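
To make that concrete, here's a minimal sketch of handing an entity to depcalc yourself. The service id and class names here are from my memory of the module, so verify them against depcalc's code before relying on this:

<?php
use Drupal\depcalc\DependencyStack;
use Drupal\depcalc\DependentEntityWrapper;

// Load the entity we want dependencies for.
$node = \Drupal::entityTypeManager()->getStorage('node')->load(1);
// Wrap it in the lightweight pointer depcalc parses.
$wrapper = new DependentEntityWrapper($node);
// The stack accumulates every dependency found during the recursion.
$stack = new DependencyStack();
$dependencies = \Drupal::service('entity.dependency.calculator')
  ->calculateDependencies($wrapper, $stack);
?>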

Calculating Dependencies

In order to understand the characteristics of your data model, you need to understand the basics of how depcalc is going to find dependencies for your entity (or entities). First, when an entity is handed to depcalc, it dispatches a dependency calculation event with the DependentEntityWrapper. Event subscribers tend to focus on one classification of dependency. For instance, one might check to see if the entity is a content entity with entity reference fields, and then dig through all those reference fields finding subsequent entities for processing. Another subscriber might execute if the entity is a config entity, and process through Drupal core's config dependencies. Yet another subscriber might look exclusively at text areas, finding the input filter that was used to process the data in the field. A handful of these sorts of subscribers exist within depcalc specifically, and since it's an event subscriber pattern, if you have a custom relationship that won't be calculated through our existing subscribers, you can always add your own; a hedged sketch of one follows. As I previously mentioned, all entities found this way are recursively calculated until we find no new entities. We don't attempt to deal with simple configuration, and non-entity data is not yet used in our syndication pattern.
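
Here's what such a subscriber might look like. The event constant, event class, and addDependency() method are written from memory of depcalc's pattern, and the field is invented for the example, so treat this as a starting point rather than copy/paste material:

<?php
namespace Drupal\my_module\EventSubscriber;

use Drupal\depcalc\DependencyCalculatorEvents;
use Drupal\depcalc\DependentEntityWrapper;
use Drupal\depcalc\Event\CalculateEntityDependenciesEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class CustomRelationshipSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents() {
    return [DependencyCalculatorEvents::CALCULATE_DEPENDENCIES => 'onCalculateDependencies'];
  }

  public function onCalculateDependencies(CalculateEntityDependenciesEvent $event) {
    $entity = $event->getEntity();
    // field_related_id is hypothetical: a plain integer field that
    // secretly stores a node id no existing subscriber would treat as a
    // relationship.
    if ($entity->getEntityTypeId() === 'node' && $entity->hasField('field_related_id')) {
      $target = \Drupal::entityTypeManager()->getStorage('node')
        ->load($entity->get('field_related_id')->value);
      if ($target) {
        $event->addDependency(new DependentEntityWrapper($target));
      }
    }
  }

}
?>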

Identifying Problematic Data Models

Now that you understand the basics of HOW we calculate dependencies, it's up to you to look at your own data models and make determinations about their compatibility with the process we're about to attempt. There are a few obvious guidelines to follow, however.

  1. If an entity of a particular bundle has references to other entities of that same bundle, it's possible you will end up having a dependency tree that includes ALL entities of that bundle.
    One of our early clients asked us why they were getting ALL news articles when they exported any one news article. We looked at the data model and noticed that they had "previous article/next article" entity reference fields, and suddenly this was a very easy question to answer. Similarly, I've had customers with less clear-cut relationships between entities of "like" bundles: organizations related to other organizations. This can lead to situations where calculation on one organization node might happen rather quickly, while others simply never seem to finish. In future blogs we'll talk about how to handle these situations, but you need to identify up front if you have them.
  2. Paragraphs
    We support Paragraphs, but if you've used it for page layout, it can be a real bear to calculate depending on how deeply nested it is and how many paragraphs are used on an average page. Also, we don't move Twig templates around, so the receiving site likely won't have the templates to interpret the incoming paragraph data, and it will be displayed oddly.
  3. Lots of entity references
    If you have a single entity bundle with many entity references, this can also be indicative of problematic data modeling, and can make it difficult to predict how long entities of a given bundle might take to process.
  4. Custom Field Types
    This isn't really "problematic" so much as you should be aware that ContentHub is going to make a "best guess" at field types it doesn't understand. If you have custom field types, or even contrib field types we've not yet written support for, some of your data may be incorrect or missing. If you find this to be true for any contrib field, feel free to file a ticket and we can look at what it would take to get it supported.

Ultimately, there's no harm in trying, but you might be surprised by how many entities are actually related to each other in these circumstances. In a future blog I'll detail how we break down these entities and make them processable even when they might be problematic. Also, keep in mind that if an entity you want to syndicate references an entity with the characteristics we've described above, all the same problems can apply.

Exporting Via Drush to Flat File

With all my caveats out of the way, let's get to the meat of this blog and talk about using Drush to export our data. We're going to use a file in the local file system to store our data for output. In order to do this, though, we'll actually need two files. ContentHub's Drush export command works on the idea of a manifest file that defines the specific entities we want to see exported, so we must first create the manifest file. The file can be named anything you like, so you could actually have a series of manifests for different use cases. The manifest can have 1 entity, or however many you need. Start small and work your way up to whatever you might need. In the normal operation of ContentHub with Acquia's service, we seldom need to move many top-level entities at once. While a lot of care was taken to make the import and export processes as lean as possible, Drupal core still has static caching of entities baked deeply into entity storage, which can exhaust memory if you load lots of entities over the course of a single bootstrap.

The manifest file should be in yaml format. We support referencing entities in "type:id" or "type:uuid" formats. Your manifest could look as simple as this:

entities:
  - "node:1"

A more complicated manifest file might look thus:

entities:
  - "node:1"
  - "node:91644a22-8ec8-413e-91fb-b928dba88fd7"
  - "node:315a0239-57d2-4dcb-89bd-f9a76851b74c"

Our first example exports node 1 and its dependencies. Our second example exports node 1 and the two other nodes by their UUIDs, along with all the dependencies across all 3 entities. If all of these nodes were of the same type, the supporting config/content entities common to them all would only have one entry in the resulting exported output, so this can be a fairly efficient way to group entities together by top-level bundle. Also, since we can export config entities, a manifest file can reference those too. If you wanted to export a view or some other config entity with these 3 nodes, you could absolutely do that; you just need to use the Drupal entity type id, and the id or uuid of the entity you want to export.
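
For example, a manifest mixing content and config entities might look like this (the view id is purely illustrative):

entities:
  - "node:1"
  - "view:frontpage"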

Let's assume our manifest file is named "manifest.yml". We can execute our drush command from inside the Drupal directory like so:

drush ach-elc manifest.yml

Once the dependency calculation and serialization processes are complete, this will output what we call "CDF" directly into your terminal. CDF is a custom json data format we use for communicating about our entities. In future blogs, I'll break down CDF into its various components so that it's easy to understand and dissect. If you want to capture this CDF to file, we can do so with typical CLI notation:

drush ach-elc manifest.yml > manifest.json

A Quick Word About File Handling

CDF doesn't attempt to make the binary representation of files portable. There are obvious reasons for and against doing this, but currently ContentHub depends on sites and their files being publicly accessible. We currently only support the public file scheme (though we want to support S3 and private files in the long term). If the site you performed your export on is not accessible from the site you import into, then your files will be missing once the import is complete.

Importing Data from CDF File

Assuming we have successfully exported CDF data to a local file, we can attempt an import. Let's discuss the basic requirements of the receiving site:

  1. Code base must be the same
    All the same modules must be available within the code base. They don't have to be enabled or configured (ContentHub will do that), but they do have to be present.
  2. A blank canvas is always best
    While not a strict requirement, a blank canvas in terms of content and configuration is always going to demo best. I'd suggest using the "Minimal" installation profile for your first attempt (see the command just after this list). Keep in mind that ContentHub attempts to unify your configuration settings, so if both the originator (we call these sites Publishers) and the receiver (we call these sites Subscribers) have the same entity bundles, ContentHub is going to bring the receiver's configuration in line with the originator's. That's fine most of the time, but if your setup is more complicated and includes any sort of configuration conflict, we'll need to solve that separately. While this CAN be done, you probably don't want to attempt it for your first try using ContentHub, which is why I'm suggesting the Minimal profile.
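
If you need that blank canvas quickly, Drush will happily install one for you (this wipes the target site's database, so only point it at something disposable):

drush site-install minimal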

With those guidelines in place, we are now ready to attempt an import. Be sure to point the drush command at the json file you created, NOT our original yml file.

drush ach-ilc manifest.json

This should result in terminal output that tells you how many items were imported. Something like:

Imported 73 from manifest.json.

Conclusion

Congrats! You've just successfully moved content and configuration from one Drupal site to another via Acquia ContentHub! As I've mentioned before, ContentHub is actually backed by a service for doing this at scale, but the benefit of having a Drush command for debugging and testing purposes is really invaluable. It works at a small scale, and it makes it possible to trial ContentHub's features. We actually do this programmatically in our test coverage a lot to prove that ContentHub is working as expected and covers the various use cases we want to see.

In future blog posts I'm going to dissect CDF and show what it's doing, and how it does it. I'll also be posting about manipulating your data models, controlling what field data is and isn't syndicated and calculated, and probably a general discussion of the different events ContentHub & Depcalc dispatch, what they're used for, and how they can customize and streamline your data flows. As always, I'm super interested in any feedback people have, and would love to hear about your experience.

Jan 02 2020

So my blog's been offline for a while now. I think there was a security issue sometime around 8.2.2, because I just upgraded the public version of the site from 8.2.1 directly to 8.8.1. I actually did an upgrade to an 8.8.0 alpha that I used to bootstrap the upgrade to 8.8.1, but the public-facing site made a pretty big jump. Some of this is due to laziness on my part, but a pretty significant portion of my disappearance has been due to a new (to this blog) position within Acquia. I've been at Acquia for nearly 6 years now (in April of 2020), and for almost the last 2 years, I've been helping Acquia re-develop the ContentHub product.

ContentHub is a really interesting problem space. I tend to get sucked into these very nuanced, support-everything-imaginable situations with Drupal module and core development, and ContentHub is no different in that respect. Ultimately, it's intended to be a syndication engine, taking content from one site and materializing it on another. However, the nuance of doing this between Drupal sites is quite detailed. In order to solve this, we approach content syndication in a 5 step process, with 2 steps happening on the originator, 2 steps happening on the receiver, and 1 step happening within our service.

  1. Dependency Calculation (Originator)
    Any given piece of data within Drupal might depend on dozens of other pieces of data. The common "article node" that Drupal 8's standard profile ships with depends on around 40 other entities at the very least. These include the node type, field storage and config, author, tags, image(s) and any supporting entity bundle/field/view mode/form mode data each of those entities requires. Even simple entities are quite complicated in terms of what data they require in order to operate. We can't really "depend" on any of these things existing on the receiving end of our syndication pipeline, so we have to package it all up, send it en masse, and let the receiver figure out the details.

  2. Data Serialization (Originator)
    Ideally, this is a "solved" problem in Drupal 8. The 8.x-1.x version of ContentHub tried to use the serialization engine that ships with Drupal core to do much of this work, but that approach tended to come up short when dealing with multilingual data. This may have been a flaw exclusive to ContentHub 8.x-1.x, but ultimately, when looking at this problem space, it seemed easier to have each field declare how it was to be serialized, deal with all language-level data on a per-field basis, and have a fallback mechanism that makes a "best guess" for unrecognized field types.

  3. Communication (Originator->Service->Receiver)
    Once we've found all our dependencies and serialized them, we send that data to an Acquia specific service whose job it is to filter data by customer-defined criteria, and send the appropriate data to any receiving sites the customer might have on a per-filter basis.

  4. Data Collection (Receiver)
    Since only filtered data is sent to a receiver, it may not actually get all the data required to install a given piece of content; it just gets the content itself. Each piece of data is constructed with a reference to all its dependencies so that a complete list of requirements can be built before attempting to import anything.

  5. Data Import/Site Configuration/Dependency Handling (Receiver)
    Ok, this is a bunch of stuff to have all in one step, but when you have a giant list of completely different types of data with different module & configuration dependencies, you have to be flexible. In this step we start by identifying all the required modules for an entire import. Once that's complete, we check to see if those modules exist on our receiver, and bail out if they don't. If they DO exist, we enable them and start parsing through the incoming data. This is tricky too, though, because we can't create a new node without first creating the node type and adding all the fields to it. If one of those fields references taxonomy terms from a particular vocabulary, we have to make sure that vocabulary exists... etc, etc. To this end, we loop over our data set, progressively creating the dependencies based upon what we have locally available and what's required to support our incoming data (see the sketch after this list). Eventually, we'll process the entire incoming data stream, finishing with the original entity that was requested for syndication.
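
To illustrate the shape of that final loop, here's a sketch. This is not ContentHub's actual code, and the helper functions are hypothetical; it just shows the progressive "import what we can, repeat" strategy:

<?php
// Keep passing over the incoming data, importing anything whose
// dependencies already exist locally, until nothing is left.
$pending = $incoming_cdf; // All incoming entries, keyed by uuid.
while ($pending) {
  $progress = FALSE;
  foreach ($pending as $uuid => $entry) {
    if (all_dependencies_satisfied($entry)) {
      import_entity($entry);
      unset($pending[$uuid]);
      $progress = TRUE;
    }
  }
  // If a full pass imports nothing, the remaining entries reference
  // dependencies we can never satisfy, so bail out.
  if (!$progress) {
    throw new \RuntimeException('Unresolvable dependencies in import set.');
  }
}
?>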

Having outlined this basic flow, I want to contrast it with Migrate for a moment. Both Drupal 8 & 7 migrations happen on a per-entity-type basis. All the incoming node types are created before any nodes. All the users are created before nodes. All the terms, etc., before most nodes (based on what dependencies any migration might have). This is really sensible, since we tend to use Migrate for an entire site's worth of data at a time. ContentHub doesn't have this luxury, since as little as 1 item might go to a site for syndication, or as many as several thousand. Whatever the case, ContentHub has to be capable of identifying how to handle any incoming entity type, creating that entity type, and moving on to the next based upon how the dependencies are stacked in a given Drupal setup.

This approach is really powerful because it ends up acting very similarly to the Features module. Since ContentHub can syndicate any sort of entity (so long as it supports uuid... don't get me started), it can package up things like content types, views, etc. and send them to another site, complete with automated module dependency calculation. The team's common demo has been installing Umami as an originator of content, using Standard or Minimal as a receiver, and watching all the Umami configuration and content stream in to fill out a blank site. Acquia's demo team has actually been using it for setting up demo content in a limited capacity, and I'd love to help push that further by figuring out how to use our data format to uninstall data from a site (by reverse dependency encumbered-ness).

Ultimately, I'm really pleased with the efforts of the last (almost) 2 years. I think ContentHub 2.x is a really cool tool that a lot of people could use, to varying degrees, to do different compelling things. We included a couple of Drush commands with it so that people can play with it without subscribing to our service. They import and export data from manifest files, and they're really useful for demoing the product and for doing development in a completely controlled manner. If you get a chance to play with ContentHub 2.x I'd be very interested to hear from you, and if you want to ask questions, please let me know!

Jan 13 2017

I like things that work. I think most technicians do, but as a web developer I have a very serious problem. My most effective environment for doing web development is the one that exists on my own personal box. It can also be a rather impractical place to develop, because most of my customers (current and historic) are on rather customized server stacks. Typically, the host has tuned the environment to their own specifications, and it's not uncommon to find additional services like Solr or a memcache server in the mix. I work for a company that builds its own platform, but I personally have no control over what environment gets built for a customer, nor do I have any influence over where the tech stack is going at a higher level.

I'm not an Ops guy. I don't like screwing with server configuration. It makes me cranky. But I'm also not blind, and I do like artifact generation. I have a long history with what we in the Drupal community think of as "artifacts". We use a tool called drush make to build and distribute Drupal profiles (i.e. artifacts). In recent years, PHP itself has adopted this same style of approach with composer.json/lock files, as have many other programming communities. If you look at what Docker does with their Dockerfiles and things like docker-compose, these are tools that build consistent "ops artifacts"... regardless of where they're built (ish). Following these threads of commonality, I decided I wanted to build a better local development environment tool. Hopefully one that would be inclusive in nature and give communities outside the ones I've historically associated myself with an opportunity to benefit and contribute.

Hedron:

Hedron is a PHP and Docker-based tool for generating ops and development artifacts that work in tandem to give you a local development environment with the flexibility to match what exists in your hosting environment. For ops, Hedron depends on docker-compose to build out clusters of containers providing the same basic environment as your production system. For development, Hedron embeds a Drupal 8-inspired plugin system to give developers the flexibility to build their development artifacts in whatever way makes the most sense for them. I've personally spent some time on a Drupal 8 workflow because I know Drupal 8, and it works well enough for people to experiment with and begin improving.

Hedron leverages many of the tools Drupal and PHP developers are already used to. Out of the box you'll find:

  • Familiar Symfony components
  • A Drupal 8-inspired plugin system rewritten from scratch
  • A powerful set of tools for responding to git push commands
  • On demand development environments per git branch
  • Helpful CLI tool
  • Room to contribute ;-)

Hedron is broken up into two separate components today: Hedron and HedronCLI. HedronCLI is designed to ease installing and using Hedron, with built-in commands for installing the core of Hedron, adding clients and projects, and managing docker clusters per project with ease. In addition to this, my father has helped me put together an initial docker-compose build that should work well for Drupal 8, and we've made that available alongside Hedron on github. Our main website should have all the instructions you need to get started.

What you'll need:

Hedron has a few dependencies:

  • php 7.x
  • composer
  • docker (and docker-compose if they're not packaged together for you)
  • git

You'll need these 4 things installed on your local machine. Ultimately I'd like to see Hedron evolve to a point where it ONLY needs docker, but I'm not certain how best to do that today, and I'm more interested in forward movement. (Fail fast, fail early)

Caveats:

Docker-for-mac is SLOOOOOW. I mean like, really slow. This is due to an issue with osxfs shares. There are work-arounds, but I've not yet delved into what's involved, so just keep in mind that if you're on a mac, YMMV a lot depending on the newness of your machine, and as always, "patches welcome".

I hope people give Hedron a try. I've put a bit of thought into how I personally want to develop going forward, and I feel like this is moving in the right direction. I think there's a ton of potential for future improvement and expansions into other offerings based on Hedron. Give it a whirl and let me know what you think.

Nov 28 2016

This blog was originally intended as a comment on Maxime's medium post. It got long, and I am loath to create content for mega-sites. As such, I responded with a post of my own, which is exactly what Maxime did to Robert Douglass' original Facebook post... I guess we all have our competing standards ;-)

I’m sorry Maxime, I really think your premise is silly. The notion that GPL Commerce isn’t a thing because it is not yet a thing is circular reasoning.

Robert's argument resonates with me a lot because I worked for Commerce Guys for 4 years and am currently at about the 2.5 year mark with Acquia. Robert of course is the opposite... he worked for Acquia for many years and is now at Platform.sh (but was there before the split, thus Commerce Guys). I can't and won't speak for Acquia, but I will say that I understand Robert's frustration. I also understand Acquia's strategy. I'm not happy about either, but I do understand them both. Maxime, your message is basically "pack it up and go home, Drupal Commerce". This is ridiculous at best and destructive at worst. You yourself cite that DC is especially useful when commerce and content have a tight coupling... that's of course because it's damn near impossible to get this kind of behavior out of any of the other commerce solutions that exist. Clearly, that's beneficial, especially to the Drupal community.

Drupal is important… it allows for content management, strategy, architecture and display in a way that other solutions simply DO NOT. Drupal Commerce, by way of its relation to Drupal, gets ALL of these benefits for free just by being part of the ecosystem. Drupal is a literal army of web developers, most of whom would understand 80%+ of Drupal Commerce at first blush by virtue of the fact that it's built the same way Drupal is. Of course I'd love to see the engine that is Acquia pumping out more Drupal Commerce and helping to mobilize that army of Drupal developers to invest more heavily in Drupal Commerce, but I feel this may be a bit of a race condition:

Until we have a capable Drupal-specific commerce solution that can compete feature for feature with the big 3, enterprise re-platforms will be few and far between. Of course, we may not see a Drupal-specific commerce solution with that level of capability until we have some Enterprise re-platforms to help fund it.

Maybe with Migrate now in core the Drupal Commerce team could consider building Demandware, Hybris and Magento migrations to ease the pain of re-platforming and perhaps attract those clients. I'm not certain that is even possible or if it would change anything when face to face with enterprise clients invested in one of those platforms, but then, none of that is likely to end up on my list of todos regardless so I am totally spitballing at this point.

When a client comes with their commerce solution already in place and wants to do an integration... Drupal's REALLY good at that. But if a client comes to you a.) looking to re-platform their commerce or b.) without a commerce solution in place, then you have some soul searching to do. A lot of Drupal's money is generated from enterprise development these days, but I worked with Drupal in SMB for 5 years at my parent's Drupal company before joining Commerce Guys (yes, my family runs a Drupal company) and I can tell you... most of SMB doesn't have the money to do Hybris or Demandware... and even if they did, they'd consider it a complete waste of resources. This leaves Magento, WooCommerce, a smattering of PaaS/SaaS solutions and of course Drupal Commerce. Now if I'm putting a Drupal site in place... well the solution is pretty obvious. I can dismiss Woo out of hand because it's WP, I can dismiss Magento because it's non-trivial to integrate with, I can spend some time considering the various PaaS/SaaS solutions, but really only if there's a Drupal module that integrates with them already, or I can just implement Drupal Commerce, have all the configuration options my customer might need and really... move on with life. Oh and if they happen to have a non-standard commerce need, then Drupal Commerce was already the correct solution, and Drupal Commerce does traditional commerce just fine, so I'm good regardless of where this goes.

Is Drupal Commerce a magic bullet? Of course not. 4 years of experience exclusively creating DC sites and I still run into issues from time to time that are new territory for me. Integrating with Hybris, Demandware or Magento would be exponentially more fraught with stuff I've never seen and architectural weirdness... so enterprise only, please. All of that to say this: GPL Commerce is DEFINITELY a thing. Suggesting otherwise seems silly to me. None of my arguments above invalidate choosing NOT to re-platform a customer's commerce in the middle of a site re-platform. Having done simultaneous re-platforms of both before, I can say it's a daunting task. The bigger question for me is whether or not a re-platform is more or less expensive than an integration, and what the benefits to the business are in both scenarios. If integration is cheaper and endows more capability, then you should pick integration EVERY TIME.

Nov 07 2016

Drupal 8 has been out for over a year at this point. I worked extensively on helping to improve portions of core during the Drupal 8 cycle, but maintaining your own site is radically different from trying to develop the platform that site(s) will reside upon. Upgrading my blog is especially exciting for me because I was still on Drupal 6. Getting to jump directly from Drupal 6 to Drupal 8 is a pretty big win, and the fact that Drupal 8 supports this out of the box is amazing. Now granted, this is just my blog; it's not even 100 nodes. But still... most of my effort was in cleaning up some content weirdness after the migration and building a new theme. Drupal 8 did all the heavy lifting for me insofar as handling the content. That being said, I did lose a bit of my formatting, but I took a couple of shortcuts that might have introduced the problem, so I'll be watching more closely for this on the next migration I do.

Gotchas:

There were of course a couple of relatively minor "gotcha"s. If you're interested in migrating your own site to Drupal 8, you obviously need to be at the most up to date release of Drupal 6 (or 7 if that's what you're running). I was on an ancient version of 6 (6.10 I think), and amazingly the site was still intact. At some point there were some relatively minor schema changes introduced to some tables and of course, Drupal 8's migration path is expecting the tables and columns that should exist in the latest version of Drupal 6, so be certain you've upgraded to the latest release of whatever version of Drupal you're starting with.

I use Acquia's Dev Desktop for most of my local development work. This stems from my time as Developer Evangelist at Acquia, and while Dev Desktop is a good tool, it has a couple of minor issues worth noting. First of all, Dev Desktop symlinks your site name to the sites/default directory. This can be rather confusing, since you'll have both "somesite.dd" and "default" directories in your sites directory. As part of my final deployment I actually removed this link without any ill effect. Likewise, Dev Desktop doesn't document the database settings or the hash_salt in your settings.php, so you'll need to manually build your database settings on your server and retrieve the hash_salt settings from Dev Desktop. (Check for a .acquia directory in your user dir on whatever OS you happen to favor.) In addition to all of this, rather than use Drupal 8's built-in migration UI for a Drupal-to-Drupal migration, I opted for a Drush-based approach. This meant a couple extra modules (and you can find docs here), but more importantly, it meant a reliable connection string to my database. Since it was Dev Desktop, this took a bit of experimentation. I documented the solution here. Using Drush is especially favorable when you have a lot of content to migrate, but I'm also working on another migration using core's UI for the process, and that works equally well.
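
For reference, the Drush-based approach boiled down to a command along these lines once the extra modules were enabled. The flags are from memory of the migrate_upgrade module's drush command, and the port shown is Dev Desktop's default at the time, so double-check both against "drush help migrate-upgrade":

drush migrate-upgrade --legacy-db-url=mysql://drupaluser@127.0.0.1:33067/drupal6 --legacy-root=https://old.example.com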

Another wrinkle was around gist embeds. In Drupal 6 I just used the "full html" input filter and pasted the script directly into my output. It's not quite that simple in Drupal 8. Currently I've opted for the codefilter module, but that has been less than ideal to use. I'm contemplating trying the gist-embed module too, but a quick glance leaves me feeling like more effort needs to be put into both of these modules to:

  1. ensure proper identification of relevant content
  2. integrate with the core bundled ckeditor implementation

I'd really like a solution that didn't require me to provide a custom input filter any time I need to paste code, which is what I'm forced to do right now.

Improvements:

Drupal 8's media support is far and away better than any previous version of Drupal's. Granted, it's a pretty deep stack of modules and external javascript libraries at this point, but it wasn't difficult to get up and running, and once I did, it was pretty obvious I needed to rework my content to use it. Drupal 6, by comparison, had relatively little it could do, and there were some interesting approaches to handling the problem space. The one I had opted for included a hidden imagefield on the content type, which I then used to place images into my wysiwyg. Media made this so much easier that I actually dug back through all my historical content, re-embedded images via media, deleted all the imagefield contents, and finally removed the field itself. This is a really big win, and I'm hoping we can figure out how to get better media handling into core. I'd love for my next major upgrade of this site to just recognize what's already been done and adopt it.

On the topic of media handling, I'm pretty happy with focal point. Works as advertised, and made doing some cool stuff like the site header really easy.

Theming was both frustrating and enlightening. I'll blog about that separately, but in general, I'm very pleased with most of the processes involved in getting a new theme up and running. I hope the front enders in the Drupal community share that sentiment.

Final Thoughts:

Anyone as involved in a product cycle as I was with Drupal 8 probably doesn't have the right to give any sort of kudos to the product. It just seems wrong, but since none of my efforts even fringed on most of the things I discussed in this article, I feel really good saying this was a very good experience, and I'd like to thank everyone who made it possible. Upgrading the content of my blog from Drupal 6 to Drupal 8 was about a day's worth of effort excluding theme work, and was relatively painless. Drupal upgraded all my content types and field configurations, and migrated my users, content, comments and files. In general, it went off without a hitch. I have a much, much bigger Drupal 7 site I'm going to upgrade in the near future, and also a Drupal 5 to 7 to 8 migration I'm going to attempt. I'm really looking forward to trying this process again on a more complicated site. If you've got an old site that you've been putting off upgrading, I'd really encourage you to begin the process. Drupal 8 has reduced or removed most of the old pain points in doing site migrations of a certain complexity.

Apr 01 2014

It is with a mixture of bitter and sweet that I am officially announcing that I'm leaving Commerce Guys for a new position elsewhere. I have really enjoyed the last 3 (nearly 4) years at Commerce Guys. They have been an amazing place to grow both as a person and as a programmer. During my time there I've had the opportunity to work on numerous big projects and interesting technical challenges. Commerce Guys funded me to work full time on Drupal 8 as an initiative owner for months on end, and without that investment of time, I personally wouldn't have grown so much, nor would I have been able to contribute to Drupal 8 to the same degree. I cannot stress how great of an experience working there has been for me, and I'm thankful to all the people there who made that possible and made my own time there so enjoyable. I look forward to seeing them do great things.

As for my future, I am actually moving to Acquia. An interesting job position opened up there that will allow me to be the interface between Drupal's developer community and Acquia. This is especially interesting to me because it makes me part of the feedback loop that is intended to help Acquia understand what portions of Drupal developer experience are in need of improvement. In this role, I'll work in whatever capacity I have at my disposal to help mitigate these issues and improve Drupal from a developer experience perspective. In addition to this I'll function in the same capacity for various Acquia product offerings, and I find that very exciting as well.

In truth, most developers want to escape client work, and despite what you might think, that is impossible if you want to continue development. We can only strive for better clients: whether that is literally a higher-quality client, or whether you manage to make yourself your primary client, we always have clients. In many ways I think this move makes YOU my client, yes you. The whole of the Drupal community will be my singular client. I’ll interact with you at different levels, we’ll talk personally, we’ll talk corporately, we’ll interact at camps and cons. I’ll try a few things and I’ll probably fail at a few things, but I took this position because Drupal is really important, and I want to be a part of crafting its future in whatever capacity I might have available to me. This job opens that up in ways I hadn’t considered before, and that’s very exciting. Technically speaking, this is a marketing position, and I know that’s weird for a developer, but this is no mistake: I’ll just be marketing DX improvements, and gathering information about where DX is lackluster so that we can craft a better solution, together. I look forward to this role and this work, and I hope you do as well.

Kris “EclipseGc” Vanderwater

Aug 18 2013

To say I've spent a lot of time working on Drupal 8 over the last 21 months would be a bit of an understatement. The Plugin System & the Blocks & Layouts Initiative have consumed much of my professional and personal time over that period, and we've worked on a lot of really awesome and interesting stuff. That being said, the vast, vast majority of that work was still really "Drupal", and I didn't have the time to learn certain aspects of the underlying architecture we were building on in detail. I have relied on the invaluable knowledge and insights of 3 individuals in this regard, namely Larry Garfield (Crell), Sam Boyer (sdboyer) and Sebastian Siemssen (fubhy). However, their guidance could only get me so far, and at some point I needed to go learn this for myself. Starting shortly before NYC Camp, I began working through Fabien Potencier's excellent 12-part series on using various Symfony components. If you've not read through it, it's well worth the effort. The most up to date version can be found here: https://github.com/fabpot/Create-Your-Framework/tree/master/book

In the series, Fabien walks the user through many basics of Symfony's routing system and a few more complex use cases (such as http caching and phpunit tests). I had made my way through the series once before when we initially adopted Symfony, but I was a little newer to object orientation than I'd like to admit, and I also didn't have the tools at my disposal, development wise, that I have now. Coming at the series again with a lot more experience with OO and a few tools (such as Drupal's plugin system) available to me, not to mention a little bit of working knowledge I picked up with regard to Symfony's event system, I felt I could dig through what Symfony was doing and that that would help me to interpret what Drupal's doing. All in all, I think this was a very valuable exercise, and I intend to continue it with regard to various other aspects of D8 that I've had issues with. However, I'll save that for another blog.

We've had a lot of people in the community, myself included, complain about certain aspects of what's going on specifically in the routing system (amongst other areas). I can't say that I'm a fan of the entire stack (yet), but I have dissected a lot of the basic components and am coming around to Symfony's approach. What follows are some observations about this approach, which I hope will give other developers a place to start w/o having to learn this all for themselves. This will be approached w/o much in the way of Drupal-specific context; Drupal is wrapping these same concepts in a few other function calls & abstractions, and in order to keep this simple, I'll be avoiding them. The concepts all still apply.

The Class Autoloader

There's not a lot to discuss here. We have a class autoloader, and it happens VERY early in the stack. Drupal does a few things surrounding this, but for the sake of explanation, you could literally include the autoload.php file generated by Composer and, bingo, you'd have the ability to load any class in any available PSR-0 namespace by simply saying "new ClassName();". Autoloader implementation details can change, and there are many different autoloaders in the greater php world that conform to the PSR-0 spec. For the sake of following what I'm describing, I'd encourage you to seriously consider using Composer's provided autoloader.
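
For example, a bare-bones front controller really can be this small (a sketch that assumes you've run "composer install" so vendor/autoload.php exists):

<?php
use Symfony\Component\HttpFoundation\Request;

// Composer's generated autoloader registers every PSR-0 namespace it
// knows about; after this line, classes load on first use.
require_once __DIR__ . '/vendor/autoload.php';

// No explicit include needed; the autoloader finds the class for us.
$request = Request::createFromGlobals();
?>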

Composer?!?

So here I am discussing Composer, and I've not actually told you what it is. For those not in "the know", it's a dependency management tool for PHP projects. It's very similar to Drupal's .info file architecture, except it uses a composer.json file in each package to specify what dependencies that package has. It can also describe suggested packages that are not dependencies but that you might want to consider using, authorship, and various other details. Composer is run from the command line; it reads these json files, resolves and installs all the dependencies, and then generates the basic details required to get a class autoloader up and running. Drupal 8 has a composer.json file in the root directory, and there are issues on d.o to get us switched over from Symfony's autoloader class to Composer's. Many php projects outside of Drupal have adopted Composer; here are some composer-related links worth reading, followed by a minimal example file:

Drupal: Use the Composer autoloader to make everything simpler
Composer Homepage
Packagist Repository of Composer Projects
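
As a minimal illustration (the package name and requirements are invented for the example), a composer.json might look like:

{
    "name": "example/my-project",
    "require": {
        "symfony/http-foundation": "~2.3"
    },
    "autoload": {
        "psr-0": { "Example\\": "src/" }
    }
}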

The Dependency Injection Container

Also often called a "Service Container" or "DIC", this is worth talking about early on. A dependency injection container is an "Inversion of Control" technique which essentially asks you, as a developer, to document for the system as a whole how the classes you're providing are to be instantiated. Specifically, it's asking what other class instances yours might depend on in order to work. To understand this fully (if you don't already), you'll need to do a fair amount of reading, experimenting and failing. That's not to discourage you, because it's definitely a worthwhile exercise, but being realistic, you'll probably need a couple of failures before you can really begin to appreciate how to use the system appropriately. At its simplest, I'll use the example of a database query. In order to query the database through a query builder object, that object will need a database connection object first. Today we provide a bit of this through globals in Drupal, but speaking from a purist standpoint, that's suboptimal. What we really want to do is define connection objects somewhere, and then inject them into our query builders so that you can have different builders for different database connections. Objectively this makes sense; your mileage might vary a bit if you try to implement it ;-)
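
In code, that query builder example looks something like this. It's a hand-rolled sketch, not Drupal's actual classes:

<?php
// Stand-in for a real database connection class.
class Connection {}

class QueryBuilder {

  protected $connection;

  // The connection is handed in ("injected") rather than pulled from a
  // global, so different builders can target different databases.
  public function __construct(Connection $connection) {
    $this->connection = $connection;
  }

}

// The container's whole job is to remember wiring like this: ask it for
// a query builder and you get one built with the right connection.
$builder = new QueryBuilder(new Connection());
?>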

This same concept applies to many classes. You want to inject dependencies into classes and then use the dependencies that were injected. You don't want to reach out of the scope of the class into some global user space and call arbitrary functions. This is of course untrue for any raw php function, but if you were in a class, you certainly wouldn't want to be calling drupal_json_encode(), for example. Rather, if you need access to that, you should be injecting the Json class from the Drupal\Component\Utility library and making use of its method for that same purpose. Conceptually, this hopefully makes sense.

It's important to discuss the DIC because once we understand this concept, it logically follows that many of the classes we'll be dealing with regularly exist as services within it. I'd specifically like to draw your attention to the Symfony\Component\HttpKernel\HttpKernel. (If you're looking for this in Drupal, we've subclassed it as Drupal\Core\HttpKernel and it resides at the service id "http_kernel")

The HttpKernel

For the sake of simplicity, I'm going to focus on a very limited aspect of the HttpKernel, specifically the handleRaw() method & the KernelEvents::REQUEST event, but before we go there, it's worth discussing how we got here. My example index.php is dead simple:

<?php

$container = require_once __DIR__ . '/../src/container.php';

use Symfony\Component\HttpFoundation\Request;

$request = Request::createFromGlobals();
$response = $container->get('kernel.cache')->handle($request)->send();
?>

What happens here? Well, first we include the dependency injection container. For reference, the DIC in this case includes the autoloader as its first action, so in our first two steps we have an autoloader working and a container that we can begin working with. We then use Symfony's excellent Request class to get a request object from the current global settings. There's a lot of awesome stuff that happens behind the scenes here, and I'm not even going to begin dissecting it; if you're interested, though, it's well worth your effort. From there, we take the Request object and pass it directly to the kernel's handle() method. Ignore for a moment that we're getting kernel.cache from the container. That's all true, but sort of irrelevant; just focus on HttpKernel. If you happened to walk through Fabien's 12-part series I linked to earlier, you'll notice this is a little odd. In his series, we end up doing route matching before we hand off to the kernel. However, WHEN we do route matching, it's important to note that we're doing little more than populating some attributes of the request object. That object continues on into HttpKernel, which ultimately resolves the _controller that was appended during matching, or attempts to match if no match has yet been found and then resolves the _controller. Knowing this is half the battle to understanding what's going on here.

Route Matching

How we match the route, while generally important, doesn't matter for understanding the code flow. WHERE we match it is more important. Delaying matching until we're in the HttpKernel itself allows us one very, very big advantage over doing it beforehand... multiple routers. In Drupal's case, we're using the Symfony-provided Symfony\Component\HttpKernel\EventListener\RouterListener class. This class checks to see if someone else has already populated the request with a _controller. If it hasn't been populated yet, then it attempts to do so with the Matcher that was injected into it (through the DIC). What is perhaps more important here is HOW this class is invoked. Drupal's very, very used to using various info hooks, general hooks and alter hooks. To a certain degree, Symfony's Event Dispatcher covers the latter two of these use cases simultaneously.

Event Dispatching

So, if we have an event dispatcher, we can fire a unique string through it (much like our hooks) and pass an Event class to go along with it. Events are generally injected at construction with the various objects you expect anyone listening to your event to need (that might include the request object, any available response object, etc.) and have various getters for getting those instances back out. Since it's all OOP, objects are passed by reference, so this sort of ends up being like a generic hook and an alter hook simultaneously: we can perform tasks based upon the information that was passed to us, and we can also alter the information we have. Getting back to the case of our routing, we're using a very specific event for this: the KernelEvents::REQUEST event. This is defined as a class constant so that if the string changes, no one has to update stuff (so use the constant in your listeners). For routing, this event fires and passes along the request object, and the RouterListener looks at that Request object and alters it as necessary, or returns if a controller is already set.
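
The basic mechanics look like this (Symfony 2-era dispatch() signature, with an invented event name):

<?php
use Symfony\Component\EventDispatcher\Event;
use Symfony\Component\EventDispatcher\EventDispatcher;

$dispatcher = new EventDispatcher();
// Fire a unique string plus an event object; every listener registered
// for that string runs in priority order, and each can read or alter
// the event before the next one sees it.
$event = new Event();
$dispatcher->dispatch('my_module.some_event', $event);
?>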

So wait, what was the point?

Well... events have a priority associated with them (the opposite of how we think about weight in Drupal: bigger numbers happen earlier), and if an event is being used in order to perform routing (and it is), then that means we can actually hijack the entire routing mechanism before it even attempts to route, and provide our own facility for it (or fall through if our routing mechanism fails, and let Drupal keep doing what it does). In terms of your average Drupal project, perhaps this isn't super useful, but if you think of it in terms of integrating a completely different application seamlessly with your Drupal site, this holds a LOT of promise. Especially if it's an application that could make use of existing Drupal classes and methodologies. You are LITERALLY giving yourself an entry point to do whatever you want before Drupal does, and if you return a Response object here, then you're actually preventing Drupal from ever doing anything, all while having access to all of Drupal's services and non-routing-specific architecture.

Most examples I've seen of using this event in Drupal have implied that it's a replacement for hook_boot, but it's really not. Sure, we can use it when we need something at about the same bootstrap level (I want to dsm some message on every page, or whatever the use case is). But this event actually hands off the current request object, and what HttpKernel is looking to get back is a Response object (calling the setResponse() method on the event and passing a Response object through it will succeed at this). It has fallbacks in case it doesn't get a Response object back (and in Drupal's case, it never does from this event).

My specific use case involves routes as plugins, a plugin manager, and a custom listener that resolves the request through that plugin manager, ultimately loading the plugin instance and passing back the Response object it generates. But if you want to do something really simple to see how this works, you could just do:

<?php

/**
 * @file
 * Contains Drupal\routing_hijack\EventListener\RouterListener.
 */

namespace Drupal\routing_hijack\EventListener;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpFoundation\Response;

class RouterListener implements EventSubscriberInterface {

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    return array(KernelEvents::REQUEST => array('onKernelRequest', 33));
  }

  public function onKernelRequest(GetResponseEvent $event) {
    $request = $event->getRequest();
    $path = $request->getPathInfo();
    $path = trim($path, '/');
    list($first) = explode('/', $path);
    if (!in_array($first, array('admin', 'contextual', 'toolbar'))) {
      $response = new Response('this is a test');
      $event->setResponse($response);
    }
  }

}

?>

I've set the priority for this EventListener to 33. The HttpKernel in Drupal's implementation (and most Symfony implementations, as I understand it) is set to 32. That means I get an earlier priority and can respond to any request before Drupal even thinks about routing. I've packaged up the code in a drupal.org sandbox for others to play with. You can find it here: https://drupal.org/project/2068457/git-instructions

Happy Drupaling

Eclipse

PS: With all the recent "bad news" around various Drupalers and their status in our community, I thought some time and effort to explain the actual code and what's happening in it might be generally useful to the community. I know working through this and coming to understand it has certainly allayed a lot of my own fears. That's not to say we don't have any problems, but I do feel like Symfony was a very good choice and I'm beginning to feel empowered by it.

PPS: Symfony DX tip. If you're used to grepping for hook implementations, Symfony's event system can work the same way; just grep for the actual class constant that's being used to fire the event. In the case of my examples here, grepping for "KernelEvents::REQUEST" should yield some interesting results.

Mar 02 2012

In early February a gathering of developers came together in Acquia’s offices to decide the fate of the Drupal 8 initiative known as WSCCI (Web Services and Core Context Initiative). This event has been blogged about a handful of times already by Dries Buytaert, Larry “Crell” Garfield and Daniel “sun” Kudwien so I won’t rehash what has already had a lot of robust and detailed coverage. One of the conclusions from the sprint was that an entire class of problems that WSCCI was trying to solve—mainly around improving Drupal core’s layout and block system—was a large enough problem that it really deserved attention as its own proper, separate initiative.

Why a new initiative?

While rescoping WSCCI, it became obvious that there were three separate scopes into which WSCCI was creeping.

  • Web Services
  • “Frameworkification” of Drupal core
  • Page Layout

Page layout, and the various foundational work required to get there, are all well within the scope of what Drupal 8 would like to deliver. But this can’t all fall under WSCCI, and as such, a new initiative was decided upon, and I was asked to lead it.

Ok, so... what is this new initiative then?

This initiative aims to bring unity to a system of disjointed output components (blocks, page callbacks, menus, theme settings, and more) and provide a standardized mechanism of output, new tools for placing content on a page, and a potential for performance gains, amongst other benefits. Blocks as they currently exist have outlived their usefulness as a basic Drupal component. Placement of blocks is a common shortcoming of the core system, as is the use of multiple instances of a single block, which is impossible without contributed modules. A number of other very popular solutions that change the notion of how blocks work, and what can be done with them, have been around for a while (e.g. Context, Panels, Display Suite). The goal of this initiative, then, is to build a best-of-breed solution leveraging the tools and experience developed in Drupal contrib.

Currently, Drupal doesn’t treat all content equally, and there are so many different types of things on screen at the same time (page callback output, blocks, theme settings, etc.) that it can be daunting to keep up. Instead of building pages via a page callback as “primary” content, with blocks and other bits of “secondary” content decorating the page, the goal is to make any output on the page, anywhere, a block. Not a “block” as traditionally thought of in the limited Block module in Drupal 7 and below, but a “smart block” that is context-aware and utilizes per-instance block configuration. Additionally, instead of regions being theme-specific, the capability to add regions would move into the hands of the site builder. A palette of layout options will be offered to select from (3-column, grid, etc.), including the ability to nest layouts within layouts, and layouts could factor in contextual data (e.g. a different layout for articles vs. page nodes).

How do we get there?

First and foremost, I need your help! :) A lot of thought went into the steps involved in doing this long before the WSCCI sprint in Boston; being offered this position has encouraged me to really focus and prioritize that list of things into what I hope are deliverables that are practical for Drupal as a whole, and that can stand alone should the next phase of development not land in time for our Dec 1st feature freeze. What follows is an outline of sorts, first detailing the systems we need in core in order to even consider this initiative achievable, and second the work involved in delivering on this initiative.

Step 0: Dependencies outside of this initiative:

  1. Contribution to WSCCI’s foundational work with Request/Routing/Response.
    • We’re going to leverage Symfony’s Request, Routing and Response systems from the WSCCI initiative. Since HTML is just one type of REST response, once this is enabled, we will take over the HTML response system and build a new set of tools for delivering Drupal’s content. We are essentially the text/html response of web services. Step one is getting Symfony’s Request object into core.
  2. Configuration Management Initiative.
    • The new block system we are architecting depends heavily on custom configuration management objects per block instances, and without CMI, we will be completely crippled. We need to get this framework reviewed and committed to core as soon as possible. UPDATE: Committed 3/2/12
  3. Finalization of the Plugin subsystem.
    • The Plugin system is a unified way of dealing with various “pluggable” elements within Drupal: for example, blocks, swappable subsystems (e.g. the cache layer), and text formats.
    • The plugin system has been modeled after CTools, but with an OO approach. CTools’ solution for this was always very OO-inspired (it even supports OO plugins). The system is very robust, providing generic factory solutions for multiple use cases, the ability to extend as necessary, and a wrapper system for easily calling plugins of various types.
    • As soon as the initiative’s repo is set up and ready to go, plugin development will continue in a branch specifically for plugins. Until that time you can get a sneak peek in my repo, which I’ve been using as a git submodule.
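
To make the direction concrete, here is a purely speculative sketch of what a block plugin class might look like under such a system. None of these names come from core or from the initiative’s repo; they are invented for illustration, and the real interface (defining one is phase 1 below) will almost certainly differ:

  <?php

  /**
   * Hypothetical illustration only: the shape of a "smart block" plugin.
   */
  interface BlockInterface {

    /** Returns default per-instance configuration. */
    public function settings();

    /** Builds the per-instance configuration form. */
    public function form(array $form, array &$form_state);

    /** Validates the configuration form. */
    public function validate(array $form, array &$form_state);

    /** Handles configuration form submission. */
    public function submit(array $form, array &$form_state);

    /** Returns a render array for this block instance. */
    public function build();
  }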

Blocks Everywhere Phases:

  1. Create a standard interface and class specification (plugin) that works to satisfy existing needs of blocks.
    • Includes metadata about the block, configuration forms, validation, submission, rendering, and a few other things like contextual awareness.
  2. Migrate existing blocks to the new structure.
    • Includes a lot of work in the blocks administration system.
    • This is the first real checkpoint for the whole system. Until this much is done, the system is not a complete enough product to do anything.
  3. Move page elements (title, logo, primary links, etc.) to the block system.
    • There are already issues in the core queue for some of this, such as Convert primary and secondary menus to blocks. We need to decide which of these could most benefit from being worked on now, with core’s current block system, and then refactored to the new block system later, and which ones should wait until the new block system is in place.
  4. Integrate forms with the new block system.
    • Many current “page callbacks” in Drupal are drupal_get_form() with a particular $form_id passed as a “page argument”. How do we best integrate all these forms into the new block system? Should we convert each form into a block plugin class, create a single “Form” block plugin class that takes a form id as a configuration or context argument, or something else? We’ll need to do some experimentation, and this will be subjected to lots of code review and iteration based on feedback. (The single “Form” block option is sketched just after this list.)
    • Similarly, the Symfony approach relies on reflection of variable names to do a lot of its magic, and drupal_get_form() totally breaks this with its typical use case.
    • We’ll need to consider how best to make form redirects and dynamic content configurable. We’ll also need to integrate form POST handling with the WSCCI initiative (which plans to implement separate routing for POST vs. GET requests). While implementation details are still unclear at this time, the end goal is to enable our administrative tools to become plastic and re-arrangeable, allowing install profiles and distributions to provide a more custom-tailored administrative experience.
  5. Integrate non-form “page callbacks” with the new block system.
    • Some page callbacks, like node_page_default() (the list of node teasers shown on the front page in a default Drupal installation), will need to be made into first class block types, complete with their own unique configuration options (e.g., how many teasers to display).
    • For the rest, there’s an open question on whether to make all of them first class block types, or leave them similar to the functions they are now, but passed the contextual information they need to ensure their dynamic content isn’t tightly coupled to their default URL, so that custom sites and distributions can rearrange their URLs and page placement just like with any other block. I am currently biased towards the former, but the pros and cons of each of these with respect to API consistency, impact on page configuration UX, and impact on core and contrib developers still need to be discussed in more detail.
  6. New layout system
    • In order to deliver a layout administration tool, many helping hands will be required during the previous phases so that this phase, and those following it, can progress as close to simultaneously as possible.
    • Layouts are a bit of a new concept for core; what will be done here is breaking the dependency between a “theme” and its “page.tpl”. Page.tpls will largely go away in favor of reusable layout components that can be controlled and populated through the administrative interface.
    • Layouts are a lightweight plugin type intended to be easy for “designers” to create. They will contain their own JS, CSS, administrative CSS (if necessary), some sort of image representation of what they look like, a configuration XML definition (for the CMI config system), and a tpl file with their appropriate output. The objective is to remove the need for site builders to write their own HTML, and to make building custom HTML insanely simple for designers.
    • Layouts are intended to be nestable and reusable, not only in the sense that you could have multiple of the same layout on a page together with different content, but also in the sense that a pre-configured layout (for example: a header or footer) could be easily reused and nested into a container layout.
  7. Logically, a library of layouts will develop here, and things like grid or responsive layout plugins could exist, hopefully lowering the barrier to entry for getting sites that depend on these systems up and running. A strong starter library in core would be a big win.
    • As mentioned previously, a theme becomes completely disconnected from a layout, so changing the theme is really about the individual elements a theme may care about, and the CSS intended to exist for given layouts.
    • As core (and other modules) begin providing various layouts, it seems natural that themes would begin to supply CSS that corresponds to the most popular layout plugins, and/or supply their own layout plugins. Logically, layout packages begin to take the place of “base themes”.
  8. In order to set up all of this work with layouts, a new administrative system will need to come into being, one that allows default layouts for a site to be defined, and then some sort of variation on layouts based on URL and other potential criteria.
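
As an illustration of the single “Form” block plugin option from phase 4 above, here is a hypothetical, non-authoritative sketch that reuses the invented BlockInterface from the earlier sketch; the real API will be whatever the phases above produce:

  <?php

  /**
   * Hypothetical sketch: one generic block plugin that renders any form,
   * with the form id stored as per-instance configuration.
   */
  class FormBlock implements BlockInterface {

    protected $config = array();

    public function __construct(array $config) {
      $this->config = $config + $this->settings();
    }

    public function settings() {
      return array('form_id' => NULL);
    }

    public function form(array $form, array &$form_state) {
      // A real implementation would let the site builder pick the form id.
      return $form;
    }

    public function validate(array $form, array &$form_state) {}

    public function submit(array $form, array &$form_state) {
      $this->config['form_id'] = $form_state['values']['form_id'];
    }

    public function build() {
      // Delegates to the Drupal 7-era form builder; under WSCCI this would
      // route through whatever replaces drupal_get_form().
      return drupal_get_form($this->config['form_id']);
    }
  }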

Conclusion

In short, we have a ton to do, and I will need as much help as I can get from people familiar with the concepts I’m discussing here, or those who are willing to learn and help convert the existing system. My employer, Commerce Guys, has agreed to donate 20% of my time to this initiative, which is insanely generous, and I am lucky to be in a position to help push this initiative forward. If you’re interested in getting involved in this effort, we will probably work in #drupal-wscci on Freenode. I’m EclipseGc on IRC, drupal.org and Twitter. I look forward to engaging the community in this initiative. I think we have the opportunity to do something really special here.

Nov 16 2011
Nov 16

You may have noticed the nice little g+ icon on the site: I've started using a Google+ Page to deliver and organize information about the Contextual Administration module I maintain. This is an interesting new use case for me, and I wanted to share what I'm doing and why.

Disclaimer

Everyone knows that the project system on Drupal.org needs a little love. As I understand it, it's actually getting that love right now and we just haven't seen the results on the live site yet, so I eagerly await that future, and this is not meant to detract from it at all. Maintaining a module that is inherently complex, I often find that video is the best way of expressing how to utilize the module to its greatest extent. Similarly, I try to continue active development on the module, but this blog and my Twitter account are about the limit of the platform I've devised thus far to discuss what I'm doing and why. I would very much like to open a conversation about these things in a non-issue-queue area, but that has just seemed very impossible thus far. Then, last week-ish, Google+ released their new pages system, so I decided to see if it could function in some greater capacity than what I had managed with this blog thus far. I won't claim to have succeeded, but there are some key "win" points here that are worth discussing.

Blog + Share = WIN!

The ability to have content re-shared across all of Google+ is a big deal. My posts aren't getting much in the way of traction there yet, but still, people I don't know are sharing my posts with their followers, and that's encouraging.

Photo Albums = WIN!

Yeah, that seems odd at first, except that g+ allows me to organize these by album, and I can upload videos instead of photos. Organizing the videos, in order, is awesome because, while I just finished my first video series, I'm intending to start a new one shortly, and logically separating these topics by the "feature" I'm building is REALLY beneficial. The only missing thing for me is the ability to build a help system, and really drupal.org is probably a much better place for that in any event. If you want to see what I'm doing here, maybe steal some ideas, improve upon them, or share them, please just click the Google+ icon on the page.

Future?

It'd be amazing if I could build some deeper functionality between my existing blog (here) and my g+ pages. I should probably dig into the g+ APIs and see what's available, but thus far I'm very satisfied.

Nov 06 2011
Nov 06

For the past 2 seasons of camps and cons, I've been proposing some material that takes a lot of its cues from page_manager and ctools. This has been a really hard road to follow because of the complexity inherent in these tools, and as a consequence, only the camps have given me any real traction on my sessions. This has been encouraging in the sense that my camp sessions have been very well attended and have gone extraordinarily well, and discouraging in the sense that Drupalcon attendees have missed out on these sessions. In an effort to better explain what I'm trying to propose session-wise (and hopefully generate some excitement and momentum), I wanted to write this blog post with a less formal approach than a session description, and try to really nail WHY potential con attendees should be voting for my session, and why I hope track chairs will give me a chance to present regardless. I'll start with my 'baby':

Customizable Administration Tools

The upcoming presentations of this for voting purposes:

This is my intro session to my very own "Contextual Administration" module, or 'context_admin' for short. Contextual Administration is a site building super module based on the existing site building super module Page Manager (in the CTools suite). Its purpose is to free up site builders to build administrative interfaces they'd normally need a developer to build for them: things like customized node creation screens, automatic node reference handling, administrative views on pages with content creation screens, etc. You can think of it as a way to free your customers from having to learn Drupal (and a way to free you from having to teach it). In short, Contextual Administration attempts to give site builders all the tools they need to lock their users out of the monolithic Drupal administration, and instead present them with an easy-to-use administration system that is contextually relevant. This is no small task, as any developer who's attempted it will tell you, and having a tool designed to do it is really invaluable. This is the session I WANT to present to the community very, very badly. Help me vote this in. The other session:

Understanding CTools Page Manager

It's unfortunate that we can only do one solo session because, as much as I want to present on the Contextual Administration module, understanding page_manager is JUST as important, if not more so. Page Manager is the underlying technology that lends power to Contextual Administration and, MORE IMPORTANTLY, the Panels module. It is the future of Drupal 8 (or 9), and people REALLY need to start grasping its concepts, because it is influencing the direction of core even as we speak. This session really tries to dissect page_manager and illustrate how it's put together and WHY it's put together that way: how developers can hook in and start using it, etc. It's really much more developer-focused, but site builders can get a lot out of this session too, as I spend a pretty significant amount of time showing the sorts of insane things you can ONLY do with page_manager-based solutions, and try to really delve into WHY people should be using this (for both development and site building). As I said earlier, Page Manager really is a site building super tool as well, and I really just want to expose more people to its power and its potential, as I think it can really change the way people use Drupal for the better. It has certainly changed my perspectives.

Conclusion

I know this is the season of session shilling, and I am TOTALLY doing that here, but I hope as a reader of this post you can step away from the fact that I AM indeed shilling and understand that I'm doing it because I think people really need to see these modules, not JUST because I want to present. I'll be at Drupalcon regardless.

Oct 21 2011
Oct 21

Page Manager (and family... i.e. Panels) is starting to get some more traction within our community. New users are finding it, using it, and asking awesome questions about it every day. I've done my part, both on the development side and the teaching front, to help that along as much as I can, and I'm very pleased that the community is beginning to find these tools and really appreciate what they can do. With that being said, I want to peddle some of my own page_manager-based wares on those of you who might listen.

Contextual Administration 7.x-1.x RC3 is imminent. For those of you who have not used context_admin or don't know what it is, context_admin is a module that utilizes the power and flexibility of page_manager to locate administrative interfaces at arbitrary URLs. For example, if you have a news article view at /news/view (default tab), then context_admin gives you the ability to add a "Create New News Articles" link at /news/add... or an administrative VBO at /news/admin, etc. Since these are all page_manager-driven pages, you have the luxury of specifying who has access to these pages in a much more granular way than the typical 'permissions' system.

There are many great features stuffed into context_admin, including examples of how to prefill node references (à la the nodereference_url module, but with a lot more options), menu auto-population, taxonomy tree building, and much, much more. As I said, RC3 is imminent, and I just have some access plugins to test at this point.

In addition to all of this, the newest dev of context_admin has a wrapper that exposes all of its administrative plugins to Panels, to be used within the Panels interface. This is a totally experimental feature, but I've been pretty happy with the results thus far. I hope this post helps bring some more converts to the system. Please feel free to track me down in IRC or in the issue queue. I'm EclipseGc, and I spend lots of time in #drupal-contribute as well as other places.

http://drupal.org/project/context_admin

Sep 05 2011
Sep 05

Recently, there has been a LOT of discussion about the future of Drupal core and some of its rather "legacy" modules. This is a good conversation to be having; however, I think we are spending our efforts in the wrong place, or perhaps better, I think we have our priorities misaligned.

What do I mean by this?

As great as removing the cruft from core and lessening the burden on core developers would be, I think there's a more important conversation to be having here, one that sheds a completely different light on the topic. The issue we really need to be discussing is packaging, not bikeshedding about which modules should and should not be maintained by core any longer.

Why is this significant?

Packaging is a core concern for a number of reasons; the two I want to immediately highlight are the aforementioned lessening of core contributors' workload, as well as supporting other Drupal-based distributions, which already have to depend on packaging as a factor of what they do.

Lessening Core Workload

Core has a number of items that, in any other distribution, would be considered "feature" modules. These modules are VERY important as, just like in any other distribution, they characterize WHAT the standard install profile IS. I bring this up because, just as Open Atrium would be a completely different product without case tracker or documentation, so Drupal core is a different product without a module that specifically handles blogging. This is why removing blog from Drupal core is something I'm not thrilled about. It's not that we want to deliver a different product (or if we do, that's definitely NOT the conversation we're actually having); it's that we want to remove the "product features" from the plate of the core developers, and this brings us to the first "value" I think we need to be tracking.

Value #1:
It's not what we deliver, it's how we deliver it.

The tl;dr version of this is pretty simple: just because we remove the blog module from the list of things that core devs need to worry about doesn't mean that it can't or shouldn't be delivered in the standard install profile. The very real practicality is that we have hundreds of thousands of existing Drupal installs out there that will need the ability to upgrade their core-provided blog module at some point. I propose that this is best handled by a blog module packaged within the standard install profile.

Supporting Drupal Distributions

Currently, standard and minimal are the "blessed" install profiles. This is actually a good thing, as Drupal needs to be SOMETHING when you install it. What's bad about them is that they don't have to follow the same packaging rules as the rest of the Drupal-based distributions in the current ecosystem. This of course means that certain points of pain in building a distribution are largely not felt by core, and thus will never actually be fixed (or at least are significantly less likely to be). This also means that what is currently distributed as "Drupal core" is both slow and difficult to change. If core were implementing the same sort of distribution strategy as all other distributions, new profiles could be added to core once deemed strong enough and popular enough to do so. In the same way, "standard" product features would still be packaged with core, but would get the benefit of a faster potential improvement cycle by being in contrib. This brings us to values number 2 and 3.

Value #2:
All install profiles should be packaged the same way.

Value #3:
Asynchronous development cycles are good for both product and kernel.

Back to the Original Point

We're spinning our wheels bikeshedding WHAT should be removed from core when I think the real question is: "What should be packaged into our product?" In truth, Drupal's not really all that well-defined a product. There are a bunch of modules in there that no one ever turns on, and various components of commonly enabled modules that no one makes use of. But the fact that Drupal is a rather badly defined product doesn't mean that it can't still be packaged effectively, and this is why I'm concerned about the current conversations within the community. I know my values are not necessarily the same as everyone else's, but to summarize my value statements again:

Proposed Values:

1.) It's not what we deliver, it's how we deliver it.
2.) All install profiles should be packaged the same way.
3.) Asynchronous development cycles are good for both product and kernel.

These values are NOT really addressed by bikeshedding over what we're going to move out of core. Understand, I DO believe we need to be moving product features out of core; however, I also believe that people will be more likely to agree on what is a "product feature" if they have assurances that that code will continue to be distributed; it just won't require the maintenance of core contributors to do so. I am very concerned that we'll spend months debating the various components we want to remove, and then even more months figuring out how to handle their removal in an upgrade path for Drupal 8. I've been told packaging is not a concern until Drupal 9+, and as much as I admire and respect the various individuals I've had private conversations with concerning this topic, I cannot shake the feeling that packaging is the far more important problem, and that ignoring it in favor of removing code completely from what we distribute is a horrible, horrible misstep. This approach brings up many other questions, but they're important questions to answer, and I hope that this post can facilitate a larger conversation on this topic.

Kris "EclipseGc" Vanderwater

Mar 21 2010
Mar 21

If you've been following my blogs about the various administrative interfaces we've been playing with, you'll know that I've proposed the idea of building a system through which these sorts of administrations could be deployed by site administrators themselves, without them needing to understand the module development it would otherwise take. Since I started playing with this stuff, ctools has become very commonplace, and its page_manager module is perfect for doing this sort of work. With that in mind, we're announcing the Contextual Administration module.

Contextual Administration is a ctools-dependent module designed to deploy single-page interfaces that would be impractical to deploy in a Panels-type interface. It comes with a number of built-in plugins for things like node creation, user creation, and taxonomy administration. Creating your own plugins is easy (though I'm still working on the documentation for that particular feature).

Why not just use panels?

This is a completely legitimate and understandable question because, well... Panels is awesome. The reason our plugins don't just interface with Panels is that, while having node creation forms as plugins to Panels would be cool, it gets impractical pretty quickly if people start trying to utilize multiple forms together. For the sake of simplicity (and sanity) we chose to just provide our own interface to page_manager. The upside is that we get the ability to mix our variants with Panels variants, so we still get the best of both worlds, and it's simple enough that anyone can deploy a context_admin page without needing to understand everything that Panels can do.

What features come with it out of the box?

Our main focus in developing context_admin was to provide a menu-based administration system that could be deployed anywhere at any time. Personally, I use this to deploy new administrations through "tabs", or MENU_LOCAL_TASKs for the more technically inclined.

Since we've interfaced with Page Manager, these page definitions can already be exported, and will bundle into the Features module with no additional work. You can bundle them just like you would a panel.

Out of the box we currently (beta4) provide plugins to allow for:

  • node creation of node type X
  • node creation of node type X as a menu child of the parent node (requires a node/%node/* path currently)
  • node creation of node type X with an automatic node reference (requires a node/%node/* path currently)
  • taxonomy term edit
  • taxonomy vocabulary edit
  • taxonomy vocabulary term list
  • taxonomy vocabulary term add
  • taxonomy term add as sub term (requires taxonomy/term/%term/* path currently)
  • user creation (with tons of configuration here, you can auto-assign roles, status, and more)
  • Node administration of node type X via a views_bulk_operations style view (requires views_bulk_operations module)
  • Administration Sections (these can only be used in conjunction with an /admin/something path. You simply add one of these, and then add links to it through the typical menu system, and it will build up your own administration section just like "site building" or "site configuration")

We intend to do a user administration via VBO as well, but it's not in the current beta release just yet. We may also provide our own actions that don't require various user perms, to make the VBO-based administration a bit more usable without having to hand out administrative privs to users.

WHOA?! Isn't that a security issue?

Uh... "YES", yes it is a "security issue". With that said, let's cover that topic for just a moment. Any time we allow nodes to be created without the user having the perm to do so, that's a "security issue" as well. What is worth pointing out here, however, is that there's still security here; it's just up to the site administrator instead of being hardcoded by Drupal core. Drupal 7 has already been altered to work this way in a number of places, not the least of which are node creation and user creation. It's frankly sad that VBO was forced to alter its existing functionality to cripple this power to begin with.

What that really means is this: Drupal 6 utilizes a number of additional checks, over those already provided by the menu system, to determine if user X should have access to perform task Y. The menu system actually does this check already (for example, "create page nodes"): it runs a user_access() check against that perm, and if it's NOT doing so, it's because some other module has altered the menu to do something else. The node_add() function does an additional check against this same set of parameters before it will allow a user to even visit the node creation form for pages. If the menu is altered, then this blows up spectacularly (which is why context_admin actually provides its own version of node_add()). The same is largely true for user creation, except it's a lot messier there. Both of these functions have gotten some love in Drupal 7 so that the menu system is the one in control, and thankfully this means that as developers we get a LOT more flexibility out of Drupal core. It also represents a bit of a shift from a security perspective, in that we're recognizing that the menu system really should be in charge of these things. I would say the same logic goes for VBO (and context_admin). The site administrator is in charge of preventing users from visiting these administration pages, and if he doesn't, that's not actually a security issue for the module.
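
To illustrate the point that the menu system should be the gate, here is a hypothetical hook_menu() implementation; the path, permission, and module name are all made up for this example, and this is not context_admin's actual code:

  <?php

  /**
   * Implements hook_menu().
   */
  function mymodule_menu() {
    $items['news/add'] = array(
      'title' => 'Create news article',
      'page callback' => 'node_add',
      'page arguments' => array('news'),
      // The menu system runs user_access() against whatever perm the site
      // builder chooses here; swap in any perm, or a custom access callback.
      'access arguments' => array('create news content'),
      // Assumes a router item already exists at news/ for the tab to attach to.
      'type' => MENU_LOCAL_TASK,
    );
    return $items;
  }

In Drupal 7 this is the whole story: the menu system is in control. In Drupal 6, node_add() would still run its own additional check behind the menu system's back, which is exactly the blow-up described above, and why context_admin ships its own version of it.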

Why are you bringing this up?

It's sort of odd to be pointing out issues in a module I just released, but I'm afraid I need to make my argument FOR my approach before I start getting security issues filed against me. I feel VBO was unfairly singled out in this regard as well, as Views provides a perfectly good solution for access control to a view. The point is, Drupal 7 has already embraced this logic in many places, and we get a LOT more development power out of it.

Quick aside, I've been using this in production for a while and am quite happy with it.

Kris "EclipseGc" Vanderwater

Oct 10 2009
Oct 10

If you've ever used the Drupal Views module, chances are at some point you've needed to suppress any output until AFTER the user has made a selection from one of your exposed filters. Views actually DOES make this possible, but it's not exactly self-evident. I'm going to run you through a quick "howto" on this, as I'm sure many people have needed it at some point.

As I mentioned above, this is possible but not particularly self-evident. Views has a number of different "global" group items; the most common of these is probably the random sort. Within arguments, you also have another member of the global group: the global NULL argument. This is basically a way of attaching your own rudimentary argument to a view. Through the use of the default value (as custom PHP) and custom validation (again through PHP), you can cook up just about anything.

With our global NULL argument in place, the following settings are about all we need to make this really work:

1.) Provide a default argument
2.) Argument type -> Fixed entry (leave the default argument field blank, as what gets passed is irrelevant to our needs; we simply need to make it to the next level, which is validation)
3.) Choose PHP code as your validator
4.) Check through the $view->exposed_input array.  I recommend using the devel module's dsm() function here, because it will respond over the AJAX that Views is using (unlike drupal_set_message()).
5.) Set "Action to take if argument does not validate:" to "Display empty text"

You can get as fancy in step 4 as you need, but it's just down to good old PHP if statements at that point; a minimal validator might look like the sketch below.
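
For example, the validator (step 3) could be as simple as this, entered in the PHP code box (no opening <?php tag needed there). This is a sketch only, assuming Views 2 on Drupal 6, and the "empty value" test will need tweaking to match your particular exposed filters:

  // Validate only once the user has submitted a real exposed filter value.
  $view = views_get_current_view();
  foreach ((array) $view->exposed_input as $value) {
    // 'All' is the default for optional exposed filters, so skip it too.
    if ($value !== '' && $value !== 'All') {
      return TRUE;
    }
  }
  // Nothing selected yet: the argument fails and the empty text is shown.
  return FALSE;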

I hope this howto helps other people.  We've found it rather useful, and since it's sort of arcane, I wanted to share it.

Thanks to Earl Miles (merlinofchaos) for pointing me in the right direction on this one!

Oct 09 2009
Oct 09

Currently there's a pretty in-depth discussion surrounding hosting install profiles on drupal.org and what the legal (and other) implications of such a thing are. I feel this is an important issue to highlight to the community at large, and since I feel rather strongly about it, I'm using this forum to point it out, and also to focus a bit on my own feelings concerning it. For those of you who would like some context, read this post on drupal.org.

This specific issue surrounds the wonderful new possibilities revealed by the new drush make library on drupal.org. In short, drush make provides the ability to build a simple file that drush will process and from which it will build a full Drupal platform. In terms of install profiles, this could be exceptionally useful, as authors could easily build a robust platform of all the dependencies their install profile needs to work properly. The work of building the profile still has to be done, but install profiles are essentially a set of configurations for the various modules your profile specifies.

Currently, install profiles are distributed "as is" on drupal.org. There is a proposal to start providing profiles as fully built tarballs that are simply ready to be installed, and drush make would make this rather simple to do; however, there is a bit of fear that this will open up drupal.org to a large level of legal issues due to the various licenses of the software involved. This may seem like a "what?!?" at first, until you understand that drush make can actually pull software from external sources. In short, if you were inclined to have a fully functional WYSIWYG editor in your install profile without the need for custom configuration/installation, this would be totally possible. However, the community typically utilizes TinyMCE or FCKeditor, which are programs external to Drupal itself, and thus the legal issues ensue. (In fact this is just the tip of the iceberg; drush make can even apply patches posted to specific issues on drupal.org, but I digress.)
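
For a flavor of what such a file looks like, here is a rough sketch; the exact keys and syntax vary across drush make versions, and the library URL below is invented for illustration:

  ; Hypothetical example.make describing a profile's full platform.
  core = 6.x
  projects[] = drupal
  projects[] = views
  projects[] = cck
  ; External (non-drupal.org) code; this is where the licensing questions start.
  libraries[tinymce][download][type] = get
  libraries[tinymce][download][url] = http://example.com/tinymce_3.zip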

So, all the legal mumbo-jumbo is... non-trivial. And I'm certainly not trying to trivialize it; however, we HAVE to recognize that this is THE killer feature of Drupal in the years to come. If we nail this profile thing, you could literally build profiles that replicate the other platforms we compete against, and we'd all still be benefiting from the same core/modules as necessary. Want a WordPress clone? Want a Joomla! clone? Want a profile with a simple UX approach to the same core you know and love? We know that where competition thrives, so does innovation, and the Drupal 7 release, while I'm sure it will help significantly with some of our UX woes, is not the "be all, end all" of UX bliss. Make profile distribution on d.o easier and more powerful, and come up with a solution that doesn't exclude external code libraries, and we will be well on our way.

In short, fully realized install profiles are a game changer for the entire CMS market (in my opinion). They essentially reduce "Drupal" to a development platform, and leave the "content management" needs up to the profile developer. Drupal.org needs to step up and support this as much as possible. If d.o doesn't, I can guarantee that some other site WILL, and that will not benefit the community at the same level.

Sep 19 2009
Sep 19

It's about freaking time!  As a professional web development shop, sometimes your website can suffer from the "cobbler's kids" effect: namely, you're so busy working on other people's websites that you haven't taken any time for your own.  I've been working to remedy that for a little while now, but I'm always up for trying something shiny and new, so... I decided to do it in the (fairly) newly released Panels 3 module.  For those of you who have not yet tried Panels, I can offer no stronger encouragement than to say it is the best thing since CCK and Views.  I've not yet had the pleasure of exporting panels into their own module (think Views export), but the maintenance possibilities here are very exciting.

I can tell you that the vast majority of the public-facing portions of the site are built entirely in Panels.  The front page is its own special layout, as are the internal pages.  Panels has made creating your own layout plugins incredibly simple (never mind the flexible layout that's included in core).  I've tried to donate back what knowledge I've gained as I've done this through a couple of patches (some documentation for Panels, and the footer message page element for ctools).  It's sad to me to see Panels show up so late in the D6 cycle, but despite that, it's so promising with the various features and powers it provides that this alone could revolutionize the way you build Drupal sites.  Panels has ACTUALLY simplified my theme.  The ability to represent the layouts I wanted within Panels, without the need to build a page-front.tpl.php and a page-blog.tpl.php etc., was really empowering.  I utilized Zen for this version of our look (which we'll be updating soon), and I actually simplified Zen's default page tpl file by removing $title, $tabs, $footer_message and a few more things.  True, I had to re-configure these things within Panels, but the simplicity of doing so, and the flexibility of the end product, made it well worth the effort.

There were a number of SEO wins as well.  The primary win concerned node titles and h1/h2 issues.  There are a number of ways to solve this particular problem, but as I mentioned before, I removed $title from the page tpl file in Zen.  This is because Panels allows me to override the individual node's title (with nothing, in this case) and then have it displayed in the page title (h1) instead of the typical node title (h2).  This is great for singular nodes; again, Panels simplified any alterations I might normally have made to my theme layer by providing the options I wanted right out of the box.

The removal of tabs from the page tpl was an administrative aesthetic choice.  I didn't want my tabs over the entire panel; I wanted them in with the content to which they pertained.  Again, the Panels tabs page element provided this right out of the box (you need to be running the dev version of ctools to get more than one page element on screen at the same time).  In addition to this, I wanted my footer message as a page element, but it wasn't part of ctools' plugins yet.  Merlinofchaos has made it very easy to add new plugins of these types.  With just a little bit of work, I had my own footer_message plugin built very quickly.  It worked exactly as I had hoped, and allowed me to build a great footer minipanel.

Minipanels really are a wonderful tool as well.  I built an additional layout style specifically for my minipanels so that I could utilize them with the most efficient markup possible.  Panels, by its nature, adds a little bit of markup to whatever you may be doing layout-wise, but most of this is helper markup for node edit links and things of that type.  However, if I know for certain that the content I'm dealing with is going to end up in a div of its own at the end of the day, having minimal (or no) markup surrounding it, but still having the ability to group it together, is very powerful.  For the footer I built an "empty" layout that was literally one region with no markup (see the sketch below).  This meant that utilizing minipanels to build up groupings of content that would be the same from one page to the next made a great deal of sense.  With that in place, I built my footer once, and simply included it into the same region of the various panels again and again.
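
The template for that "empty" layout is about as small as a layout can get. Roughly (the file and region names here are mine, and the small definition file that accompanies a layout template varies by Panels/ctools version, so it's omitted):

  <?php
  // panels-empty.tpl.php (hypothetical name): one region, zero wrapper markup.
  // Panels hands layout templates a $content array keyed by region name.
  print $content['middle'];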

In all, Panels is an amazing tool.  I will certainly be utilizing it more.  The new APIs it provides are very empowering, and can simplify your maintenance needs pretty significantly.  If you haven't tried Panels yet, I'd heavily encourage you to do so.
