Nov 07 2018

Determining how content on your site should be cached isn't a simple topic. Last time, I covered cache contexts and tags. Today, I'd like to get into a couple more advanced topics: The use of custom cache tags and of max-age.

Custom Cache Tags

Drupal's built-in selection of cache tags is large, and some contributed modules add additional tags appropriate to what they do, but for really refined control you might want to create your own custom cache tags. This is, surprisingly, quite easy. In fact, you don't have to do anything in particular to create the tag – you just have to start using it and, somewhere in your code, invalidate it:
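For instance, invalidating a tag is a single call to Drupal's cache API. A minimal sketch (the tag name here is just an illustration, not a tag Drupal defines):

```php
<?php

use Drupal\Core\Cache\Cache;

// Invalidating a tag is all it takes to "create" it. Any cached item
// that lists this tag will be rebuilt the next time it's requested.
Cache::invalidateTags(['my_module:article_date_published']);
```

The tag itself gets attached wherever it's needed, such as in a render array's `#cache` property.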


As an example, to continue the scenario from part one about a page showing recent articles, there is one thing about this page that the tags we've already looked at don't quite cover. What if a new article gets created with a publish date that should be shown on your page? Or, maybe an article which isn't currently displayed has its publish date updated, and now it should start showing up? It's impractical to include a node-specific tag for every article that might possibly have to show up on your page, especially since those articles might not exist yet. But we do want the page to update to show new articles when appropriate.

The solution? A custom cache tag. The name of the tag doesn't matter much, but might be something such as my_module:article_date_published. That tag could be added on the page, and it could be invalidated (using the function above) in a node_insert hook for articles and in a node_update anytime that the Date Published field on an article gets changed. This might invalidate the cached version of your page a little more frequently than is strictly necessary (such as when an article's publish date gets changed to something that still isn't recent enough to have it show up on your custom page), but it certainly shouldn't miss any such updates.
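As a sketch of those hooks (the module name and the `field_date_published` field are hypothetical, so adjust to your own setup), the invalidation might look like this:

```php
<?php

use Drupal\Core\Cache\Cache;
use Drupal\node\NodeInterface;

/**
 * Implements hook_ENTITY_TYPE_insert() for node entities.
 */
function my_module_node_insert(NodeInterface $node) {
  if ($node->bundle() === 'article') {
    Cache::invalidateTags(['my_module:article_date_published']);
  }
}

/**
 * Implements hook_ENTITY_TYPE_update() for node entities.
 */
function my_module_node_update(NodeInterface $node) {
  // $node->original holds the entity as it was before this save, so we
  // can detect whether the publish date actually changed.
  if ($node->bundle() === 'article'
    && $node->field_date_published->value !== $node->original->field_date_published->value) {
    Cache::invalidateTags(['my_module:article_date_published']);
  }
}
```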

This is a simple example of a custom cache tag, but they can be used in many other situations as well. The key is to figure out the conditions under which your content needs to be invalidated, and then invalidate an appropriate custom tag whenever those conditions are met.

Rules for Cache Max-Age

Setting a maximum time for something to be cached is sort of a fallback solution – it's useful in situations where contexts and tags just can't quite accomplish what you need, but it should generally be avoided when possible. As I mentioned in a previous article, a great example of this is content which is shown on your site but which gets retrieved from a remote web service. Your site won't automatically know when the content on the remote site gets updated, but by setting a max-age of 1 hour on your caching of that content, you can be sure your site is never more than an hour out of date. This isn't ideal in cases where you need up-to-the-minute accuracy in the data from the web service, but in most scenarios some amount of potential "delay" in your site updating is perfectly acceptable, and whether that delay can be a full day or as short as a few minutes, caching for that time is better than not caching at all.
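In a render array, max-age is expressed in seconds under the `#cache` property, alongside any contexts and tags. A minimal sketch (the `$remote_html` variable is a placeholder for whatever you fetched from the web service):

```php
$build['remote_content'] = [
  '#markup' => $remote_html,
  '#cache' => [
    // Keep this render array for at most one hour. Contexts and tags
    // can be combined with max-age in the same #cache property.
    'max-age' => 3600,
  ],
];
```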

However, there is one big caveat to using max-age: It isn't directly compatible with the Internal Page Cache module that caches entire pages for anonymous users. Cache contexts and tags "bubble up" to the page cache, but max-age doesn't. The Internal Page Cache module just completely ignores the max-age set on any parts of the page. There is an existing issue about potentially changing this, but until that happens, it's something that you'll want to account for in your cache handling.

For instance, maybe you have a block that you want to have cached for 15 minutes. Setting a max-age on that block will work fine for authenticated users, but the Internal Page Cache will ignore this setting and, essentially, cause the block to be cached permanently on any page it gets shown on to an anonymous user. That probably isn't what you actually want it to do.

You have a few options in this case.

First, you could choose to not cache the pages containing that block at all (using the "kill switch" noted in part one). This means you wouldn't get any benefit from using max-age, and would in fact negate all caching on that page, but it would guarantee that your content wouldn't get out of date. As with any use of the "kill switch," however, this should be a last resort.

Second, you could turn off the Internal Page Cache module. Unfortunately, it doesn't seem to be possible to disable it on a page-by-page basis (if you know a way, please drop us a line and we'll update this post), but if most of your pages need to use a max-age, this may be a decent option. Even with this module disabled, the Internal Dynamic Page Cache will cache the individual pieces of your page and give you some caching benefits for anonymous users, even if it can't do as much as both modules together.

My preferred option for this is actually to not use a max-age at all and to instead create a custom, time-based cache tag. For instance, instead of setting a max-age of 1 hour, you might create a custom cache tag of "time:hourly", and then set up a cron task to invalidate that tag every hour. This isn't quite the same as a max-age (a max-age would expire 1 hour after the content gets cached, while this tag would be invalidated every hour on the hour) but the caching benefits end up being similar, and it works for anonymous users.
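A minimal version of that cron-based approach might look like the following (using Drupal's state API to track the last invalidation; the module and state key names are illustrative, and this assumes cron runs at least hourly):

```php
<?php

use Drupal\Core\Cache\Cache;

/**
 * Implements hook_cron().
 */
function my_module_cron() {
  $state = \Drupal::state();
  $last = $state->get('my_module.time_hourly_last', 0);
  $now = \Drupal::time()->getRequestTime();
  // If at least an hour has passed, invalidate everything tagged
  // with "time:hourly" and record when we did it.
  if ($now - $last >= 3600) {
    Cache::invalidateTags(['time:hourly']);
    $state->set('my_module.time_hourly_last', $now);
  }
}
```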

Up Next

Now that we've gotten an overview of how to determine what rules you should use to cache content on your site, it's time to get a little bit more technical. Next time, I'll be taking a look at how Drupal stores and retrieves cached data, which can be immensely useful to understanding why the cache works the way it does, and it's also quite helpful to know when fixing any caching-related bugs you might encounter. Watch this blog for all the details!


Nov 07 2018


It is our pleasure to welcome once again Tess Flynn, TEN7's DevOps Engineer and DrupalCamp ambassador, to discuss the 2018 DrupalCamp Ottawa.

Here's what we're discussing in this podcast:

  • 2018 DrupalCamp Ottawa
  • Minnesota maple syrup
  • Camp format
  • Ottawa's move to Drupal open source
  • Award for travelling the farthest to attend
  • Camp without BOFs
  • Drupal 101
  • Keynote: “Building Accessible Experiences”
  • Accessibility is a core aspect of the entire design experience
  • Socketwench presents: "Healthcheck your site!"
  • Building software as a service
  • Privacy laws differences between Canada and the US


IVAN STEGIC: Hey Everyone! You’re listening to the TEN7 Podcast, where we get together every fortnight, and sometimes more often, to talk about technology, business and the humans in it. I am your host Ivan Stegic. In this episode of the Podcast, we’re talking to Tess Flynn about her visit to DrupalCamp Ottawa 2018, that happened on Friday, October 26. Tess, welcome back to the Podcast.

TESS FLYNN: Could you even use fortnight now? Isn’t that copyrighted?

IVAN: (laughing) Well, it’s spelled differently, so I think we might be ok. Yea, good point though. Let’s see, DrupalCamp Ottawa. You just got back from Canada. Did you bring back any maple syrup?

TESS: I did, but the problem is, that some of the maple syrup we get here locally actually tastes a bit better than the kind you get from the touristy travel shops that you get in Canada.

IVAN: Yea, we’re a little spoiled in Minnesota with maple syrup, I agree. So, DrupalCamp Ottawa is a little different in format than DrupalCorn that we talked about last. It’s one day of Camp, it’s a Friday, so 25% the length of the other Camps. How did that feel compared to the extended four days that we talked about last time?

TESS: I think that it actually felt rather appropriate. Mostly because you can’t really talk about this Camp without mentioning the fact it is doing head on comparison competition with BADCamp.

IVAN: Oh, that’s right. I’d forgotten that BADCamp was at the same time. What’s the format for BADCamp?

TESS: BADCamp’s a little bit more like TCDrupal. There’s a day of training, then two days of sessions, then a day of contributions.

IVAN: Do you think that affected attendance in Ottawa?

TESS: Well, I actually was wondering about this, as well. The question is whether or not, if you actually had the choice between the two, you would go to one or the other. And I think that’s kind of a false dichotomy, because from another perspective Ottawa is in a completely different country. Even though it’s not very far from Minnesota, at the same time it is technically a different country. So, there are reasons to actually choose a date that even coincides with one of the biggest regional Camps in the United States.

IVAN: And it’s also on the complete opposite end of the Continent as well.

TESS: Yea, it’s on the Eastern time zone.

IVAN: And, how large was DrupalCamp Ottawa, in terms of number of people? Just sheer attendance. Just a guess.

TESS: They said that about 250 people registered. Some of those were going to be sponsors, and a fairly typical pattern is that they’ll register more people than actually show up. So, I would probably guess maybe 175 at least, probably more like 200 and change.

IVAN: Wow, that’s a whole lot for regional Camp and only one day of programming.

TESS: Well, you know, it’s that other country factor, and there’s a lot to really unpack there, because it’s not just a DrupalCamp somewhere else. There are specific regional concerns that go along with having a DrupalCamp in Canada and using Drupal in Canada.

IVAN: So, let’s talk about that a little bit. Would you guess that most of the attendees were from Ottawa and from Ontario?

TESS: I would probably say so, because Ottawa, from what I recall, is the capital. So, there’s a lot of government in Ottawa. A lot. And, Ottawa is trying to pivot towards doing more Drupal open source, and more open source in general. So, the idea that a lot of people would attend this Camp to get more open source information makes perfect sense. And, to put it in the same city that a lot of people work in, also makes a lot of sense.

IVAN: It does make a lot of sense. Now, I heard you received a special award.

TESS: (laughing) There was kind of a joke about that. As a Camp speaker, there’s always kind of a little bit of a joke about, if you were the farthest one to attend the Camp. And, from my knowledge, I might’ve been one of the few Americans to attend the entire Camp, and probably the only one that really needed to take a flight to get there.

IVAN: (laughing) What was the prize? Or was it just a proverbial pat on the back?

TESS: It was more like, “oh, really, I am the furthest away one. Oh, that’s nice.” That was it. (laughing)

IVAN: (laughing) Now, I looked at the schedule and it looked like it was broken up into three tracks for the day, and it loosely seemed to be something along the lines of front end, back end and everything else. And, everything else was kind of like business, strategy, communications, content, which kind of makes sense. Did I get that right? Was that more or less how it was?

TESS: It certainly felt like that. I mean with only one day of Camp, and only about four different session periods, there’s not really that much need to break it up along too many different functional lines. There’s only so many slots available.

IVAN: And no BOFs from what I could tell.

TESS: No. I don’t think they had the room available at the venue in order to do that, but they might have.

IVAN: I see. Nice segue into the location of where the event was, it was at the University of Ottawa. The website says the SITE Building. Could you tell me more about the space?

TESS: That place has just got such an interesting personality. How can I explain this? Like if someone took material design and construction aesthetic and mashed them together, you get this combination of bright colors and metals and all sorts of interesting things. It was really, really, a nifty little venue. It was very visually interesting. And, because the Camp wasn’t particularly big, everything was in one building, so it was very easy to find everything.

IVAN: So, three rooms, all in one building. I would assume lunch in a central place, as we’ve come to expect?

TESS: That’s correct.

IVAN: Right. That’s great. That seems to make quite a cozy atmosphere for attendees. I really like those, when they’re all close together and bunched up. Let’s talk a little about the pre-keynote. It looked like there was a session on the schedule called Drupal 101, that seemed to be very inviting for beginners, kind of before the keynote happens, if you’re new to Drupal, not sure what a node is. The description says, “bring your coffee and get a quick course in Drupal terminology.” I love this idea of kind of giving an intro before the festivities or the keynote begins.

TESS: Yea, I rather liked how that went because it provides a nice bit of framing, that would’ve otherwise been taken out by a training session on the first day of a multi-day Camp. And, I think it was a nice compromise in order to allow people who have heard about this Drupal thing, and then get a nice introduction, so that they can get value out of the Camp. And, because the Camp was on a Friday, some people might be attending this on their work hours.

IVAN: Yea, I think that’s a great welcoming idea. It would be interesting to talk to the organizers to hear what their take on the motivation behind that was. So then, that rolled into the keynote and the keynote was titled, “Building Accessible Experiences.” And, it was from the developer advocacy lead at Shopify. I can’t pronounce Tiffany’s last name, I’m going to try, Tiffany Tse. Any ideas if I got close, Tess?

TESS: No, I don’t think the coffee had quite kicked in, and I think I barely missed her last name too. So, I can’t quite remember the pronunciation either.

IVAN: (laughing) We’re sorry Tiffany, if you’re listening. Call us and let us know how we did. Yea, so Shopify, first of all, I love the fact that the keynote was from someone outside of the Drupal ecosystem.

TESS: I just really appreciated this particular keynote. A lot of keynotes lately, including one that I gave myself, tended to be a lot more broad-reaching, a lot more big ideas and directions and business policies. And this one was a lot more down to earth, a lot more practical, really put you into the pilot seat of, “okay, you’re going to be an accessibility designer. What’s wrong with this?” And, it was just a wonderful experience, because it really sat you down and made you think about what you were looking at, and it was nice to do that as the first thing in a Camp, because it felt very direct.

IVAN: Glad to hear it. So, what do you think your major takeaway from the keynote was?

TESS: Well, I think the general message that I took away from it was that accessibility is not something that you can just bolt on later. It is a core aspect of the entire design experience, and you should consider it very carefully from the very beginning, because a site can be a lot more versatile than say an application can be. And, it has a lot more audiences, and a lot more modalities in which that, it is presented to different users. And, it was really, really, well communicated.

IVAN: And, further to that, the thing that I always want to try to remind everyone we’re working with, and the people that we help with our sites is, not only is accessibility important to think about from the design aspect and right from the beginning, but it doesn’t stop after you’ve launched a site. It’s something that continues, that all members of the team that are responsible for the site have to be aware of and continue to build on. It’s not something that you just launch as a feature, and you’re done. So, I’m glad to hear that was a good keynote. And, it looks like your session was directly after the keynote, in the same room (laughing). So, did you luck out and have a whole lot of people stay?

TESS: I apparently did have a lot of people staying for that session. I was kind of surprised, actually, about the number of people that attended. I think it was some 50 people that I counted right before I started. And, I know that some people came in after I got started as well, that I didn’t get a chance to count.

IVAN: And you gave away all the TEN7 swag at your session.

TESS: Yea. We were running a little bit late because the keynote ran a little bit long. So, when I first set up, I basically put everything out, and anyone who was an early bird I said, “here, come take. Don’t make me take this back through American security.”

IVAN: (laughing) Yea, we were a little light on swag at this Camp, because of the fact that you were traveling internationally. But, I’m sure we had enough to make some people happy there.

TESS: It all vanished anyway.

IVAN: Yea, that’s what we want. Any particularly interesting questions that came up in your session, that maybe you haven’t heard before?

TESS: So, the thing with my sessions is that very rarely do people actually come up with questions, because once I tend to get started, it’s really hard to get a question in edgewise, because I just have (laughing) such a presentation that is just a firehose of nonstop rambling for almost an hour. And it’s really hard for people to just stop and ask questions. Sometimes people do, but my sessions tend not to get a lot of questions.

IVAN: I think you do a great job of explaining things so clearly with analogies and with detail that, that’s maybe why there aren’t any questions. I certainly appreciate attending those. So, just looking at the other sessions on the schedule, a few that piqued my interest, “The New Face of CiviCRM.” CiviCRM still makes me a little scared, so I’m glad that there’s a new face. “Building Software as a Service in Drupal,” another session I thought was something I might have attended had I been at the Camp. And then, “Drupal as the Base of an Inclusive Workplace,” which was Mike Gifford’s session. It’s an interesting idea. I kind of read the description of the session, the fact that Drupal is largely still known as a CMS, and people really don’t realize that it’s much more than that, especially when you think about accessibility and user experience. You went to that session, right?

TESS: I did. But I went in with the expectation, because I didn’t read the description very well, that it was going to be a little bit more culturally focused, and how to build a more diverse team as a result of using Drupal. And, so, when they started going on the technical merits I was like, “Ahhh",  and it’s totally my fault, I didn’t read the session description very well.

IVAN: Oh. So, what was your takeaway then from that session?

TESS: A lot of it reminded me of the keynote, but it also kept pointing out one thing that was really important is that, accessibility doesn’t just benefit those who are disabled, because accessibility is not just going to be for those who have a permanent disability, but a temporal or situational disability as well. And, there was a lot of focus on bringing that into the conversation as well.

IVAN: Mike does a great job of being inclusive, and I imagine that was a wonderful session to attend. Did you go to the “Building Software as a Service on Drupal” session?

TESS: I did go to that one. I also, kind of was hoping this one was going to be a little bit more business focused. It actually was mostly a technical discussion about how to use Aegir, which has been around for the better part of 10 years in Drupal circles, and is still going, and is still a method to provide a Drupal solution as a software as a service. And, the next version of Aegir is supposed to finally support more than just Drupal, and virtually any php application, and possibly any web application that can be deployed.

IVAN: So that’s how you say it?

TESS: What? Software as a service?

IVAN: No. Aegir. I always wondered about that.

TESS: No, I only remember that because I think I listened to. Was it a Drupal easy podcast, like years ago, half a decade ago, about Aegir? And that was like one of the first things that they were going to talk about was, “how do you pronounce this? It’s got a diphthong in it, why?”

IVAN: (laughing) I want to spend some time talking about this building software as a service session. So, from what I understand, Aegir’s basically a way for you to host your own site, and maybe even sell hosting to others as a service, particularly just Drupal sites. And you said that it would, in the next version, be supporting more than just Drupal sites, but PHP applications as well. Is this the basis for Pantheon? Is this where Pantheon started? I have no idea. How is it similar or different to Pantheon?

TESS: I don’t know if Aegir was actually used in Pantheon at the beginning. I do know that they were using their own home brewed containerized solution, possibly using Xen or KVM at some point, and that they recently transitioned to Google Kubernetes engine in order to run most of their container systems. And primarily the product that they have is a web front end and a pricing tier in order to better leverage all of that usage. And, I’m not sure if they ever really utilized Aegir for that or not.

IVAN: It looked like this was a session that was more in the style of a BOF, the way the description was written. It felt like it was going to be more discussion oriented. Did that turn out to be the case?

TESS: It did turn out to be the case. I was really hoping for a lot more perspective from the business perspective, because it felt like it was very technically focused, very capability focused, as in Aegir can do this, Aegir can do this, this is how you do this. Yes, you can run it on your own hardware, why would you want to do that? And, this is where one of the key things that I took away from the entire Camp started sitting in my mind, is, that, because I’m not in the West, there are different concerns for hosting, and a lot of Canadian companies do not want to rely on any US hosting. And I cannot blame them, considering our utterly lackadaisical privacy laws. And, I’m being generous when I describe it that way.

IVAN: So, what turned out to be the options for Canadian companies for doing hosting, if they’re not going to rely on US technology?

TESS: Well, I think that AWS is now involved, but that’s still a company that’s technically owned and operated from the US, and that might not be as comfortable for people. I actually haven’t had enough time yet, to really investigate the hosting market in Canada. It feels like it needs more development, honestly, is my initial impression. I could be wrong about that. I can guess that there’s probably a lot of on-premise hosting, but not nearly enough, like cloud-based hosting. And, there might be a lot of shared hosting, as well, that is used by a lot of smaller sites. But I’m really concerned that there’s just not enough cloud hosting, that is also hosted in Canada, in order to make sure that the privacy laws still apply, that the local/regional laws still apply, and that these are actually utilized for Canadian sites. And, this may be a hollow argument if a lot of the Drupal market share is government, because they’ll be more likely to self-host than use cloud products. Although, that made me think the following day, why isn’t it that the Canadian government itself, doesn’t form a wholly-owned and operated company that does nothing but hosting an infrastructure providing in a cloud facility. They’ve got to have more than one data center under their ownership already.

IVAN: Yea, that’s a good point. It seems like a market opportunity, that a company like Pantheon or Acquia could certainly take advantage of. But then at the same time, they’re a US company that are operating in Canada, and so, maybe there’s a Pantheon Canada that gets formed, or a company that’s run and operated in Canada by similar or related people to the same US company, and yet they have their own privacy standards and use privacy protocols that are acceptable to the Canadian laws. I think Google has GKE zones that are available in Canada, so in theory, you could potentially do that. I suppose.

TESS: Yea, I think there probably are some GKE zones in Canada as well. I have to look into that to be sure.

IVAN: Maybe we should start a hosting company, Tess.

TESS: I’m all for that.

IVAN: (laughing)

TESS: Ottawa isn’t bad, but I like Toronto more. (laughing)

IVAN: (laughing) Ok. We could be wherever you want.

TESS: This is a thing of mine though. When I used to do business travel a lot, I noticed that I tend to get immediate impressions of places that I touchdown in. It’s really weird, because it doesn’t seem to make any logical sense to me either. But Toronto had a very familiar vibe to what I’m used to in Minneapolis, but there were certain rounded corners that I didn’t have from the same vibe in Minneapolis. And those were probably where a lot more of a Canadian cultural vibe was poking out. And, Ottawa felt very similar to that, but a slower pace. It’s a little bit hard to describe. I wanted to describe it using a music analogy. So, the thing that pops to my mind is that, there is a video game called Undertale that has been around for a while, and towards the end there’s this one area of the game that has very upbeat, fast paced music. But if you take an alternate story path in that game, that same music plays, but in a very slow, lumbering pace instead. And, I didn’t get that exact feeling, but it definitely made me think, “wow, this is the same song, but it’s slightly slower. That’s interesting.” (laughing) I know this is all ridiculously subjective, but that was something that just kept coming up when I was there.

IVAN: It felt familial and accessible I would argue. So, no direct flights to Ottawa from Minneapolis. Did you fly through Toronto?

TESS: I did fly through Toronto, and that was actually fairly easy. It’s only a two-hour flight from MSP and you could get a direct. The only problem is, once you’re in Toronto, you have to catch a one hour connecting flight to Ottawa. Now, I didn’t have any problems going to Ottawa, but coming back I just kept getting hit with delay after delay, and it was a little bit frustrating, because we left probably 15 minutes from Ottawa, from when we were supposed to. I didn’t mind so much, because I had enough time to account for the difference. But then once I landed in Ottawa, I failed to remember that the international security procedures had changed since I last traveled internationally. And now you have to go through international customs as an American citizen on the international side, rather than the US side. And that was a Kafkaesque experience to say the least. I felt like I was reenacting the movie Brazil a little bit there.

IVAN: (laughing) Yea, we had that same experience flying through Toronto on the way back from Europe this year, and, it actually made me think of, kind of what laws apply on the Canadian side after you’ve cleared US customs. I know that it’s US law that applies, but that just feels wrong.

TESS: Someone explained to me that the Canadian transit agency, their equivalent of the TSA, is actually a superset of TSA law, which just makes me go, “oh geez.” (laughing) And TSA law, if you know even a little bit about it, is already this nightmarish labyrinth of weird edge cases, and political meddling, and none of it makes any sense anymore, and it hasn’t since about 2007 honestly.

IVAN: Yea, it was pretty insane, certainly Kafkaesque as you said, going and clearing customs in Canada for the US, and then physically being in Canada, but technically being in the US after you’ve done that.

TESS: Well, the real hilarious part is the nature of how this works in the Toronto airport. When you actually go through Canadian security at the Toronto airport, and you first get cleared, you’re opened into this wide foyer and it’s got this giant flower sculpture thing, and the underside of each petal is actually your arrival and departure time screens. It’s really nice. And then afterwards I had to walk through there to go to a completely different concourse, and when you get to that concourse you have to go through security again, then you have to go through customs again, then you have to go through the customs waiting area, because they won’t let you go directly to your gate. Then after you go there then you walk to your gate, and by the time I got all the way to the end of my gate, I was on the other side of a glass window, and on the other side of that glass window that I was looking right through, was the giant flower. And that took an hour and a half.

IVAN: (laughing) Wow. Well, I think we have a little more time to talk about the other session that kind of piqued my interest, and you also I think went to it, was the “Journey through the Solr System.” And, the only reason it piqued my interest was because I thought the title of the session was amazing.

TESS: The slides were also great too. They had a really nice visual style that I really appreciated. It made it very fun, but at the same time it focused on information. And the talk itself was also different than I expected. Now, usually when you think of Solr and Drupal, you’re going to think of, well you’re probably going to use a search API implementation, and it’s going to be one site, and you’re going to configure which entities that you’re going to have going to which in that system, and then you’ll use views in order to make your search pages and yada, yada, yada. Well, they couldn’t do that with this solution. The problem is that they have some two hundred different sites, and they had to have a unified singular search mechanism. And it wasn’t a multi-site either, so you couldn’t kind of cheat and use some of that facility in order to populate a single index. So, either they had to come up with a completely custom solution in which any time content was posted for each individual site it went back to a standard search API server, or they’d have to do something completely different. What they used to use, they used to use a Google search appliance, and this was great because it was on premises, all of the data was local, they owned it. And then, suddenly, those yellow boxes stopped arriving from Google because Google deprecated the entire product line. Now you have to forward all of your search index information to some American server, and this is not comfortable for some people, and that is perfectly fair. 
So, they could’ve paid for a different solution, or they could’ve went, “well, we’ll just risk the privacy implications,” but instead they decided, “you know what, let’s see if we can try to build one of these ourselves.” So, the solution they came up with was, a high availability Solr configuration with an open source web crawler called Nutch, and it was just a fascinating combination of elements to make, basically, your own Google, but within your own organization, for your own sites, without having to have a direct backend connection.

IVAN: Nice. I really love that name, Nutch.

TESS: That was a really, really fascinating talk, and I wish that I could’ve captured more of the technical details of that, but I was coming right off of doing my session, so I still had a lot of adrenaline in me. (laughing)

IVAN: Yea, and I’m sure that the session video will be posted once it’s available. Yea let’s talk about that a little bit, and then I think we’ll wrap. So, it looks like there were sessions that were recorded again, courtesy of Kevin Thull and his equipment.

TESS: Well, not quite.

IVAN: Not quite?

TESS: Not quite. Kevin Thull was not there, he was at BADCamp.

IVAN: Oh. But his equipment was there.

TESS: Well, from my understanding what happened is that Kevin Thull trained the DrupalCamp Ottawa staff, and provided them a list of the hardware that he uses for his talks. So, they reimplemented all of that under his guidance, and then ran it themselves, independently. So, it was a very familiar experience. Everyone had the big red button that they had to press. So it was very, very familiar. I do know that they had a few gotchas with the session recording, but they generally had a fairly good capture rate.

IVAN: That’s wonderful. I do see that on the DrupalCamp Ottawa website they published a playlist on YouTube, and I think there are about six videos on there right now, six sessions that are currently available with the note that they’ll be adding the rest of the sessions in the coming week or so. So that’s great. We’re going to have a recording of your session, and you could probably go back to the Solr session as well and check the details of that one out as well. Well, all in all, a good Camp. Something that maybe I’ll consider going to next year, and maybe we’ll send you again next year. Tess, thank you so much for spending your time with me and talking through DrupalCamp Ottawa 2018.

TESS: No problem.

IVAN: You’ve been listening to the TEN7 Podcast. Find us online at And if you have a second, do send us a message. We love hearing from you. Our email address is [email protected]. Until next time, this is Ivan Stegic. Thank you for listening.

Nov 07 2018
Create Charts in Drupal 8 with Views

There are many ways to present data to your readers. One example would be a table or a list. Sometimes, though, the best approach is to show data on a chart.

A chart can make large quantities of data much easier to understand, and there is a way to make charts in Drupal with the help of the Charts module and Views.

In this tutorial, you will learn the basic usage of the module in combination with the Google Charts library. Let’s start!

Step #1. Install the Charts Module and the Library

  • Download and install the Charts module.
  • Click Extend.
  • On the Modules page, enable the Charts module and its Google Charts submodule.
  • Click Install:


Installation Using Composer (recommended)

If you use Composer to manage dependencies, edit "/composer.json" as follows.

  • Run "composer require --prefer-dist composer/installers" to ensure that you have the "composer/installers" package installed. This package facilitates the installation of packages into directories other than "/vendor" (e.g. "/libraries") using Composer.
  • Add the following to the "installer-paths" section of "composer.json": "libraries/{$name}": ["type:drupal-library"],
  • Add the following to the "repositories" section of "composer.json":

                {
                    "type": "package",
                    "package": {
                        "name": "google/charts",
                        "version": "45",
                        "type": "drupal-library",
                        "extra": {
                            "installer-name": "google_charts"
                        },
                        "dist": {
                            "url": "",
                            "type": "file"
                        },
                        "require": {
                            "composer/installers": "~1.0"
                        }
                    }
                }
  • Run:
    composer require --prefer-dist google/charts:45
  • You should find that new directories have been created under /libraries
  • Click Configuration > Content authoring > Charts default configuration. 
  • Select Google Charts as the default charting library.
  • Click Save defaults:


Step #2. Create a Content Type for your Drupal Charts

We need some kind of structured data to present in our charts. I’m going to compare the population of all the countries in South America. You can, of course, make your own example.

  • Go to Structure > Content types > Add content type.
  • Create a content type for your Drupal charts


  • Add the required fields to match your data:


  • At the end, you should have something like this:


  • Now that you have your content type in place, let's proceed to create the nodes. In this example, each node will be an individual country.


Step #3. Create the View for your Drupal charts

  • Click Structure > Views > Add view. 
  • Give your view a proper name. 
  • Choose the content type you want to present to your readers.
  • Choose to create a block with a display format Unformatted list of fields. You won’t be able to proceed in this step if you choose Chart due to a small bug in the logic of the module.
  • I’ve chosen 12 items per block because there are 12 countries I want to show in my chart.
  • Click Save and edit:


  • In the FIELDS section of Views UI click Add.
  • Look for the relevant field for your chart and click Add and configure fields.
  • Leave the defaults and click Apply:



  • In the FORMAT section click Unformatted list.
  • Choose Chart.
  • Click Apply:


  • Select the Charting library in the drop-down. 
  • Select the title as the label field, if it hasn’t been selected already.
  • Check your relevant data field as provided data.
  • Scroll down and change the Legend position to None.
  • Click Apply. 
  • Feel free to play with all the configuration options available here to match the chart you want or need.


  • Save the View.

Step #4. Place Your Block

  • Click Structure > Block layout.
  • Search for the region you want to place the block in.
  • Click Place block.
  • Search your block and click Place block once again.
  • Click Save blocks at the bottom of the screen and take a look at your site.


There you have it - your Drupal chart is live. Of course, if you change the data in one of your nodes, the chart will adjust itself accordingly. If you want to change the chart display, just change it in the Chart settings of your view. 

You can also give the other charting libraries (C3, Highcharts) a try and see what fits your needs best.

As always, thank you for reading! If you want to learn more Drupal, join OSTraining now. You'll get access to a vast library of Drupal training videos, plus the best-selling "Drupal 8 Explained" book!

About the author

Jorge has lived in Ecuador and Germany. Now he is back in his homeland, Colombia. He spends his time translating from English and German into Spanish. He enjoys playing with Drupal and other open source content management systems and technologies.
Nov 07 2018

Every big Drupal release opens fantastic opportunities for websites. Three years ago, the eighth Drupal version came to this world — and the world fell in love with its top-notch improvements. Drupal 8 has been getting more and more awesome on its way from Drupal 8.1 to Drupal 8.5, and the latest Drupal 8.6 is cooler still. Drupal 8 is at its peak, but the cycles of development never stop. That’s why the Drupal community has already announced the expected release of Drupal 9 and the end-of-life for Drupal 8 and 7. Let’s see what this means for Drupal 8 and 7 website owners and what action is needed from them. And, of course, our Drupal team is ready to help them take this action.

The planned release of Drupal 9 and end-of-life for Drupal 8 and 7

A little while after D8 was released, the most impatient and curious ones began to ask questions about the future Drupal 9. When will it be released? What will it offer? Will websites need another upgrade?

Despite the fact that the development branch for Drupal 9 was started years ago, no one could tell its release year for sure. There were even suggestions, prompted by Drupal 8’s totally new approach to updates and its regular innovation mode, that Drupal 9 would never come at all.

However, the situation changed in September 2018 at the worldwide meetup for drupalers — Drupal Europe in Darmstadt. The “phantom” of Drupal 9 took definite shape: D9 will come for sure, and the year is defined! Drupal founder Dries Buytaert announced the Drupal community’s plans and illustrated them with images:

  • Drupal 9 release: 2020
  • Drupal 8 end-of-life: 2021
  • Drupal 7 end-of-life: 2021

What the future release of Drupal 9 means for Drupal 8 and 7

End-of-life for Drupal versions: hey, what does it mean?

End-of-life, or EOL, is the moment when official support for a particular Drupal version ends. The Drupal team stops maintaining it and creates no more updates or patches for it, including security ones. This means more vulnerability to hacker attacks and, of course, no new features.

For example, Drupal 6 reached its EOL on February 24, 2016. The service of upgrading from Drupal 6 to Drupal 7 or 8 became very popular with our Drupal team, because many customers applied for it. By the way, if you are still on Drupal 6, it’s high time to upgrade — better late than never!

Usually the two latest Drupal versions are supported: the newly released one and the previous one. However, in the case of D9, the EOL for D7 comes a little later and for D8 a little earlier than usual — in 2021 for both.

Although end-of-life sounds a little scary, there is no need to worry. See the next chapters for more information.

What Drupal 9 release means for Drupal 8 websites


Drupal 8 is at the center of the community’s attention and is getting valuable technological innovations all the time. Despite the EOL in 2021, the future looks particularly bright for D8 website owners. And here’s why.

Compared to Drupal 7, Drupal 8 is a technological breakthrough. That’s why upgrades from D7 to D8 are often lengthy (depending on the site’s complexity). But once you have moved to D8 from D7, that was your LAST cumbersome upgrade. No more of that will be needed!

Drupal 8 websites will move to D9 quickly and smoothly. Lightning-fast upgrades will be provided for those that are using the latest Drupal 8 minor version and no deprecated APIs. This golden rule of keeping up-to-date and avoiding deprecated APIs helps even contributed and custom Drupal 8 modules be instantly compatible with Drupal 9! Our Drupal web studio is always ready to take care of this for you.

So an upgrade from D8 to D9 will be something you will barely notice. The secret is that, following the principle of continuous innovation, Drupal 8 releases a backwards-compatible minor version every half a year. Drupal 9 promises to be almost identical to the last minor version of Drupal 8, just with the deprecated code removed, as stated in Dries Buytaert’s article “Making Drupal upgrades easy forever”.

What Drupal 9 release means for Drupal 7 websites


We wrote that the future looks bright for Drupal 8 website owners, but it does for Drupal 7 website owners as well! They just need to take more decisive action. So what should they do, considering the drop of support in 2021?

  • They might hope for the commercial support program for Drupal 7 that the community is considering, but relying on it looks like clinging to the past.
  • They might also wait for Drupal 9 and jump directly to it. But they will need a big upgrade someday anyway — to Drupal 8 or 9, which are very close relatives. Meanwhile, time passes, and all that time they could be enjoying Drupal 8 instead of putting their success on the shelf.
  • The best option is to move to Drupal 8 in the near future. Just one upgrade will be their ticket to hassle-free upgrades in the future (to Drupal 9, 10, and beyond). And, of course, they will keep pace with the times and enjoy all of Drupal 8’s innovations. Our migration experts are ready to smoothly move you to Drupal 8.

So there’s no need for a fortune-teller to predict your future in relation to the release of Drupal 9 ;) The future looks bright for you in any case! The only condition is that you have good drupalers at hand.

Drupal updates and upgrades, as well as Drupal support are among our areas of expertise.

Contact our Drupal team, and let’s choose the best action for your website!

Nov 07 2018

Installing Lando on a Windows machine is easy. Just follow these 30 (or more) simple steps:

  1. Review the directions.
  2. Figure out which version of Windows you are running.
  3. Realize that you need to upgrade to Windows 10 Professional, because apparently you have to pay extra to actually do work on a Windows machine.
  4. Open the Windows Store.
  5. Spend half an hour trying to figure out why the Windows store is only showing blank pages.
  6. Take a break, go vote, spend some time with your kids, and seriously consider buying a Mac so that you don't have to deal with this shit.
  7. Reboot your computer and finally get Windows store to respond.
  8. Pay $100 dollars, while updating your account information because everything is three years out-of-date. Do not pass Go.
  9. Reboot your computer twice.
  10. Go to the Lando releases page.
  11. Spend some time looking for the last stable release (note: there is no spoon stable release).
  12. Download and run the latest .exe.
  13. The installer will complain that you don't have Hyper V, which you just paid for.
  14. Find the obscure command you need to enable Hyper V.
  15. Find Powershell in the Start menu.
  16. Discover that you can paste into PowerShell just by right-clicking your mouse. This seems convenient, but it's a trap!
  17. Run the command. It doesn't work.
  18. Learn how to run PowerShell as an administrator.
  19. Run the command, again.
  20. Reboot your computer, again.
  21. Run the .exe, again.
  22. The installer wants to install Docker. Let it.
  23. The Docker installer wants to log you out. Let it.
  24. Log back in.
  25. Open Babun and try the lando command. It isn't found.
  26. Open Powershell and try the lando command. It isn't found.
  27. Open the Command Prompt and try the lando command. It isn't found.
  28. Re-run the Lando installer, for the third time. It turns out that it never finished because Docker logged you out.
  29. Open Powershell and try the lando command.
  30. It works! Congratulations, you are done!*

* Just kidding...

  1. Open PowerShell. Go to the directory where you have your Drupal site.
  2. Run lando init.
  3. Choose the drupal 7 recipe.
  4. Why is it asking for a Pantheon machine token? This isn't a Pantheon site! Hit Ctrl-C.
  5. Log into Pantheon, create a machine token for your Windows machine. note: Terminus and Lando are notorious for asking for this machine token over and over, so make sure to paste this machine token into a file somewhere, which kind of defeats the entire point of having a machine token.
  6. Run lando init, again.
  7. Right clicking to paste doesn't work for the hidden machine token. So, learn a different way to paste the machine token into PowerShell.
  8. Congratulations, you are done!**

** Just kidding...

  1. Run lando start. Your terminal will proceed to spew error messages for several minutes.
  2. Spend an hour searching through the Lando issue queue trying to find the magic sequence that will fix these errors.
  3. Go and start comparing the new MacBook Air to the new Mac Mini. Figure out if you can afford either one so that you don't have to deal with this shit.
  4. Your kids are picking up on your frustration, and everyone is melting down because it is bedtime (and you are anxious about the election).
  5. Give up for the night, and obsessively refresh the election results at until the results are clear at 11:00 PM.
  6. Get up the next morning and write a satirical article about installing Lando on your Windows machine.

I will let you know if I ever actually get it working.

Nov 07 2018

Last September, Dropsolid sponsored and attended Drupal Europe. Compared to the North American conferences, getting Europeans to travel to another location is challenging, certainly when so many high-quality conferences compete with it, such as Drupalcamps, Drupal Dev Days, Frontend United, Drupalaton, Drupaljam and Drupal Business Days. I’m happy for the team that they succeeded in making Drupal Europe profitable; this is a huge accomplishment and it also sends a strong signal to the market!

Knowing these tendencies, it was amazing to see that there is a huge market fit for the kind of conference that Drupal Europe delivered. It is also a great sign for Drupal as a base technology and for the growth of Drupal. Hence, for Dropsolid it was a must to attend, help and sponsor such an event: not only because it gives us visibility in the developer community, but also because it connects us with the latest technologies surrounding the Drupal ecosystem.

The shift to decoupled projects is a noticeable one for Dropsolid, and even the Dropsolid platform itself is a decoupled Drupal project using Angular as our frontend. Next to that, we had a demo at our booth that showed a web VR environment in our Oculus Rift where the content came from a Drupal 8 application.

People trying our VR-demo at Drupal Europe

On top of that, Drupal Europe was so important to us that our CTO helped the content team as a volunteer, selecting the sessions related to DevOps & Infrastructure. Nick has been closely involved in this area and we’re glad to donate his time to help curate and select quality sessions for Drupal Europe.

None of this would have been possible without the support of our own government, which supports companies like Dropsolid in being present at these international conferences. Even though Drupal Europe was a new concept, it was seen and accepted as a niche conference that allows companies like Dropsolid to build brand awareness and knowledge outside of Belgium. We thank them for this support!


From Nick: “One of the most interesting sessions for me was the keynote about the “Future of the open web and open source”. The panel included, next to Dries, Barb Palser from Google, DB Hurley from Mautic, and Heather Burns. From what we gathered, Matt Mullenweg was also supposed to be there, but he wasn’t present. Too bad, as I was hoping to see such a collaboration and discussion. The discussion that gripped me the most was about the “creepifying” of our personal data and how this could be reversed. How can one control access to one’s own data, and how can one revoke such access? Just imagine how many companies have your name and email, and how technology could disrupt this: a world where an individual controls what is theirs. I recommend watching the keynote in any case!”

[embedded content]

We’ve also seen what things could look like with the announced GitLab integration. I can’t recall being more excited when it comes to my personal maintenance pain, with in-line editing of code being one of the most amazing features. More explanation can be found at

[embedded content]

From Nick: 
“Another session that really caught our eye, and is worthy of a completely separate blog post, is Markus Kalkbrenner’s session about advanced Solr. Perhaps to give you some context: I’ve been working with Solr for more than 9 years. I can even prove it with a commit! This session was mind-blowing. Markus used very advanced concepts that I hardly knew existed, let alone had found an application for.

One of the use cases is a per-user sort based on the favorites of a user. The example Markus used was a recipe site where you can rate recipes. Obviously you could sort on the average rating, but what if you want to sort the recipes by “your” rating? This might seem trivial, but it is a very hard problem to solve, as you have to normalize a dataset in Solr, which is by default a denormalized dataset.

Now, what if you want to use this data to get personalized recommendations? This means we have to learn about the user and use this data on the fly to get recommendations based on the votes the user has applied to recipes. Watch how this works in Markus’s recording and be prepared to have your mind blown.”

[embedded content]

There were a lot of other interesting sessions, and most of them were recorded; their details can be found and viewed at If you are interested in the future of the web and how Drupal plays an important role in it, we suggest you take a look. If you are more into meeting people in real life and being an active listener, there is Drupalcamp Ghent ( on the 23rd and 24th of November. Dropsolid is also a proud sponsor of this event.

And an additional tip: Markus’s session will also be presented there ;-)

Nov 07 2018

Last week I attended BADCamp, and as usual I can confirm firsthand that BADCamp keeps being a blast. Here are some of the reasons why.

The Summits

I had a chance to attend the DevOps and Front-end Summits, half a day each. During these summits, participants shared their experiences with the tools and techniques they use regularly while working with clients. At the DevOps Summit, it was great to hear that a lot of developers are interested in Kubernetes. It was also interesting to hear conversations about CI/CD workflows and the different tools used for building disposable instances per PR and deployments per branch, which is something we are already doing in weKnow’s client projects.

The sessions

It is safe to say that GatsbyJS stole the show. The event included three back-to-back sessions about GatsbyJS, and people were eager to learn more about the buzzword:

All the great minds behind the Gatsby presentations at @BADCamp

— Gatsby (@gatsbyjs) October 26, 2018

Other recurring and interesting topics mentioned in different sessions during the event:

  • Design systems and Pattern Lab.
  • The new Drupal Layout Builder.

My session

I had an opportunity to speak at the event. The title of my session was “How To Keep Drupal Relevant In The Git-based and API-driven CMS Era”. Yes, I was presenting one of the sessions related to GatsbyJS. Check out the slides here and feel free to watch the recording as well:

[embedded content]

Feel free to ask any questions using the comment section of this blog post, or mention me directly on Twitter: @jmolivas.

The party

As usual, the BADCamp vibes were incredible, the party was as great as the event itself, and the weKnow team had an amazing time.


Thank you for sponsoring the event.

The after-party

Well, the first rule of the after-party is not to talk about the after-party. So next year jump into the bus at midnight and join the after-party (only if you want to have fun).

See you at BADCamp 2019 or maybe sooner at DrupalCamp Atlanta 2018

Nov 06 2018

I’ve been running a lot lately, and so have been listening to lots of podcasts! Which is how I stumbled upon this great episode of the Lullabot podcast recently — embarrassingly one from over a year ago: “Talking Performance with Pantheon’s David Strauss and Josh Koenig”, with David and Josh from Pantheon and Nate Lampton from Lullabot.

(Also, I’ve been meaning to blog more, including simple responses to other blog posts!)

Interesting remarks about BigPipe

Around 49:00, they start talking about BigPipe. David made these observations around 50:22:

I have some mixed views on exactly whether that’s the perfect approach going forward, in the sense that it relies on PHP pumping cached data through its own system which basically requires handling a whole bunch of strings to send them out, as well as that it seems to be optimized around this sort of HTTP 1.1 behavior. Which, to compare against HTTP 2, there’s not really any additional cost to additional connections in HTTP 2. So I think it still remains to be seen how much benefit it provides in the real world with the ongoing evolution of some of these technologies.

David is right; BigPipe is written for a HTTP 1.1 world, because BigPipe is intended to benefit as many end users as possible.

And around 52:00, Josh then made these observations:

It’s really great that BigPipe is in Drupal core because it’s the kind of thing that if you’re building your application from scratch that you might have to do a six month refactor to even make possible. And the cache layer that supports it, can support lots other interesting things that we’ll be able to develop in the future on top of Drupal 8. […] I would also say that I think the number of cases where BigPipe or ESI are actually called for is very very small. I always whenever we talk about these really hot awesome bleeding-edge cache technologies, I kinda want to go back to what Nate said: start with your Page Cache, figure out when and how to use that, and figure out how to do all the fundamentals of performance before even entertaining doing any of these cutting-edge technologies, because they’re much trickier to implement, much more complex and people sometimes go after those things first and get in over their head, and miss out on a lot of the really big wins that are easier to get and will honestly matter a lot more to end users. “Stop thinking about ESI, turn on your block cache.”

Josh is right too, BigPipe is not a silver bullet for all performance problems; definitely ensure your images and JS are optimized first. But equating BigPipe with ESI is a bit much; ESI is indeed extremely tricky to set up. And … Drupal 8 has always cached blocks by default. :)

Finally, around 53:30 David cites another reason to stress why more sites are not handling authenticated traffic:

[…] things like commenting often move to tools like Disqus and whether you want to use Facebook or the Google+ ones or any one of those kind of options; none of those require dynamic interaction with Drupal.

Also true, but we’re now seeing the inverse movement, with the increased skepticism of trusting social media giants, not to mention the privacy (GDPR) implications. Which means sites that have great performance for dynamic/personalized/uncacheable responses are becoming more important again.

BigPipe’s goal

David and Josh were being constructively critical; I would expect nothing less! :)

But in their description and subsequent questioning of BigPipe, I think they forget its two crucial strengths:

BigPipe works on any server, and is therefore available to everybody, and it works for many things out of the box, including f.e. every uncacheable Drupal block! In other words: no infrastructure (changes) required!

Bringing this optimization that sits at the intersection of front-end & back-end performance to the masses rather than having it only be available for web giants like Facebook and LinkedIn is a big step forward in making the entire web fast.

Using BigPipe does not require writing a single line of custom code; the module effectively progressively enhances Drupal’s HTML rendering — and turned on by default since Drupal 8.5!


Like Josh and David say: don’t forget about performance fundamentals! BigPipe is no silver bullet. If you serve 100% anon traffic, BigPipe won’t make a difference. But for sites with auth traffic, personalized and uncacheable blocks on your Drupal site are streamed automatically by BigPipe, no code changes necessary:

[embedded content]

(That’s with 2 slow blocks that take 3 s to render. Only one is cacheable. Hence the page load takes ~6 s with cold caches, ~3 s with warm caches.)

Nov 06 2018

By Jesus Manuel Olivas, Head of Products | November 06, 2018

During this year, at several events (SANDCamp, DrupalCamp LA, DrupalCon Nashville, and DrupalCamp Colorado), I had a chance to talk about and show how we at weKnow approach the development of API-driven applications. For all of you that use Drupal, this is something like decoupled or headless Drupal, but without the Drupal part.

This article outlines weKnow’s approach and provides some insight into how we develop some web applications.

Yes, this may sound strange, but whenever we need to build an application that is not content-centric, we use Symfony instead of Drupal. What are those cases? Whenever we do not require the out-of-the-box functionality that Drupal offers, such as content management, content revision workflow, field widgets/formatters, Views, and managing data structures from the UI (content types).

Why we still use PHP.

We definitely know the language pretty well, we have extensive experience working with PHP, Drupal and Symfony, and we decided to take advantage of that knowledge and use it to build API-driven applications.

Why the API Platform.

API Platform is a REST and GraphQL framework that helps you build modern API-driven projects. It provides an API component that includes Symfony 4, Flex, and the Doctrine ORM. It also provides client-side components, an admin interface based on React, and a Docker configuration ready to start up your project with one single command, allowing you to take advantage of thousands of existing Symfony bundles and React components.

Wrapping up

Our developers’ expertise across different technologies has given us the advantage of providing a great time to market while developing client projects. We also like sharing: if you want to see this session live (probably for the last time), join me at DrupalCamp Atlanta.

Video from DrupalCon Nashville on the Drupal Association YouTube channel:

[embedded content]

You can find the latest version of the slides from DrupalCampLA here 

Nov 06 2018

Pattern Lab (PL), a widely known pattern library tool, is an open-source project for generating a design system for your site. In the last two years it has gotten a lot of attention in the Drupal community, and it’s a great way to implement a design system into your front-end workflow.

The following post describes how our client (the City and County of San Francisco) began to implement a pattern library that will eventually be expanded upon and re-used for other agency websites across the ecosystem.


Using the U.S. Web Design System (USWDS), until their own pattern library was ready for prime time, was a client requirement.

USWDS uses a pattern library system called Fractal. I think Fractal is a great idea, but it lacked support for Twig, the template engine used by Drupal. Out of the box, Fractal uses Handlebars (a JavaScript templating engine), and though the template engine in Fractal can be customized, I wasn’t able to make it work with Twig macros and iterations, even with the use of twig.js.

Creating the Pattern Lab

Ultimately, I decided to start from scratch. In addition to the USWDS requirement, the client also needed to be able to reuse this pattern library on other projects. I used the Pattern Lab Standard Edition for Twig; among other things, this means that you need PHP on the command line in order to "compile" or generate the pattern library.

I added a gulpfile that was in charge of watching for changes in the PL source folders. Once a Twig, Sass or JavaScript file was changed, the pattern library was re-generated.

Generating the Pattern Library

I also needed Gulp to watch the file changes.

The following is a SIMPLE example of the Gulp tasks that generate the PL by watching the folders. This code snippet shows the config object containing the folder directories:

{
  "css": {
    "file": "src/sass/_all.scss",
    "src": [ ... ],
    "pattern_lab_destination": "pattern-lab/public/css",
    "dist_folder": "dist/css"
  },
  "js": {
    "src": [ ... ]
  }
}

And the following one is a common watcher in gulp:

gulp.task('watch', function () {
  gulp.watch('...', ['legacy:js']);
  gulp.watch('...', ['pl:css']);
  gulp.watch('...', ['generate:pl']);
  gulp.watch('...', ['generate:pl']);
});

The following task is in charge of generating the pattern library with PHP:

gulp.task('pl:php', shell.task('php pattern-lab/core/console --generate'));

Please NOTE that this is an oversimplified example.
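Tying these pieces together, a default task might look like the following hypothetical sketch (the task names are taken from the snippets above; the wiring itself is my assumption):

```javascript
// Hypothetical wiring of the tasks above: regenerate the pattern
// library once, then keep watching for changes.
gulp.task('generate:pl', ['pl:css', 'pl:js', 'pl:php']);
gulp.task('default', ['generate:pl', 'watch']);
```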


Having generated the Pattern Library, I realized that in order to use it in my Drupal theme, I needed to generate a single CSS file and a single JavaScript (JS) file.
The main Sass file imports all Sass code from USWDS by using the `@import` statement.
I required the USWDS source with npm and imported the source files directly from the node_modules folder:


// Styles basic HTML elements
@import '../../../node_modules/uswds/src/stylesheets/elements/buttons';
@import '../../../node_modules/uswds/src/stylesheets/elements/embed';

Then I imported the scss files that were inside my pattern elements:

// Styles inside patterns.
@import "../_patterns/00-protons/*.scss";
@import "../_patterns/01-atoms/**/*.scss";
@import "../_patterns/02-molecules/**/*.scss";
@import "../_patterns/03-organisms/**/*.scss";
@import "../_patterns/04-templates/**/*.scss";
@import "../_patterns/05-pages/**/*.scss";

All the styles were dumped into a single file called components.css.
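Note that glob patterns like `*.scss` are not supported by plain Sass, so a Gulp plugin has to expand them before compilation. The task below is a hypothetical sketch of such a build step, assuming the gulp-sass, gulp-sass-glob, and gulp-rename plugins and the paths from the config example above:

```javascript
// Hypothetical Sass build task. Plain Sass cannot expand glob imports,
// so gulp-sass-glob is assumed to rewrite them before compilation.
const gulp = require('gulp');
const sass = require('gulp-sass');
const sassGlob = require('gulp-sass-glob');
const rename = require('gulp-rename');

gulp.task('pl:css', () => {
  return gulp.src('src/sass/_all.scss')
    .pipe(sassGlob())                         // expand *.scss glob imports
    .pipe(sass().on('error', sass.logError))  // compile to CSS
    .pipe(rename('components.css'))           // single output file
    .pipe(gulp.dest('pattern-lab/public/css'))
    .pipe(gulp.dest('dist/css'));
});
```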

With this single CSS file created, I was able to use USWDS CSS classes along with the new ones.
I had to add a /dist folder where the transpiled Sass would live and be committed for later use in the Drupal theme.


I did something similar for JavaScript. The biggest challenge was to compile the USWDS JavaScript files exactly as they were. I resorted to copying all the USWDS JavaScript source into the src folder of the pattern library, set a watcher specifically for the USWDS JavaScript, and added another watcher for the new Pattern Lab JavaScript:
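As a rough sketch of that copy step (the paths here are my assumptions, not the project's actual configuration), the USWDS source could be mirrored into the pattern library like this:

```javascript
// Hypothetical task copying the USWDS JavaScript source into the
// pattern library's src folder, plus a watcher that re-runs it.
gulp.task('uswds:js', () => {
  return gulp.src('node_modules/uswds/src/js/**/*.js')
    .pipe(gulp.dest('src/js/uswds'));
});

gulp.task('watch:uswds', () => {
  gulp.watch('node_modules/uswds/src/js/**/*.js', ['uswds:js']);
});
```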


In the following example I compile all the JS that lives inside the components into a single file.

The resulting file is then copied to ./pattern-lab/public/js, which is the folder Pattern Lab reads when working on the Pattern Lab alone.
The other copy of the file goes to the distribution folder ./dist/pl/js, which is the one I use in my Drupal theme.

// Component JS.
// -------------------------------------------------------------------- //
// The following task concatenates all the JavaScript files inside the
// _patterns folder, if new patterns need to be added the config.json array
// needs to be edited to watch for more folders.

// Assumes the gulp-babel and gulp-concat plugins; the destinations are
// the two folders described above.
gulp.task('pl:js', () => {
  return gulp.src(config.pattern_lab.javascript.src)
    .pipe(babel({
      presets: ['es2015']
    }))
    .pipe(concat('components.js'))
    .pipe(gulp.dest('./pattern-lab/public/js'))
    .pipe(gulp.dest('./dist/pl/js'));
});

The resulting files were included in the HEAD of my Pattern Lab by editing pattern-lab/source/_meta/_00-head.twig, where I included the following lines:

<link rel="stylesheet" href="" media="all">
<script src=""></script>
<script src=""></script>

Please refer to the repo if you need the details of the integration: GitHub - SFDigitalServices/sfgov-pattern-lab: SFGOV Pattern Lab

Integrating the Pattern Lab with Drupal

Composer and libraries:

I used the following plugin:

composer require oomphinc/composer-installers-extender

This plugin allowed me to put the pattern library in a folder other than vendor.
Then I added some configuration to the composer.json.

Under extra I specified where composer should install the repository of type github:

"extra": {
    "installer-paths": {
        "web/libraries/{$name}": ["type:github"]
    }
}

Then under repositories I set the type:github

"repositories": {
    "github": {
        "type": "package",
        "package": {
            "name": "sf-digital-services/sfgov-pattern-lab",
            "version": "master",
            "type": "drupal-library",
            "source": {
                "url": "",
                "type": "git",
                "reference": "master"
            }
        }
    }
}

Then I required the package under require. As you can see, the name matches the name declared in the github repository above:

"require": {
    "sf-digital-services/sfgov-pattern-lab": "dev-master"
}

A composer update should clone the github repo and place the Pattern Lab inside web/libraries, relative to the Drupal root.


Components Libraries

The Component Libraries module was especially important because it allowed me to easily map the Pattern Lab components into my Drupal theme:

The Drupal Theme:

I created a standard Drupal theme:

The sfgovpl.info.yml file:

In the following part of the file I connected the Pattern Lab Twig files to Drupal:

      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/00-protons
      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/01-atoms
      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/02-molecules
      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/03-organisms
      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/04-templates
      - ../../../libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/05-pages

libraries:
  - sfgovpl/sfgov-pattern-lab

The sfgovpl.libraries.yml file:

In the last line of the previous code example, you can see that I required the sfgov-pattern-lab library, which includes the files compiled by Gulp into my Pattern Lab.

sfgov-pattern-lab:
  css:
    theme:
      '/libraries/sfgov-pattern-lab/dist/css/components.css': {}
  js:
    '/libraries/sfgov-pattern-lab/dist/pl/js/components.js': {}

Using the Twig templates in our theme:

The following is an example of how to use a molecule from the pattern library into the Drupal theme:

You can include the @molecules/08-search-results/03-topic-search-result.twig template like this:

Pattern Lab twig:


<div class="topic-search-result">
  <div class="topic-search-result--container">
    <div class="content-type"><i class="sfgov-icon-filefilled"></i><span>{{ content_type }}</span></div>
    <a class="title-url" href="{{ url }}"><h4>{{ title }}</h4></a>
    <p class="body">{{ body|striptags('<a>')|raw }}</p>
  </div>
</div>

Drupal template:

The following example calls the Pattern Lab molecule originally located at web/libraries/sfgov-pattern-lab/pattern-lab/source/_patterns/02-molecules/08-search-results/03-topic-search-result.twig, but thanks to the Components module we can refer to it simply as @molecules/08-search-results/03-topic-search-result.twig:

{# Set variables to use in the component. #}
{% set url = path('entity.node.canonical', {'node': elements['#node'].id()  }) %}
{% set description = node.get('field_description').getValue()[0]['value'] %}
{% set type = node.type.entity.label %} {# content type #}
{# Including the molecule from our Pattern Lab. #}

{% include "@molecules/08-search-results/03-topic-search-result.twig" with {
  "content_type": type,
  "url": url,
  "title": elements['#node'].get('title').getString(),
  "body": description
} %}


The SFGOV Pattern Lab's initial development was done in large part at Chapter Three, and this post is intended to show the core concepts of decoupling your Pattern Lab from your Drupal theme.

You can find the full code implementation for the pattern library and Drupal at the following URLs:

SFDigitalServices/sfgov and the Pattern Lab here: SFDigitalServices/sfgov-pattern-lab

You should try the value module; it is great for extracting values in Drupal Twig templates and connecting them with your Pattern Lab Twig templates.

Also give the UI Patterns module a try; it looks promising and is a great solution for decoupled pattern libraries.

Nov 06 2018
Nov 06

This is part 2 in this series that explores how to use paragraph bundles to store configuration for dynamic content. The example I built in part 1 was a "read next" section, which could then be added as a component within the flow of the page. The strategy makes sense for component-based sites and landing pages, but probably less so for blogs or content heavy sites, since what we really want is for each article to include the read next section at the end of the page. For that, a view that displays as a block would perfectly suffice. In practice, however, it can be really useful to have a single custom block type, which I often call a "component block", that has an entity reference revisions field that we can leverage to create reusable components.

This strategy offers a simple and unified interface for creating reusable components and adding them to sections of the page. Combined with Pattern Lab and the block visibility groups module, we get a pretty powerful tool for page building and theming.

The image below captures the configuration screen for the "Up next" block you can find at the bottom of this page. As you see, it sets the heading, the primary tag, and the number of items to show. Astute readers might notice, however, that there is a small problem with this implementation. It makes sense if all the articles are about Drupal, but on sites where there are lots of topics, having a reusable component with a hard-coded taxonomy reference makes less sense. Rather, we'd like the related content component to show content that is actually related to the content of the article being read.

For the purpose of this article, let's define the following two requirements: first, if the tagged content component has been added as a paragraph bundle to the page itself, then we will respect the tag supplied in its configuration. If, however, the component is being rendered in the up next block, then we will use the first term the article has been tagged with.

To do that, we need three things: 1) we need our custom block to exist and to have a delta that we can use, 2) we need a preprocess hook to assign the theme variables, and 3) we need a twig template to render the component. If you're following along in your own project, then go ahead and create the component block now. I'll return momentarily to a discussion about custom blocks and the config system.

Once the up next block exists, we can create the following preprocess function:

function component_helper_preprocess_block__upnextblock(&$variables) {
  if ($current_node = \Drupal::request()->attributes->get('node')) {
    $variables['primary_tag'] = $current_node->field_tags->target_id;
    $variables['nid'] = $current_node->id();
    $paragraph = $variables['content']['field_component_reference'][0]['#paragraph'];
    $variables['limit'] = $paragraph->field_number_of_items->getValue()[0]['value'];
    $variables['heading'] = $paragraph->field_heading->getValue()[0]['value'];
  }
}

If you remember from the first article, our tagged content paragraph template passed those values along to Pattern Lab for rendering. That strategy won't work this time around, though, because theme variables assigned to a block entity, for example, are not passed down to the content that is being rendered within the block.

You might wonder if it's worth dealing with this complexity, given that we could simply render the view as a block, modify the contextual filter, place it and be done with it. What I like about this approach is the flexibility it gives us to render paragraph components in predictable ways. In many sites, we have 5, 10 or more component types. Not all (or even most) of them are likely to be reused in blocks, but it's a nice feature to have if your content strategy requires it. Ultimately, the only reason we're doing this small backflip is because we want to use the article's primary tag as the argument, rather than what was added to the component itself. In other component blocks (an image we want in the sidebar, for example) we could simply allow the default template to render its content.

In the end, our approach is pretty simple: Our up next block template includes the paragraph template, rather than the standard block {{ content }} rendering. This approach makes the template variables we assigned in the preprocess function available:

{% include "@afro_theme/paragraphs/paragraph--tagged-content.html.twig" %}
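For context, the surrounding block template might look something like this sketch (the template file name and wrapper markup are my assumptions; the heading variable comes from the preprocess function above):

```twig
{# block--upnextblock.html.twig - hypothetical template name. #}
<section class="up-next">
  <h2>{{ heading }}</h2>
  {# Include the paragraph template directly instead of rendering
     {{ content }}, so the preprocess variables are available to it. #}
  {% include "@afro_theme/paragraphs/paragraph--tagged-content.html.twig" %}
</section>
```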

A different approach to consider would be adding a checkbox to the tagged content configuration, such as "Use page context instead of a specified tag". That would avoid us having an extra hook and template. Other useful configuration fields we've used for dynamic component configuration include whether the query should require all tags or any tag when multiple are assigned, and the ability to specify whether the related content should exclude duplicates (useful when you have several dynamic components on a page but don't want them to include the same content).

As we wrap up, a final note I'll add is about custom blocks and the config system. The approach I've been using for content entities that also become config (which is the case here) is to first create the custom block in my local development environment, then export the config, remove the UUID from the config, and copy the plugin uuid. You can then create an update hook that creates the content for the block before the config gets imported:

/**
 * Adds the "up next" block for posts.
 */
function component_helper_update_8001() {
  $blockEntityManager = \Drupal::service('entity.manager')
    ->getStorage('block_content');

  $block = $blockEntityManager->create(array(
    'type' => 'component_block',
    'uuid' => 'b0dd7f75-a7aa-420f-bc86-eb5778dc3a54',
    'label_display' => 0,
  ));

  $block->info = "Up next block";

  $paragraph = Drupal\paragraphs\Entity\Paragraph::create([
    'type' => 'tagged_content',
    'field_heading' => [
      'value' => 'Up next',
    ],
    'field_number_of_items' => [
      'value' => '3',
    ],
    'field_referenced_tags' => [
      'target_id' => 1,
    ],
  ]);
  $paragraph->save();

  // Attach the paragraph to the block's reference field and save.
  // (Field name taken from the preprocess function above.)
  $block->field_component_reference = $paragraph;
  $block->save();
}


Once we deploy and run the update hook, we're able to import the site config and our custom block should be rendering on the page. Please let me know if you have any questions or feedback in the comments below. Happy Drupaling.

Nov 06 2018
Nov 06

Drupal Modules: The One Percent — Admin Denied (video tutorial)

[embedded content]

Episode 51

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Admin Denied, a module which prevents you from accessing the super user's account.

Nov 06 2018
Nov 06

VisualN provides an interface to check "how it works" for any available drawer on the site. To see the list of drawers, go to the VisualN -> Available Drawers Preview menu item.

Available drawers list

Though VisualN allows you to use any resource type as a data source (e.g. csv or xls files, or views), for demo purposes it is enough to have some dummy data. Such data can be obtained from data generators. Data generators are simple plugins that return an array of data (which is just another resource type) of a given structure that can be used by certain drawers (e.g. Leaflet uses lat, lon and title data fields).

Available drawers list

Data generators may also provide info about the drawer or drawers that can use the generated data. Those drawers and data generators are considered compatible. Drawers highlighted in green have compatible data generators.

There are a couple of use cases when you may want to use the Available drawers preview UI:

  • check drawer in action, examine configuration form settings
  • set configuration values to create a visualization style
  • use the preview UI to help drawer development and to test changes
  • check data format used by drawers (e.g. using table drawer)

Linechart Basic drawer preview

Choose «preview» for any of the green-highlighted drawers. The preview will open:

Linechart drawer preview

In the example presented here, the visualization is an interactive area with line charts: values at the mouse-pointed areas are shown, data series can be activated or ignored, etc. These are basic possibilities of the drawer logic, but the VisualN module doesn't limit the set or complexity of these possibilities.

The drawer supports a set of configuration parameters exposed in a UI (drawer config).

It also supports a UI for configuring data generator parameters, and different drawers can use different data generators. The VisualN module uses the term «resource» for an ordered set of data used to make a drawing; data generators are one kind of resource provider.

To analyze a drawer's possibilities, try different combinations of configuration parameters and press the «redraw» button.

Creating Visualization styles

When you are done configuring, use the Create style button. VisualN uses a Drawing Style to keep a drawer's set of parameters for a variety of drawings. Changing a style's configuration changes the parameters for all drawings using that style, wherever they are located in the project.

When the style is created, name it and set its parameters:

Creating drawing style

More about styles can be found in other articles.

Using Table Html Basic drawer to inspect data format

Sometimes you may not be sure about the data format used by a drawer. This can happen when the format is not obvious by itself and no description is provided by the developer. If you are a developer, it still shouldn't be a problem; otherwise there should be some way to guess the required format.

In most cases, when a drawer uses data in the form of a plain array (plain table) and has a compatible generator provided, the Table Html Basic drawer can be used to guess the format. It will output any data provided by the data generator.

By default, the Table Html Basic drawer uses a data generator that generates random strings.

HTML Table drawer preview

On the other hand, if it doesn't work for some drawer's data generator, the generator obviously returns data of a more complex structure, e.g. trees or nested lists.

Thus, the Drawer Preview functionality is useful for both content managers and developers.

Series contents

  1. Getting started
  2. Using available drawers preview
  3. Creating visualization styles
  4. Creating drawing (entity) types
  5. Creating interactive content in CKEditor with VisualN Embed
  6. Configuring VisualN Embed integration with CKEditor
  7. Using VisualN Excel module for Excel files support
  8. Sharing embedded drawings across sites
  9. etc.
Nov 06 2018
Nov 06

Agiledrop is highlighting active Drupal community members through a series of interviews. Learn who are the people behind Drupal projects.

This week we talked with David Valdez. Read about what impact Drupal made on him, what contribution is he the proudest of and what Drutopia is.

1. Please tell us a little about yourself. How do you participate in the Drupal community and what do you do professionally?

I’ve been doing web development for fourteen years and Drupal the last eight.

I currently work for Agaric which is a worker-owned cooperative. This allows us to make decisions about the cooperative democratically. Equally important is that we support one another, not just professionally but personally as well. 

Agaric is involved in several Drupal projects, including Drupal Training Days, Sprint Weekends, and other local events. You can learn more here.

2. When did you first come across Drupal? What convinced you to stay, software or the community, and why?

The first time I used Drupal, I faced the well known steep learning curve. In the beginning, I disliked how difficult the CMS seemed, but later, when I started to understand why things were done the way they are, I began to appreciate all the cool things you can do with it, how well thought out the subsystems were, and how dramatically Drupal improves from one version to the next.

And later, when I had questions about specific problems or bugs, I found many talented people working on the project and giving support. It was amazing and I felt motivated to also contribute back to the community. In this way, I learned a ton of new things, and at the same time, I was helping other people.

3. What impact did Drupal make on you? Is there a particular moment you remember?

Drupal gave a new direction to my career. At the time I was working on several different technologies and frameworks. Drupal motivated me to become a specialist, so I left my job and sought out an opportunity to work in a Drupal shop, where I could spend more time improving my Drupal skills.

With that in mind, I travelled to DrupalCon Austin in 2014 (it was my first time in the USA), and I was convinced that I wanted to work in a Drupal shop to be more involved in the project.

4. How do you explain what Drupal is to other, non-Drupal people?

Firstly, I usually try to explain what Free Software is about, how it allows projects like Drupal to become so good, and how it helps many people.

5. How have you seen Drupal evolve over the years? What do you think the future will bring?

Drupal has always been considered a Content Management Framework, and I believe Drupal 8 is following this path to become one of the most solid options for building any project.

6. What are some of the contributions to open source code or community that you are most proud of?

There are a few contributions to core which allowed me to take part in the whole process of fixing a bug in Drupal 8.

For instance, in Drupal 8.1 the comment permalinks were broken, so I helped write the patch, discussed changes, and wrote the tests to make sure this bug won't happen again.

I learned by reading the feedback from other, more experienced developers, and at the same time I came to understand how Drupal works (at least in the parts related to the bug).

The same happened with a bug in the migrations and the REST module.

Learning from those issues helped me fix other smaller core bugs and bugs in several contributed modules, from porting small modules like Image Resize Filter to contributing to well-known modules like Migrate Plus.

7. Is there an initiative or a project in Drupal space that you would like to promote or highlight?

Yes, we at Agaric have been working on Drutopia, which is a series of Drupal distributions for nonprofits and other low-budget groups.

8. Is there anything else that excites you beyond Drupal? Either a new technology or a personal endeavor.

I live in Mexico and I'm a member of a PHP group, where we talk about good practices, help each other improve our skills, and stay informed about other cool technologies.

Nov 06 2018
Nov 06

The TWG coding standards committee is announcing two issues for final discussion. Feedback will be reviewed on 11/13/2018.

New issues for discussion:

Needs love

Interested in helping out?

You can get started quickly by helping us to update an issue summary or two or dive in and check out the full list of open proposals and see if there's anything you'd like to champion!


Nov 05 2018
Nov 05

By Manuel Santibanez, Front-end Developer | November 05, 2018


weKnow gave me the opportunity to attend my first BADCamp as part of the team that represented the company at this awesome event.

On the first day I attended the Drupal Frontend Summit, a roundtable format that I had not experienced before. It was very rewarding to discuss my experience as a developer who has worked with accessibility guidelines, sharing the tools and strategies that I have used to implement such an important standard.

Lots of great sessions shared valuable knowledge that allowed me to leave BADCamp as a better developer!

My top picks:

Without a doubt, the new kid on the block was Gatsby, a piece of technology that takes the development of sites to a new level.

In my opinion, and I may be a little biased here, one of the best talks of the camp was "How to keep Drupal relevant in the Git-based and API-driven CMS era", given by Jesus Manuel Olivas. This session opened a great discussion about Drupal's vision, touching on how it can integrate to become a fundamental piece in the scheme of modern technologies and strategies in web development, allowing Drupal to focus on what it does best, which is managing content.

As a final note, I would like to highlight that the UC Berkeley venue was great: an awesome lounge area with coffee to keep us energized all day, pinball machines as a great surprise, and a special mention for the waffles!

Thanks for everything! I hope to return next year, this time proposing a talk and sharing my own knowledge and experience.

Nov 05 2018
Nov 05

For the past two North American DrupalCons, my presentations have focused on introducing people to the Webform module for Drupal 8. First and foremost, it’s important that people understand the primary use case behind the Webform module, within Drupal's ecosystem of contributed modules, which is to…

Build a form which collects submission data

The other important message I include in all my presentations is…

The Webform module provides all the features expected from an enterprise proprietary form builder combined with the flexibility and openness of Drupal.

Over the past two years, between presentations, screencasts, blog posts, and providing support, the Webform module has become very robust and feature complete. Still, only experienced and advanced Drupal developers have been able to fully tap into the flexibility and openness of the Webform module.

The flexibility and openness of Drupal

Drupal's 'openness' stems from the fact that the software is Open Source; every line of code is freely shared. The Drupal community's collaborative nature does more than just 'share code': we share our ideas, failures, successes, and more. This collaboration leads to an incredible amount of flexibility. In the massive world of Content Management Systems, 'flexibility' is what makes Drupal stand apart from its competitors.

Most blog posts and promotional material about Drupal's flexibility reasonably omits the fact that Drupal has a steep learning curve. Developers new to Drupal struggle to understand entities, plugins, hooks, event subscribers, derivatives, and more until they have an ‘Aha’ moment where they realize how ridiculously flexible Drupal is.

The Webform module also has a steep learning curve

The Webform module's user experience focuses on making it easy for people to start building fairly robust forms quickly, including the ability to edit the YAML source behind a form. This gives users a starting point to understanding Drupal's render and form APIs. As soon as someone decides to peek at the Webform module's entities and plugins, they begin to see the steep learning curve that is Drupal.

Fortunately, most Webform related APIs which include entities, plugins, and theming follow existing patterns and best practices provided by Drupal core. There are some Webform-specific use cases around access controls, external libraries, and advanced form elements and composites, which require some unique solutions. These APIs and design patterns can become overwhelming making it difficult to know where to get started when it comes to customizing and extending the Webform module.

It is time to start talking about advanced webforms.

Advanced Webforms @ DrupalCon Seattle 2019

Drupal Seattle 2019 - April 8-12


DrupalCon Seattle is scheduled to take place in April 2019, and session proposals are now closed. There are a lot of changes coming for DrupalCon Seattle 2019. The most immediate one affecting presenters is that proposals were limited to either a 30-minute session or a 90-minute training. I understand this change because a well-planned and focused topic can be addressed in 30 minutes, while training someone requires more time. I'm happy to say that my proposal for a 90-minute Advanced Webforms presentation was accepted. Now I need to start putting together the session outline and materials.

Talking about advanced webforms for 90 minutes is going to be exhausting, so I promise everyone attending there will be a break.

I've decided to break this presentation into two parts. The first part is going to be an advanced demo: I'll walk through how to build an event registration system and maybe an application evaluation system. During these demos, we're going to explore some of the Webform module's more advanced features and use cases.

The second part of the presentation is going to be a walkthrough of the APIs and concepts behind the Webform module.

Topics will include…

  • Creating custom form elements

  • Posting submissions using handlers

  • Altering forms and elements

  • Leveraging APIs

  • Writing tests

  • Development tips & tricks
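As a small taste of the handler topic, here is a minimal sketch of a custom Webform handler that posts submission data to a remote endpoint. The module name, plugin id, and endpoint URL are all placeholders, and the Webform module already ships a configurable "Remote post" handler that covers this use case in production:

```php
<?php

namespace Drupal\my_module\Plugin\WebformHandler;

use Drupal\webform\Plugin\WebformHandlerBase;
use Drupal\webform\WebformSubmissionInterface;

/**
 * Posts submission data to a placeholder endpoint (illustrative sketch).
 *
 * @WebformHandler(
 *   id = "my_module_example_post",
 *   label = @Translation("Example post"),
 *   category = @Translation("External"),
 *   description = @Translation("Posts submissions to a remote endpoint."),
 * )
 */
class ExamplePostWebformHandler extends WebformHandlerBase {

  /**
   * {@inheritdoc}
   */
  public function postSave(WebformSubmissionInterface $webform_submission, $update = TRUE) {
    // https://example.com/endpoint is a placeholder, not a real service.
    \Drupal::httpClient()->post('https://example.com/endpoint', [
      'json' => $webform_submission->getData(),
    ]);
  }

}
```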

This presentation is going to require a lot of work. Luckily, I have six months to practice my presentation and work out some of the kinks.

Talking about code is hard

Fortunately, there are plenty of DrupalCamps before DrupalCon Seattle where I can rehearse my presentation. Two weeks ago, I presented the second part of my training at BADCamp and learned that scrolling through code while talking is challenging. I struggled with clicking through PhpStorm's directory trees and showing different interfaces for entities and plugins.

Being able to navigate and understand code is key to climbing Drupal's steep learning curve.

I enjoy giving live Webform demos because I really believe that physically showing people how easy it is to build a form makes them feel more comfortable with getting their hands dirty. Even making mistakes during a live demo helps people see that clicking a wrong button is just part of the process. For me, the most successful and inspiring presentations are the ones where I walk out of the room with a new appreciation of the topic and a new perspective regarding the best way to learn more about the subject matter. My live demo of code at BadCamp did not work great but now I have the opportunity to improve it and I need to…

Figure out how to show the code.

Showing code during a presentation

A few years ago, my friend and coworker, Eric Sod (esod), gave an excellent technical presentation about Drupal Console, a command-line tool used to generate boilerplate code and to interact with and debug Drupal. He figured out how to get past the challenge of simultaneously talking and showing people how to use a command-line tool: he recorded each console command. It was an inspiring presentation because he calmly stood in front of the room and explained everything that was happening in his recorded demo, without having to worry about misspelling a command.

Eric's approach to pre-recording challenging demos may be the solution that I am looking for. I am going to test it out at DrupalCamp Atlanta.

Going on the road and practicing this presentation

The biggest takeaway from this blog post should be:

The road to presenting at DrupalCon requires a lot of practice.

If your session proposal did not get accepted for this year's DrupalCon, keep on practicing. Make sure to record your presentations, even if it is a private screencast so that you can share this recording with next year's DrupalCon Minneapolis session selection committee. They want to know that you are comfortable getting up in front of a sizeable and curious audience.

There are plenty of upcoming local meetups and DrupalCamps where you can present.

Below are the DrupalCamps I am hoping to speak at before DrupalCon Seattle.

I hope to see you at a local DrupalCamp or we can catch up at DrupalCon Seattle. And if you have any suggestions on how to improve my presentation, for the next six months I’ll be working on it and I’m all ears…

If you want to know where I am speaking next, please subscribe to my blog below or follow me on Twitter.


Nov 05 2018
Nov 05

By Harold Juárez, Full Stack Developer | November 05, 2018


BADCamp 2018 was the first really big event I attended, aside from actively participating in Drupal Camp Costa Rica for three years. Kindly, some co-workers who had already attended shared their experiences with me, which raised my expectations. In addition, I was excited to sightsee in San Francisco and Berkeley.

After dedicating this year to front-end work, the BADCamp sessions left me more than satisfied, with refreshed knowledge and practices. So I would like to share my experience and the content of the sessions I participated in:

The second day was a highlight: attendees were given challenges and tools, and the discussion tables enriched my personal experience by letting me listen to others talk about ways to improve application development.


On Friday, the Pattern Lab sessions were quite interesting: we practised creating themes without relying on a backend. Although I had used this tool before, the sessions provided new knowledge to improve how we implement it at work.

The potential of React + Gatsby to create static sites was explored, and I learned compelling ways to take advantage of these new tools to improve the performance of an application, using React to render the page and Drupal as an API to supply data. This talk was presented by my co-worker Jesus in his session HOW TO KEEP DRUPAL RELEVANT IN THE GIT-BASED AND API-DRIVEN CMS ERA.


On Saturday I attended an Accessibility session that showed tools for people with different types of disabilities; some are paid and some are free to implement on a site, depending on the needs of the specific project.

Another talk that caught my attention was Artificial Intelligence in Drupal, which uses the Google Cloud Vision API to provide image tagging and face, logo and explicit-content detection through machine learning.

It was a fantastic experience and I am very grateful to weKnow for helping me attend. It was a great success that I hope to repeat in the near future!



Nov 05 2018
Nov 05

Last year, the Drupal Association decided to take a break to consolidate and not to organize an official DrupalCon Europe in 2018. Twelve community members stepped in and developed a plan to organize a DrupalCon replacement − named Drupal Europe. The final result was outstanding.

More than 1000 Drupalers from all over the world gathered in Darmstadt, Germany from 10th to 14th September 2018 to attend the biggest yearly European Drupal event. The new event brought a new concept. It featured 10 amazing tracks that guaranteed high-quality content for all of the Drupal target groups − developers, marketers and agencies. Additionally, it created more room for contribution and collaboration outside session slots.

Official Group Photo Drupal Europe Darmstadt 2018

We supported the event by giving three talks in two different tracks.

Miro Dietiker - Connecting media solutions beyond Drupal

On Tuesday, Miro Dietiker, founder of MD Systems, gave an interesting talk about integrating Digital Asset Management systems with Drupal. He compared the benefits of existing solutions and provided a practical guide to the kinds of solutions you could use to fulfill your media management needs.

More information about the session and slides can be found on while the session recording is available below.

[embedded content]

On Wednesday, Miloš Bovan held a session about the Paragraphs module and its best practices. The packed room proved that people love and use Paragraphs a lot. The talk focused on answering frequently asked questions about working with Paragraphs. It covered some new features that are not well known, as well as ideas on how to easily improve the editorial experience.

Miloš Bovan - Enrich your Paragraphs workflow with features you didn’t know about

The session summary is available at

The conference featured many interesting sessions that provided ideas and actual implementations on how to make the Paragraphs user interface better (Creating an enterprise level editorial experience for Drupal 8 using React, Front-end page composition with Geysir, Improving the Editor Experience: Paragraphs FTW). The discussions about Paragraphs continued throughout the conference and resulted in a BoF on Thursday where Drupalers discussed the future of the Paragraphs UI. We look forward to fruitful collaboration.

John Gustavo Choque Condori - Drupal PKM: A personal knowledge management Drupal distro

John Choque, our colleague, gave a talk about the personal knowledge management distribution he created as part of his bachelor thesis. The talk gathered people interested in education looking for ideas on how to improve knowledge management in their organizations. The session summary as well as slides are available at

[embedded content]

Social activities

Besides sessions, contributions and coding, we enjoyed attending the social events as well. On Tuesday, the organizers brought participants to the Bavarian Beer Garden to taste traditional German bratwurst and beers. The place was located in the park, and there was no chance of missing this great event.

On Wednesday, we joined our fellow Drupalers from the Swiss community for a dinner. It is always nice to catch up with local people in an international event such as Drupal Europe.

As usual, the event came to an end with traditional and yet always exciting Trivia Night.

What’s next?

During Drupal Europe, the project founder, Dries Buytaert, announced that the Drupal Association had signed an agreement with Kuoni, an international company specialized in event organization. As a result, the official DrupalCon Europe event returns in 2019, and it is going to happen in the capital of The Netherlands − Amsterdam.

We hope to see you all there!

Nov 03 2018
Nov 03

Team AdWeb has worked for a distinctive list of industries, ranging from hospitality to technology and from retail to an online lottery purchase system. Yes, we recently collaborated with a Japan-based company to build their website with a lottery purchase system, using Drupal 8. We've been Drupal-ing since even before our inception and have been an active member of the Drupal community, globally. Our association and experience with Drupal were the base of the client's immense faith in us, and we knew that we were going to stand true to it.

About the Project
The client's requirement was to build a website in Drupal 8 − basically an online lottery purchase system. Due to confidentiality reasons, we cannot share the name of the company/client, but we would like to share that the experience of working on this project was new and enriching.


Major Features/Functionalities
We personally love experimenting with and implementing innovative features to enhance a client's website. Plus, we get a little more excited when it's a Drupal 8 website. We integrated a host of futuristic features into this website too. But since it's an online lottery purchase system, we knew that the integration of the payment gateway was going to be an integral part. Hence, we created three types of payment gateway, as follows:


  • GMO Payment

  • Coins Payment

  • WebMoney Payment


The user is an integral part of this entire online lottery system, and hence several functionalities are crafted around them. For example, a user can purchase coins via the WebMoney payment method and can also buy a lottery ticket by choosing any product bundle. A user also has the option to select the quantity of the product or go for the complete set. The payment for either can be made with coins, a GMO credit card or points.

A draw system is used to select the lottery winner. Besides the lottery prize, the user also stands a chance to win the Kiriban Product: an additional product, defined by an admin user, that is based on the product bundle configuration.

The Problem

Any e-commerce website will have multiple users buying the same product. The backend must therefore update the remaining quantity of the product after each purchase. Issues occur when two or more users place an order at the same time − the classic problem of concurrent shopping. In this case, the lottery was open only for a specific window of time, and the issue showed up as stale quantity counts. The problem came to our notice when the site went live and around 7-8 users made transactions at the same moment. We immediately started working on the issue.

Challenges Faced:

We quickly picked up the problem and started searching for a resolution. We had built e-commerce websites several times before, so we tried multiple methods to resolve the issue, listed below, but none of them worked in this particular case.

  • Initially, we tried using a Drupal lock to resolve the issue, but in vain.

  • We, later on, used the MySQL lock, but this too didn't work, due to the involvement of multiple quantities inside a for loop.

  • Using a random sleep time also did not work, because it produced nearby values rather than distinct ones.

Though the random sleep time method did not work on its own, it gave birth to the final resolution. We made a minor modification to it: we divided the sleep time into ranges of 3 and, to avoid the possibility of any further clash, adopted a table of 100 numbers.

The Final Resolution:

After trying out a handful of methods, we finally came up with one that worked in our favor. Here are the steps that finally addressed the concurrent shopping problem we faced:

  • A table consisting of 1 to 100 numbers was taken, with the sleep time by a range of 3.

  • Later, a random number was picked and a flag value for the same was set

  • Then, a greater number from those numbers with the range of 3 was picked

Below is the table that was created to bring out the final solution:

How it works:

  • At the beginning of the transaction, the max sleep_time will be checked where flag=1

  • The sleep_time for the first user will be 0

  • After this, a random number from max sleep_time is selected with a range of 3

  • The first user’s range is 1-3

  • In the case of the second user, one number is skipped after the current max, and the range starts after that number

  • If a user gets the max sleep_time of 3, then the range for the next random number will be 5-7

  • If the second user gets the random number as 6 then the random number range for the third user will be 8-10

  • The flag value will be updated as 1 for this random number

  • In the end, the flag value of the transaction will be updated with 0
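The steps above can be sketched as a small simulation. This is a hedged illustration only − the real implementation lived in the site's database layer and its exact table and column names are not given, so everything here is invented:

```python
import random

# Hypothetical sketch of the slot-allocation scheme described above:
# a table of numbers 1..100 where flag=1 marks a sleep_time in use.
class SleepSlotTable:
    RANGE_WIDTH = 3  # each user picks a random number within a range of 3

    def __init__(self, size=100):
        self.flags = {n: 0 for n in range(1, size + 1)}

    def acquire(self):
        """Pick this transaction's sleep_time slot and flag it as taken."""
        taken = [n for n, flag in self.flags.items() if flag == 1]
        if not taken:
            start = 1  # the first user's range is 1-3
        else:
            start = max(taken) + 2  # skip one number after the current max
        pick = random.randint(start, start + self.RANGE_WIDTH - 1)
        self.flags[pick] = 1
        return pick

    def release(self, slot):
        """At the end of the transaction, reset the flag to 0."""
        self.flags[slot] = 0

table = SleepSlotTable()
first = table.acquire()   # somewhere in 1-3
second = table.acquire()  # e.g. if first is 3, somewhere in 5-7
table.release(first)
```

Because each new transaction's range starts past the highest flagged number, two concurrent transactions never sleep for the same duration, which is what staggers the quantity updates.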

The Final Say:

“All is well that ends well.” And that's exactly what we have to say about this particular project. Though we had coded and created many e-commerce websites before, this was the first time we picked up a project to build a Drupal 8 website with an online lottery system. And believe us, it was a monumental success for us and a satisfying project for the client.

Nov 03 2018
Nov 03

A machine learning model that could lead a driver directly to an empty parking spot fetched the second prize in the Graduate level: MS category at the 2018 Science and Technology Open House Competition. Dreams of computer systems with godlike powers and the wisdom to use them are no longer just a theological construct but a technological possibility. And sci-fi éminence grise Arthur C. Clarke rightfully remarked that “any sufficiently advanced technology is indistinguishable from magic.”


Artificial Intelligence (AI) may be the buzzword of our times, but Machine Learning (ML) is really the brass tacks. Machine learning has made great inroads into different areas. It is capable of looking at pictures of biopsies and picking out possible cancers. It can be taught to predict the outcome of legal cases, write press releases and even compose music! However, the sci-fi future where machine learning beats a human in every conceivable department and is perpetually learning isn't a reality yet. So, how does machine learning fit into the world of a content management system like Drupal? Before finding that out, let's go back to a time when computers did not even exist.

Machine learning predates computers! 

In this day and age, self-driving cars, voice-activated assistants and social media feeds are some of the tools powered by machine learning. Compilations made by the BBC and Forbes show that machine learning has a long timeline that relies on mathematics from hundreds of years ago and the elephantine developments in computing over the years.


Mathematical innovations like Bayes’ Theorem (1812), Least Squares method for data fitting (1805) and Markov Chains (1913) laid the foundation for modern machine learning concept. 

In the late 1940s, stored-program computers like Manchester Small-Scale Experimental Machine (1948) came into the picture. Through the 1950s and 1960s, several influential discoveries were made like the ‘Turing Test’, first computer learning program, first neural network for computers and the ‘nearest neighbour’ algorithm. In the nineties, IBM’s Deep Blue beat the world chess champion.

Post-millennium, technology giants like Google, Amazon, Microsoft, IBM and Facebook are actively working on more advanced machine learning models. Proof of this is AlphaGo, developed by Google DeepMind, which beat a professional player at Go − a game considered more intricate than chess!

Discovering Machine Learning


Machine learning is a form of AI that allows a system to learn from data rather than through explicit programming. It is not a simple process. As the algorithms ingest training data, they can produce increasingly accurate models based on that data.

Advanced machine learning algorithms are composed of many technologies (such as deep learning, neural networks and natural-language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information. - Gartner

When you train your machine learning algorithm with data, the output generated is the machine learning model. After training, when you provide an input to the model, it gives you an output. For instance, a predictive algorithm builds a predictive model; when the predictive model is then provided with data, you receive a prediction based on the data that trained the model.
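As a toy illustration of that train-then-predict loop (not code from any project mentioned here), a one-variable least-squares fit − the 1805 method from the timeline above − can serve as the "training" step:

```python
# Toy illustration: training produces a model, and the model then
# turns new inputs into predictions.
def train(points):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b  # the "model" is just these learned parameters

def predict(model, x):
    a, b = model
    return a * x + b

model = train([(1, 2), (2, 4), (3, 6)])
print(predict(model, 4))  # → 8.0
```

The learned parameters are the model; prediction is just applying them to data the model has never seen.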

Difference between AI and machine learning

[Diagram: machine learning as one subset of artificial intelligence − Source: IBM]

Machine learning may have relished a massive success of late but it is just one of the approaches for achieving artificial intelligence.
Forrester defines artificial intelligence as “the theory and capabilities that strive to mimic human intelligence through experience and learning”. AI systems generally demonstrate traits like planning, learning, reasoning, problem solving, knowledge representation, social intelligence and creativity, among others.
Alongside machine learning, there are numerous other approaches used to build AI systems such as evolutionary computation, expert systems etc.

Categories of machine learning

Machine learning is generally divided into the following categories:

  • Supervised learning: It typically begins with an established set of data and a certain understanding of how that data is classified, and it intends to find patterns in the data that can be applied to an analytics process.
  • Unsupervised learning: It is used when the problem needs a large amount of unlabeled data.
  • Reinforcement learning: It is a behavioural learning model. The algorithm receives feedback from the data analysis thereby guiding the user to the best outcome.
  • Deep learning: It incorporates neural networks in successive layers for learning the data in an iterative manner.
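A minimal example of the supervised case − the 'nearest neighbour' algorithm from the timeline above, with invented sample data:

```python
import math

# A 1-nearest-neighbour classifier: label a new point with the label
# of the closest example in the labelled training set.
def nearest_neighbour(train_data, point):
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(train_data, key=lambda item: dist(item[0], point))[1]

labelled = [
    ((1.0, 1.0), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.0), "large"),
]
print(nearest_neighbour(labelled, (2.0, 1.5)))  # → small
```

The "established set of data with a certain understanding of its classification" is the labelled list; finding patterns here simply means measuring distance to the known examples.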

Why is machine learning accelerating?

[Diagram: the intelligence capability pyramid − Source: The Apttus Intelligence Capability Model]

Today, the majority of enterprises use descriptive analytics, which is necessary for efficient management but not sufficient to enhance business performance. For businesses to reach a higher level of responsiveness, they need to move beyond descriptive analytics and up the intelligence capability pyramid. This is where machine learning plays a key role.


Machine learning is not a new technique but the interest in the field has grown multifold in recent years. For enterprises, machine learning has the ability to scale across a broad range of businesses like manufacturing, financial services, healthcare, retail, travel and many others.

[Diagram: machine learning applications across industries − Source: Tata Consultancy Services]

Business processes directly related to revenue-making are among the most-valued applications like sales, contract management, customer service, finance, legal, quality, pricing and order fulfilment.
Exponential data growth − with unstructured data like social media posts, connected-device sensor data, competitor and partner pricing, and supply chain tracking data, among others − is one of the reasons why adoption rates of machine learning have skyrocketed.
The Internet of Things (IoT) networks, connected devices and embedded systems are generating real-time data which is great for optimising supply chain networks and increasing demand forecast precision.
Another reason machine learning is succeeding is its ability to generate massive data sets through synthetic means, like extrapolating and projecting existing historical data to develop realistic simulated data.
Moreover, the economics of safe and secure digital storage and cloud computing are combining to put infrastructure costs into free fall, making machine learning more cost effective for all enterprises.

Machine Learning for Drupal

A session at DrupalCon Baltimore 2017 had a presentation that was useful for machine learning enthusiasts and required no coding experience. It showed how to look at data through the eyes of a machine learning engineer.

It also leveraged deep learning and site content to give Drupal superpowers by making use of the same technology that is exploding at Facebook, Google and Amazon.

[embedded content]

The demonstration focused on mining Drupal content as the fuel for deep learning. It showed when to use existing ML models or services and when to build your own, how to deploy ML models, and how to use them in production. It showcased free pre-built models and paid services from Amazon, IBM, Microsoft, Google and others.

A drag-and-drop interface was used for creating, training and deploying a simple ML model to the cloud with the help of the Microsoft Azure ML API. The Google Speech API was used to turn spoken audio content into text content for use with chatbots and virtual assistants. The Watson REST API was leveraged to perform sentiment analysis. The Google Vision API module was used so that uploaded images gain face, logo and object detection. And Microsoft's ML API was leveraged to automatically build summaries from node content.

Another session at DrupalCon Baltimore 2017 showed how to personalise web content experiences on the basis of subtle elements of a person’s digital persona.

[embedded content]

Standard personalisation approaches recommend content on the basis of a person’s profile or the past activity. For instance, if a person is searching for a gym bag, something like this works - “Here are some more gym bags”. Or if he or she is reading about movie reviews, this would work - “Maybe you would like this review of the recently released movie”.

But the demonstration shown at this session had more advanced aims. It exhibited Deep Feeling, a proof-of-concept project that utilises machine learning techniques to make better recommendations to users. The proof of concept recommended travel experiences based on the kinds of things a person shares, with the help of the Acquia Lift service and Drupal 8.

Using the Instagram API to access a person's stream of consciousness, the demo filtered their feed through a computer-vision API to detect and learn subtle themes about the person's preferences. Once a notion of what sort of experiences the person thinks are worth sharing was established, the person's characteristics were matched against their own databases.

Another presentation held at Bay Area Drupal Camp 2018 explored how the CMS and Drupal Community can put machine learning into practice by leveraging a Drupal module, taxonomy system and Google’s Natural Language Processing API.

[embedded content]

Natural language processing concepts like sentiment analysis, entity analysis, topic segmentation, language identification among others were discussed. Numerous natural language processing API alternatives were compared like Google’s natural language processing API, TextRazor, Amazon Comprehend and open source solutions like Datamuse.

It explored use cases by assessing and automatically categorising news articles using Drupal’s taxonomy system. Those categories were merged with the sentiment analysis in order to make a recommendation system for a hypothetical news audience.
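A hedged sketch of that idea (all field names and sample data invented here): combine a taxonomy category with a sentiment score to rank articles for a reader.

```python
# Hypothetical recommender: filter articles by a reader's preferred
# categories, then rank the remainder by sentiment score (-1.0 to 1.0).
def recommend(articles, preferred_categories, min_sentiment=0.0, top_n=3):
    candidates = [
        a for a in articles
        if a["category"] in preferred_categories
        and a["sentiment"] >= min_sentiment
    ]
    ranked = sorted(candidates, key=lambda a: a["sentiment"], reverse=True)
    return [a["title"] for a in ranked[:top_n]]

articles = [
    {"title": "Local team wins", "category": "sports", "sentiment": 0.8},
    {"title": "Market dips", "category": "finance", "sentiment": -0.4},
    {"title": "New stadium opens", "category": "sports", "sentiment": 0.5},
]
print(recommend(articles, {"sports"}))  # → ['Local team wins', 'New stadium opens']
```

In the session's setup, the category would come from Drupal's taxonomy system and the sentiment score from an NLP API; here both are hard-coded for illustration.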

Future of Machine learning

A report on Markets and Markets states that the machine learning market size will grow from USD 1.41 Billion in 2017 to USD 8.81 Billion by 2022 at a Compound Annual Growth Rate (CAGR) of 44.1%.

The report further states that the major driving factors for the global machine learning market are technological advancement and the proliferation of data generation. Moreover, increasing demand for intelligent business processes and the growing adoption rates of modern applications are expected to offer opportunities for further growth.

Some of the near-term predictions are:

  • Most applications will include machine learning. In a few years, machine learning will become part of almost every other software applications with engineers embedding these capabilities directly into our devices.
  • Machine learning as a service (MLaaS) will be a commonplace. More businesses will start using the cloud to offer MLaaS and take advantage of machine learning without making huge hardware investments or training their own algorithms.
  • Computers will get good at talking like humans. As technology gets better and better, solutions such as IBM Watson Assistant will learn to communicate endlessly without using code.
  • Algorithms will perpetually retrain. In the near future, more ML systems will connect to the internet and constantly retrain on the most relevant information.
  • Specialised hardware will deliver performance breakthroughs. GPUs (Graphics Processing Units) are advantageous for running ML algorithms as they have a large number of simple cores. AI experts are also leveraging Field-Programmable Gate Arrays (FPGAs) which, at times, can even outclass GPUs.


Computers someday ruling us by gaining a superabundance of intelligence is not a likely outcome. It remains a possibility, though, which is why it is widely debated whenever artificial intelligence and machine learning are discussed.
On the brighter side, machine learning has plenty of scope for making our lives better, with its tremendous capability of providing unprecedented insights into different matters. And when Drupal and machine learning come together, it is even more exciting, as it results in awesome web experiences.

Opensense Labs always strives to fulfil digital transformation endeavours of our partners with a suite of services.

Contact us at [email protected] to know how machine learning can be put to great use in your Drupal web application.

Nov 03 2018
Nov 03

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments; one problem that's bothered me lately is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.

Before diving into the details of how I'm working to get my Drupal-in-K8s instances faster to start, I wanted to discuss one of the primary optimizations, opcache...

Measuring opcache's impact

I first wanted to see how fast page loads were when they used PHP's opcache (which basically stores an optimized copy of all the PHP code that runs Drupal in memory, so individual requests don't have to read in all the PHP files and compile them on every request).

  1. On a fresh Acquia BLT installation running in Drupal VM, I uninstalled the Internal Dynamic Page Cache and Internal Page Cache modules.
  2. I also copied the codebase from the shared NFS directory /var/www/[mysite] into /var/www/localsite and updated Apache's virtualhost to point to the local directory (/var/www/[mysite] is, by default, an NFS shared mount to the host machine) to eliminate NFS filesystem variability from the testing.
  3. In Drupal VM, run the command while true; do echo 1 > /proc/sys/vm/drop_caches; sleep 1; done to effectively disable the linux filesystem cache (keep this running in the background while you run all these tests).
  4. I logged into the site in my browser (using drush uli to get a user 1 login), and grabbed the session cookie, then stored that as export cookie="KEY=VALUE" in my Terminal session.
  5. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times to warm up the PHP caches and see page load times for a quick baseline.
  6. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules (requires apachebench to be installed).

At this point, I could see that with PHP's opcache enabled and Drupal's page caches disabled, the page loads took on average 688 ms. A caching proxy and/or requesting cached pages as an anonymous user would dramatically improve that (the anonymous user/login page takes 160 ms in this test setup), but for a heavy PHP application like Drupal, < 700 ms to load every code path on the filesystem and deliver a generated page is not bad.

Next, I set opcache.enable=0 (was 1) in the configuration file /etc/php/7.1/fpm/conf.d/10-opcache.ini, restarted PHP-FPM (sudo systemctl restart php7.1-fpm), and confirmed in Drupal's status report page that opcache was disabled (Drupal shows a warning if opcache is disabled). Then I ran another set of tests:

  1. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times.
  2. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules

With opcache disabled, average page load time was up to 1464 ms. So in comparison:

Opcache status | Average page load time | Difference
Enabled        | 688 ms                 | baseline
Disabled       | 1464 ms                | 776 ms (72%, or 2.1x slower)

Note: Exact timings are unimportant in this comparison; the delta between different scenarios what's important. Always run benchmarks on your own systems for the most accurate results.

Going further - simulating real-world disk I/O in VirtualBox

So, now that we know a fresh Drupal page load is more than twice as slow as one with the code precompiled in opcache, what if the disk access were slower? I'm running these tests on a 2016 MacBook Pro with an insanely-fast local NVMe drive, which can pump through many gigabytes per second sequentially, or hundreds of megabytes per second random access. Most cloud servers have disk I/O which is much more limited, even if they say they are 'SSD-backed' on the tin.

Since Drupal VM uses VirtualBox, I can limit the VM's disk bandwidth using the VBoxManage CLI (see Limiting bandwidth for disk images):

# Stop Drupal VM.
vagrant halt

# Add a disk bandwidth limit to the VM, 5 MB/sec.
VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" add Limit --type disk --limit 5M

# Get the name of the disk image (vmdk) corresponding to the VM.
VBoxManage list hdds

# Apply the limit to the VM's disk.
VBoxManage storageattach "VirtualBox-VM-Name-Here" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium "full-path-to-vmdk-from-above-command" --bandwidthgroup Limit

# Start Drupal VM.
vagrant up

# (You can update the limit in real time once the VM's running with the command below)
# VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" set Limit --limit 800K

I re-ran the tests above, and the average page load time was now 2171 ms. Adding that to the test results above, we get:

Opcache status      | Average page load time | Difference
Enabled             | 688 ms                 | baseline
Disabled            | 1464 ms                | 776 ms (72%, or 2.1x slower)
Disabled (slow I/O) | 2171 ms                | 1483 ms (104%, or 3.2x slower)

Not every cloud VM has disk I/O that slow... but I've seen many situations where I/O gets severely limited, especially in cases where you have multiple volumes mounted per VM (e.g. maximum EC2 instance EBS bandwidth per instance) and they're all getting hit pretty hard. So it's good to test for these kinds of worst-case scenarios. In fact, last year I found that a hard outage was caused by an EFS volume hitting a burst throughput limit, with bandwidth dropping to 100 Kbps. This caused so many issues that I had to architect around that potential issue to prevent it from happening in the future.

The point is, if you need fast PHP startup times, slow disk IO can be a very real problem. This could be especially troublesome if trying to run Drupal in environments like Lambda or other Serverless environments, where disk I/O is usually the lowest priority—especially if you choose to allocate a smaller portion of memory to your function! Cutting down the initial request compile time could be immensely helpful for serverless, microservices, etc.

Finding the largest bottlenecks

Now that we know the delta for opcache vs. not-opcache, and vs. not-opcache on a very slow disk, it's important to realize that compilation is just one in a series of many different operations which occurs when you start up a new Drupal container:

  • If using Kubernetes, the container image might need to be pulled (therefore network bandwidth and image size may have a great effect on startup time)
  • The amount of time Docker spends allocating resources for the new container, creating volume mounts (e.g. for a shared files directory) can differ depending on system resources
  • The latency between the container and the database (whether in a container or in some external system like Amazon RDS or Aurora) can cause tens or even hundreds of ms of time during startup

However, at least in this particular site's case—assuming the container image is already pulled on the node where the new container is being started—the time spent reading in code into the opcache is by far the longest amount of time (~700ms) spent waiting for a fresh Drupal Docker container to serve its first web request.
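If you want to watch this opcache population on your own container, PHP's built-in opcache_get_status() function reports how many scripts have been compiled and how much memory they occupy. Here's a small diagnostic sketch (not from the original benchmarks); note that opcache is usually inactive for the CLI SAPI, so serve this through the same web server you're measuring:

```php
<?php

// Diagnostic sketch: inspect the opcache after the first few requests.
// opcache is typically disabled for the CLI SAPI, so serve this file
// through the web server you are benchmarking.
if (!function_exists('opcache_get_status') || ($status = opcache_get_status(FALSE)) === FALSE) {
  echo "Opcache is not enabled for this SAPI.\n";
}
else {
  $stats = $status['opcache_statistics'];
  printf("Cached scripts: %d\n", $stats['num_cached_scripts']);
  printf("Cache hits: %d, misses: %d\n", $stats['hits'], $stats['misses']);
  printf("Memory used: %.1f MB\n", $status['memory_usage']['used_memory'] / 1048576);
}
```

Watching "Cached scripts" climb on the first request makes the warm-up cost described above very visible.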

Can you precompile Drupal for faster startup?

Well... not really, at least not with any reasonable sanity, currently.

But there is hope on the horizon: There's a possibility PHP 7.4 could add a cool new feature, Preloading! You can read the gory details in the RFC link, but the gist of it is: when you are building your container image, you could precompile all of your application code (or at least the hot code paths) so when the container starts, it only takes a couple ms instead of hundreds of ms to get your application's code compiled into opcache.
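To make that concrete, here's a rough sketch of what a preload script might look like if the RFC lands as proposed. The opcache.preload directive is still only proposed, and the file list below is purely illustrative; opcache_compile_file() itself already exists today:

```php
<?php

// preload.php: hypothetical preload script per the PHP 7.4 Preloading RFC.
// It would be referenced from php.ini with the proposed directive:
//   opcache.preload=/var/www/preload.php
// and run once at server startup, compiling files into opcache up front.

// Illustrative list of "hot path" files; a real script might walk the
// vendor/ and core/ directories instead.
$hot_paths = [
  '/var/www/vendor/autoload.php',
  '/var/www/docroot/core/lib/Drupal.php',
];

foreach ($hot_paths as $file) {
  if (function_exists('opcache_compile_file') && file_exists($file)) {
    // opcache_compile_file() exists today; the RFC reuses it for preloading.
    opcache_compile_file($file);
  }
}
```

The win is that this work happens once at server start rather than during the first user-facing request.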

We'll see if this RFC gets some uptake; in the meantime, there's not really much you can do to mitigate the opcache warming problem.


With Preloading, we might be able to pre-compile our PHP applications—notably beefy ones like Drupal or Magento—so they can start up much more quickly in lightweight environments like Kubernetes clusters, Lambda functions, and production-ready docker containers. Until that time, if it's important to have Drupal serve its first request as quickly as possible, consider finding ways to trim your codebase so it doesn't take half a second (or longer) to compile into the opcache!

Nov 02 2018
Nov 02

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Configuration management is an important feature of any modern content management system. Those following modern development best-practices use a development workflow that involves some sort of development and staging environment that is separate from the production environment.

Configuration management example

Given such a development workflow, you need to push configuration changes from development to production (similar to how you need to push code or content between environments). Drupal's configuration management system helps you do that in a powerful yet elegant way.

Since I announced the original Configuration Management Initiative over seven years ago, we've developed and shipped a strong configuration management API in Drupal 8. Drupal 8's configuration management system is a huge step forward from where we were in Drupal 7, and a much more robust solution than what is offered by many of our competitors.

All configuration in a Drupal 8 site — from one-off settings such as site name to content types and field definitions — can be seamlessly moved between environments, allowing for quick and easy deployment between development, staging and production environments.

However, now that we have a couple of years of building Drupal 8 sites behind us, various limitations have surfaced. While these limitations usually have solutions via contributed modules, it has become clear that we would benefit from extending Drupal core's built-in configuration management APIs. This way, we can establish best practices and standard approaches that work for all.

Configuration management initiative

The four different focus areas for Drupal 8. The configuration management initiative is part of the 'Improve Drupal for developers' track.

I first talked about this need in my DrupalCon Nashville keynote, where I announced the Configuration Management 2.0 initiative. The goal of this initiative is to extend Drupal's built-in configuration management so we can support more common workflows out-of-the-box without the need for contributed modules.

What is an example workflow that is not currently supported out-of-the-box? Support for different configurations by environment. This is a valuable use case because some settings are undesirable to have enabled in all environments. For example, you most likely don't want to enable debugging tools in production.
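As a sketch of the current state of affairs: without core support, this is often handled by hand in a settings.local.php file that exists only in development environments. The specific overrides below are just common examples, not part of the initiative itself:

```php
<?php

// settings.local.php: development-only overrides that are never deployed
// to production. Typically included conditionally from settings.php.

// Surface all errors while developing.
$config['system.logging']['error_level'] = 'verbose';

// Disable CSS/JS aggregation so front-end changes show up immediately.
$config['system.performance']['css']['preprocess'] = FALSE;
$config['system.performance']['js']['preprocess'] = FALSE;
```

The initiative aims to replace this kind of hand-rolled per-environment wiring with a standard, core-supported mechanism.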

Configuration management example

The contributed module Config Filter extends Drupal core's built-in configuration management capabilities by providing an API to support different workflows which filter out or transform certain configuration changes as they are being pushed to production. Config Split, another contributed module, builds on top of Config Filter to allow for differences in configuration between various environments.

The Config Split module's use case is just one example of how we can improve Drupal's out-of-the-box configuration management capabilities. The community created a longer list of pain points and advanced use cases for the configuration management system.

While the initiative team is working on executing on these long-term improvements, they are also focused on delivering incremental improvements with each new version of Drupal 8, and have distilled the most high-priority items into a configuration management roadmap.

  • In Drupal 8.6, we added support for creating new sites from existing configuration. This enables developers to launch a development site that matches a production site's configuration with just a few clicks.
  • For Drupal 8.7, we're planning on shipping an experimental module for dealing with environment specific configuration, moving the capabilities of Config Filter and the basic capabilities of Config Split to Drupal core through the addition of a Configuration Transformer API.
  • For Drupal 8.8, the focus is on supporting configuration updates across different sites. We want to allow both sites and distributions to package configuration (similar to the well-known Features module) so they can easily be deployed across other sites.

How to get involved

There are many opportunities to contribute to this initiative and we'd love your help.

If you would like to get involved, check out the Configuration Management 2.0 project and various Drupal core issues tagged as "CMI 2.0 candidate".

Special thanks to Fabian Bircher (Nuvole), Jeff Beeman (Acquia), Angela Byron (Acquia), ASH (Acquia), and Alex Pott (Thunder) for contributions to this blog post.

Nov 02 2018
Nov 02

I am currently building a Drupal 8 application which runs outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to from working on Acquia Cloud which don't work if you aren't inside an Acquia or Pantheon environment; most notably, the automatic Configuration Split settings choice (for environments like local, dev, and prod) doesn't work if you're in a custom hosting environment.

You have to basically reset the settings BLT provides, and tell Drupal which config split should be active based on your own logic. In my case, I have a site which only has a local, ci, and prod environment. To override the settings defined in BLT's included config.settings.php file, I created a config.settings.php file in my site in the path docroot/sites/settings/config.settings.php, and I put in the following contents:

<?php

/**
 * @file
 * Settings overrides for configuration management.
 */

// Note: $split_envs, $split_filename_prefix, $is_local_env and $is_ci_env
// are all defined earlier by BLT's included settings files.

// Disable all splits which may have been enabled by BLT's configuration.
foreach ($split_envs as $split_env) {
  $config["$split_filename_prefix.$split_env"]['status'] = FALSE;
}

$split = 'none';

// Local env.
if ($is_local_env) {
  $split = 'local';
}

// CI env.
if ($is_ci_env) {
  $split = 'ci';
}

// Prod env.
if (getenv('K8S_ENVIRONMENT') == 'prod') {
  $split = 'prod';
}

// Enable the environment split only if it exists.
if ($split != 'none') {
  $config["$split_filename_prefix.$split"]['status'] = TRUE;
}

The K8S_ENVIRONMENT refers to an environment variable I have set up in the production Kubernetes cluster where the BLT Drupal 8 codebase is running. There are a few other little tweaks I've made to make this BLT project build and run inside a Kubernetes cluster, but I'll leave those for another blog post and another day :)

Nov 02 2018
Nov 02

What's your favorite tool for creating content layouts in Drupal? Paragraphs, Display Suite, Panelizer or maybe Panels? Or CKEditor styles & templates? How about the much talked about and yet still experimental Drupal 8 Layout Builder module?

Have you “played” with it yet?

As Drupal site builders, we all agree that a good page layout builder should be:

  1. flexible; it should empower you to easily and fully customize every single node/content item on your website (not just blocks)
  2. intuitive, super easy to use (unlike "Paragraphs", for instance, where building a complex "layout", then attempting to move something within it, turns into a major challenge)

And it's precisely these 2 features that are the key goals of the Layout Initiative for Drupal:

To turn the resulting module into that user-friendly, powerful and empowering page builder that all Drupal site builders had been expecting.

Now, let's see how the module manages to “check” these must-have strengths off the list. And why it revolutionizes the way we put together pages, how we create, customize and further edit layouts.

How we build websites in Drupal...

1. The Context: A Good Page Builder Was (Desperately) Needed in Drupal

It had been a shared opinion in the open source community:

A good page builder was needed in Drupal.

For, even if we had a toolbox full of content layout creation tools, none of them was “the One”. That flexible, easy to use, “all-features-in-one” website builder that would enable us to:

  • build complex pages, carrying a lot of mixed content, quick and easy (with no coding expertise)
  • fully customize every little content item on our websites and not just entire blocks of content site-wide
  • easily edit each content layout by dragging and dropping images, video content, multiple columns of text and so on, the way we want to

Therefore, the Drupal 8 Layout Builder module was launched! And it's been moved to core upon the release of Drupal 8.6.

Although it still wears its “experimental, do not use on production sites!” type of “warning tag”, the module has already leveled up from an “alpha” to a more “beta” phase.

With a more stable architecture now, in Drupal 8.6, significant improvements and a highly intuitive UI (combined with Drupal's well-known content management features) it stands all the chances to turn into a powerful website builder.

That great page builder that the whole Drupal community had been “craving” for.

2. The Drupal 8 Layout Builder Module: Quick Overview

First of all, we should get one thing straight:

The Drupal 8.6 Layout Builder module is Panelizer in core!

What does it do?

It enables you, the Drupal site builder, to configure layouts on different sections on your website.

From selecting a predefined layout to adding new blocks, managing the display, swapping the content elements and so on, creating content layouts in Drupal is as (fun and) intuitive as putting Lego pieces together.

Also, the “content hierarchy” is more than logical:

  • you have multiple content sections
  • you get to choose a predefined layout or a custom-design one for each section
  • you can place your blocks of choice (field blocks, custom blocks) within that selected layout

Note: moving blocks from one section to another is unexpectedly easy when using Layout Builder!

3. Configuring the Layout of a Content Type on Your Website

Now, let's imagine the Drupal 8 Layout Module “in action”.

But first, I should point out that there are 2 ways that you could use it:

  1. to create and edit a layout for every content type on your Drupal website
  2. to create and edit a layout for specific, individual nodes/pieces of content

It's the first use case of the module that we'll focus on for the moment.

So, first things first: in order to use it, there are some modules that you should enable — Layout Builder and Layout Discovery. Also, remember to install the Layout Library, as well!

Next, let's delve into the steps required for configuring your content type's (“Article”, let's say) display:

  • go to Admin > Structure > Content types > Article > Manage Display
  • hit the “Manage layout” button

… and you'll instantly access the layout page for the content type in question (in our case, “Article”).

It's there that you can configure your content type's layout, which is made of:

  • sections of content (display in 1,2, 3... columns and other content elements)
  • display blocks: tabs, page title...
  • fields: tags, body, title

While you're on that screen... get as creative as you want:

  • choose a predefined layout for your section —  “Add section” —  from the Settings tab opening up on the right side of the screen
  • add some blocks —  “Add block”; you'll then notice the “Configure” and “Remove” options “neighboring” each block
  • drag and drop the layout elements, arranging them to your liking; then you can click on either “Save Layout” or “Cancel Layout” to save or cancel your layout configuration

And since we're highly visual creatures, you may want to have a look at this Drupal 8 Layout Builder tutorial, made by Lee Rowlands, one of the core contributors.

In short: this page builder tool enables you to customize the layout of your content to your liking. Put together multiple sections — each one with its own different layout —  and build website pages, carrying mixed content and multiple layouts, that fit your design requirements exactly.

4. Configuring and Fully Customizing the Layout of a Specific Node...

This second use case of the Drupal 8 Layout Builder module makes it perfect for building landing pages.

Now, here's how you use it for customizing a single content item:

  • go to Structure>Content types (choose a specific content type)
  • click “Manage display” on the drop-down menu 
  • then click the “Allow each content item to have its layout customized” checkbox
  • and hit “Save”

Next, just:

  • click the “Content” tab in your admin panel
  • choose that particular article that you'd like to customize
  • click the “Layout” tab

… and you'll then access the very same layout builder UI.

The only difference is that now you're about to customize the display of one particular article only.

Note: basically, each piece of content has its own “Layout” tab that allows you to add sections, to choose layouts. 

Each content item becomes fully customizable when using Drupal 8 Layout Builder.

5. The Drupal 8.6 Layout Builder vs Paragraphs

“Why not do everything in Paragraphs?” has been the shared opinion in the Drupal community for a long time.

And yet, since the Layout Builder tool was launched, the Paragraphs “supremacy” has started to lose ground. Here's why:

  • the Layout builder enables you to customize every fieldable entity's layout
  • it makes combining multiple sections of content on a page and moving blocks around as easy as... moving around Lego pieces 

Now, just try to move... anything within a complex layout using Paragraphs:

  • you'll either need to keep your fingers crossed so that everything lands in the right place once you've dragged and dropped your blocks
  • or... rebuild the whole page layout from scratch

The END!

What do you think:

Does Drupal 8 Layout Builder stand a chance to compete with WordPress' popular page builders?

To “dethrone” Paragraphs and become THAT page layout builder that we've all been waiting for?

Or do you think there's still plenty of work ahead to turn it into that content layout builder we've all been looking forward to?

Nov 02 2018
Nov 02

You Can’t Put a Price Tag on Visibility, Creditability, and Collegiality

“pink pig” by Fabian Blank on Unsplash

Organizing a DrupalCamp takes a lot of commitment from volunteers, so when someone gets motivated to help organize these events, the financial risks can be quite alarming and sometimes overwhelming. But forget all that mess, you are a Drupal enthusiast and have drummed up the courage to volunteer with the organization of your local DrupalCamp. During your first meeting, you find out that there are no free college or community spaces in the area and the estimated price tag is $25,000. Holy Batman that is a lot of money!

Naturally, you start thinking about how you are going to cover that price tag, so you immediately ask, “how many people usually attend?” Well, unless you are one of the big 5 (BADCamp, NYCCamp, GovCon, MidCamp or FloridaCamp), we average between 100 and 200 people. Then you ask, “how much can we charge?” You are then told that we cannot charge more than $50, because camps are supposed to be affordable for the local community, and that has been the culture of most DrupalCamps.

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00pm (EST)? RSVP Here.

If Drupal is the Enterprise solution, why are all of our camps priced and sponsored like we are still hobbyists in 2002?

Why Don’t We Treat DrupalCamps Like It’s the Enterprise Solution?

Drupal is the Enterprise solution. Drupal has forgotten about the hobbyist and is only concerned about large-scale projects. Drupal developers and companies make more per hour than Wordpress developers. These are all things I have heard from people within the community. So if any of these statements are valid, why are all the camps priced like it is 2002 and we are all sitting around in a circle singing Kumbaya? In 2016, for DrupalCamp Atlanta, we couldn't make the numbers work, so we decided to raise the price of the camp from $45 to $65 (early bird) and $85 (regular rate). This was a long, drawn-out and heated debate that took nearly all of the 2 hours allotted for our Google Hangout. At the end of the day, one of our board members, who is also a Diamond sponsor, said,

“when you compare how other technology conferences are priced and what they are offering for sessions, DrupalCamps are severely under-priced for the value they provide to the community.”

Courtesy of Labs

If a camp roughly costs $25,000 and you can only charge 150 people $50, how in the world are DrupalCamps produced? The simple answer: sponsors, sponsors, and more sponsors. Most camps rely solely on sponsors to cover the costs. One camp in particular, BADCamp, has roughly 2,000 attendees and registration is FREE. That's right, the camp is completely free, and did I forget to mention that it's in San Francisco? Based on the BADCamp model, and given that the diamond sponsorship for DrupalCon Nashville was $50,000, getting 10 companies to sponsor your camp at $2,500 should be no sweat. Oh, and don't forget Drupal is the enterprise solution, right?

With all of your newfound confidence in obtaining sponsorships, you start contacting some of the larger Drupal shops in your area, and after a week: nothing. You reach out again, maybe by phone this time, and actually speak to someone, but they are not committing because they want more information as to why they should sponsor the camp: what other perks can you throw in for the sponsorship, are we guaranteed presentation slots, and do you provide the participant list? Of course, the worst response is the dreaded no, we cannot sponsor your conference because we have already met our sponsorship budget for the year.

At this point, you feel defeated and confused as to why organizations are not chomping at the bit to fork over $2,500 to be the sponsor. Yep, that's right: twenty-five hundred, not $25,000, to be the highest-level sponsor. Mind you, many Drupal shops charge anywhere between $150 — $250 an hour. So that means donating 10–17 hours of your organization's time to support a Drupal event in your local community. Yes, you understand that there are a lot of DrupalCamps contacting the same companies for sponsorship, so you ask yourself, what has changed from years past?

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00 pm (EST)? RSVP Here.

What Do Companies Expect to Gain From DrupalCamp Sponsorships?

At DrupalCon Nashville, I got an awesome opportunity to participate in a session around organizing DrupalCamps. It was really interesting to hear about how other organizers produce their camp and what were some of the biggest pain points.

Group Photo — DrupalCon 2018 Nashville by Susanne Coates

During this session, we were talking about a centralized sponsorship program for all DrupalCamps (that I personally disagree with and will save that discussion for another blog post) and an individual asked the question,

“why should my company sponsor DrupalCamp Atlanta? There is nothing there for me that makes it worth it. We don't pick up clients, you don't distribute the participant list, so why should we sponsor the camp?”

Needless to say, they caught me completely off guard, so I paused then replied,

“DrupalCamp Atlanta has between 150–200 people, most of them from other Drupal shops, so what is it that you are expecting to get out of the sponsorship that would make it worth it to you? Why do you sponsor any DrupalCamps?”

Have Drupal Companies Outgrown the Need to Sponsor DrupalCamps?

On the plane ride back to the ATL, it got me thinking: why does an organization sponsor DrupalCamps? What is the return on their investment? I started reminiscing about the very first DrupalCamp I attended in 2008, and all the rage at that time (and still is) was inbound marketing: how using a content strategy and/or conference presentations can establish your company as a thought leader in the field, so that clients will find your information useful and approach you when it's time to hire for services. Maybe this is why so many camps received a ton of presentation submissions and why it was easy to find sponsors, but that was over 10 years ago now, and some of those same companies have since been established as leaders in the field. Could it be that established companies no longer need the visibility of DrupalCamps?

What happens to DrupalCamps when companies no longer need the visibility or credibility from the Drupal community?

The Drupal community thrives when Drupal shops become bigger and take on those huge projects, because it results in contributions back to the code, therefore making our project more competitive. But an unintended consequence of these Drupal shops becoming larger is that there is a lot more pressure on them to raise funding, thus they need to spend more resources on obtaining clients outside of the Drupal community. Acquia, the company built by the founder of Drupal, Dries Buytaert, has made it clear that it is pulling back on local camp sponsorships and has even created its own conference, Acquia Engage, that showcases its enterprise clients. Now, from a business perspective, I totally understand why they would create this event, as it provides a much higher return on their investment, but it results in competing with other camps (ahem, this year's DrupalCamp Atlanta), and more importantly, the sponsorship dollars all of us depend on are now being redirected to other initiatives.

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00 pm (EST)? RSVP Here.

Why Should Established Companies Sponsor a DrupalCamp?

The reality of the situation is that sponsoring these DrupalCamps is most likely not going to land you your next big client that pays your company a $500,000 contract. So what are the true reasons to sponsor a DrupalCamp:

  • Visibility
    When sponsoring these DrupalCamps, most of us organizers do a pretty good job of tweeting thanks to the company, and if the organization has presenters we usually promote the sessions as well. In addition, most camps print logos on the website and merchandise, and name after-parties. Yes, it's only a little bit, but the internet is forever, and the more you are mentioned the better off you are. But you are from a well-established Drupal shop, so you don't need any more visibility.
  • Credibility
    Even companies that have been established need their staff to be credible. There will always be some amount of turnover, and when that happens your clients still want to know if this person is talented. And if your company is new, being associated with Drupal in your local community does provide your company a sense of credibility.
  • Collegiality
    I saved the best for last. Collegiality is highly overlooked when looking at sponsoring camps. Most companies have a referral program for new hires, and when the time comes for you to hire, people tend to refer their friends and their professional acquaintances. There is no better place to meet and interact with other Drupalists than a DrupalCamp. What about employee engagement? In a recent focus group I participated in with a Drupal shop, many of the staff wanted more opportunities for professional development. These local camps are affordable and can allow staff to attend multiple events in a year, even when you have small budgets.

I must end by saying that there are so many great Drupal companies that I have had the pleasure to work with, and if it were not for the Acquias of the world, Drupal wouldn't exist. I understand that CEOs are responsible for their employees and their families, so I don't want to underestimate the pressures that come with making payroll and having a client pipeline. The purpose of this post was to explain how it feels as a volunteer who is doing something for the community, and the frustrations that sometimes come with it.

Nov 02 2018
Nov 02

‘Blocks’ in Drupal are pieces of content that can be placed anywhere throughout the site. They are an integral part of Drupal and the way it displays information. While Drupal has a variety of useful blocks out of the box for most scenarios, there might be times when custom blocks are required. That is what I’ll be addressing in this post, by going through how to create a custom block in Drupal 8.

There are two ways in which you can create a custom block:

  • Through Drupal’s own GUI, or
  • Programmatically.

Via Drupal GUI

This method is pretty straightforward and easier than creating a block programmatically. However, it is also less flexible and customizable.

  • Go to admin -> structure -> block layout -> custom block library.
  • Click ‘block types’ tab. Once here, click on the ‘Add custom block type’ button.
  • Enter block label and description.
  • Now, you can add fields, manage form display, manage display, etc. for your custom block. Customize the block to your liking and click save.
  • Now, go back to custom block library and click the blue ‘Add custom block’ button, to add the block to your library.
  • The next step is to simply place the block into your desired region by navigating to admin -> structure -> block layout.

Programmatically Creating Block

This method requires a little more understanding of the way Drupal works, however, once you get the hang of it, it gets pretty easy.

Create a module

In Drupal 8, it is necessary to create an info.yml file that contains the metadata for every custom module, theme or plugin you create. Similarly, for our custom block, we will need to create an info.yml file in the ‘modules/custom’ directory. Note that if the custom folder isn’t already created, you will need to create it. For creating a custom block, we will need to make a custom module.

Now create an ‘info.yml’ file such as ‘custom_block_example.info.yml’. Inside this file, enter the following:

name: Custom Block Example
type: module
description: Define a custom block.
core: 8.x
package: Custom
dependencies:
  - block

You can now go to your Drupal dashboard and enable the custom module we have just created.

Create Class

Now, in order to define the logic of the block, we need to create a class which will be placed under the modules/custom/custom_block_example/src/Plugin/Block directory. 

The class file should contain annotation as well. The annotation allows us to identify the block. Apart from the annotation, this class will contain 4 methods:

  • build() - Returns a basic markup by rendering a renderable array.
  • blockAccess() - Defines a custom user access logic.
  • blockForm() - Defines a custom block configuration form using the Form API.
  • blockSubmit() - Used to save a configuration, defined on the blockForm() method.

Now, this is what the class file should contain in the end:


<?php

namespace Drupal\custom_block_example\Plugin\Block;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Block\BlockBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a block with a simple text.
 *
 * @Block(
 *   id = "my_block_example_block",
 *   admin_label = @Translation("My block"),
 * )
 */
class MyBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return [
      '#markup' => $this->t('This is a simple block!'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function blockAccess(AccountInterface $account) {
    return AccessResult::allowedIfHasPermission($account, 'access content');
  }

  /**
   * {@inheritdoc}
   */
  public function blockForm($form, FormStateInterface $form_state) {
    $config = $this->getConfiguration();

    // A simple setting, saved by blockSubmit() below.
    $form['my_block_settings'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Block settings'),
      '#default_value' => isset($config['my_block_settings']) ? $config['my_block_settings'] : '',
    ];

    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function blockSubmit($form, FormStateInterface $form_state) {
    $this->configuration['my_block_settings'] = $form_state->getValue('my_block_settings');
  }

}
Now, go back to your site, and you should be able to see the block you have just created. Simply assign the block to a region of your choice and it should become visible.


As mentioned earlier, blocks are an integral part of a Drupal site. Learning to customize and play with the blocks in your own way can be a very useful skill.

Having trouble with customizing your Drupal site? Contact us, here at Agiledrop, and forget about having to worry about getting stuck with your Drupal site ever again.

Nov 02 2018
Nov 02

We are in the process of transforming the way we host our applications to a Docker-based workflow. One of the challenges we face is file storage. At the heart of our business are open source technologies and tools, therefore we have looked into using Minio (more or less the same as Amazon S3 for file storage) instead of the local filesystem (or Amazon S3).

We are going to use the Drupal module Flysystem S3, which works with both Amazon S3 and Minio (which is compatible with the Amazon S3 API).

Flysystem is a filesystem abstraction library for PHP which allows you to easily swap out a local filesystem for a remote one - or from one remote to another.
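As a quick standalone illustration of that abstraction (a sketch assuming league/flysystem ^1.0 has been installed with Composer, independent of Drupal): the calling code talks only to the Filesystem object, so swapping the adapter swaps the backend:

```php
<?php

require 'vendor/autoload.php';

use League\Flysystem\Adapter\Local;
use League\Flysystem\Filesystem;

// A local-disk backend today...
$filesystem = new Filesystem(new Local(__DIR__ . '/storage'));

// ...could later be swapped for S3/Minio without touching the code below,
// e.g. new Filesystem(new AwsS3Adapter($s3Client, 'my-site')) from the
// league/flysystem-aws-s3-v3 package.

$filesystem->put('hello.txt', 'Hello, Flysystem!');
echo $filesystem->read('hello.txt');
```

The Flysystem S3 Drupal module wires this same idea into Drupal's stream wrapper system for you.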

For a new site it is pretty straightforward; for a legacy site you need to migrate your files from one storage to another, which I am going to look into in the next blog post.

Minio container

First we need Minio up and running. For that I am using Docker; here is an example docker-compose.yml:

version: '3'

services:
  minio:
    image: minio/minio:edge
    container_name: minio
    hostname: minio
    ports:
      - "8001:9000"
    volumes:
      - "./data:/data"
    environment:
      - "MINIO_ACCESS_KEY=AFGEG578KL"
      - "MINIO_SECRET_KEY=klertyuiopgrtasjukli"
      - "MINIO_REGION=us-east-1"
    command: server /data


When you have installed the Flysystem S3 module (and its dependency, the Flysystem module), we need to add the settings for Minio to our settings.php file (there is no UI for this in Drupal. Yet.):

$schemes = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key'    => 'AFGEG578KL',
      'secret' => 'klertyuiopgrtasjukli',
      'region' => 'us-east-1',
      'bucket' => 'my-site',
      'endpoint' => "",
      'protocol' => "http",
      'cname_is_bucket' => false,
      'cname' => "",
      'use_path_style_endpoint' => TRUE,
      'public' => true,
      'prefix' => 'publicfiles',
    ],
    'cache' => TRUE,
    'serve_js' => TRUE,
    'serve_css' => TRUE,
  ],
];

$settings['flysystem'] = $schemes;

The endpoint is for communicating with Minio; the cname is the base URL that files will get on the site. serve_js and serve_css let Minio store and serve the aggregated CSS and JS.

Create a field

You now need to define which fields are going to use the S3 storage, for this, I create a new image reference field, and use “Flysystem: s3” as the Upload destination.

Surf over to Minio - our example is on - add the defined bucket, my-site, and make sure that Drupal can write to it (edit the policy in Minio and make sure it has read and write on the prefix - or the wildcard prefix, *).

And you are done

And that is it - we are now using Minio for storing the images. Try uploading a file to the field you created, and you should see the file in Minio. On the site you should of course see the image as well - but now with the URL set as the cname in the settings, in our case,

We have put some time and effort into the Flysystem S3 module together with other contributors, and we hope you will test it out and report any feedback. Have fun!

Nov 02 2018
Nov 02

As part of the session I presented in Drupal Europe, REST Ready: Auditing established Drupal 8 websites for use as a content hub, I presented a module called “Entity Access Audit”.

This has proved to be a useful tool for auditing our projects for unusual access scenarios as part of our standard go-live security checks or when opening sites up to additional mechanisms of content delivery, such as REST endpoints. Today this code has been released on Entity Access Audit.

There are two primary interfaces for viewing access results, the overview screen and a detailed overview for each entity type. Here is a limited example of the whole-site overview showing a few of the entity types you might find in core or custom modules:

Entity access audit

Here is a more detailed report for a single entity type:

Entity access audit

The driving motivation behind these interfaces was being able to visually scan entity types and ensure that the access results align with our expectations. This has so far helped identify various bugs in custom and contributed code.

In order to conduct a thorough access test, the module takes a predefined set of dimensions and tests every combination in their Cartesian product. The dimensions tested out of the box, where applicable to the given entity type, are:

  • All bundles of an entity type.
  • If the current user is the entity owner or not.
  • The access operation: create, view, update, delete.
  • All the available roles.

It’s worth noting that these are only common factors used to determine access results; they are not comprehensive. If access is determined by other factors, there will be no visibility of this in the generated reports.
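To make the Cartesian-product idea concrete, here is a small Python sketch (illustrative only; the dimension values are invented, and the module itself is PHP):

```python
from itertools import product

# Illustrative dimensions (invented values, not the module's actual data).
bundles = ["article", "page"]
is_owner = [True, False]
operations = ["create", "view", "update", "delete"]
roles = ["anonymous", "authenticated", "editor"]

# Every combination of the four dimensions becomes one access check.
combinations = list(product(bundles, is_owner, operations, roles))

print(len(combinations))  # 2 * 2 * 4 * 3 = 48 checks
```

Each tuple would correspond to one access check, e.g. "can an editor who is not the owner update an article?".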

The module is certainly not a silver bullet for validating the security of Drupal 8 websites, but has proved to be a useful additional tool when conducting audits.

Photo of Sam Becker

Posted by Sam Becker
Senior Developer

Dated 2 November 2018

Add new comment

Nov 01 2018
Nov 01

BADCamp 2018 just wrapped up last Saturday. As usual it was a great volunteer organized event that brought together all sorts of folks from the Drupal Community.

Every year Kanopi provides organizational assistance, and this year was no exception. We had Kanopian volunteers writing code for the website, organizing fundraising, planning general operations, assisting as room monitors, and working the registration booth.

An event like this doesn’t happen without a lot of work across a lot of different areas and we’re very proud of Kanopi’s contributions.

Personally, Kanopi was able to send me down from Vancouver, Canada to take a day-long training course as well as attend the regular conference summits and sessions.

The course I chose was “Component-based Theming with Twig”, which was really informative. We covered the basics of Pattern Lab and then worked on best-practice methods to integrate those Pattern Lab tools into a Drupal theme.

Some of the takeaways:

  • The Gesso ( theme is a great starting place for getting Pattern Lab into your project.
  • Make sure you are reusing all your basic HTML components and keeping the templates flexible. Resist the urge to simply copy and paste markup into a new template.
  • The best way to map Pattern Lab components in Drupal is to use Paragraph types and their display modes.
  • To get the most out of Twig templates, make sure you are using the Twig Tweak module (

For the regular conference sessions, the most interest seemed to lie in the possibilities of GatsbyJS ( All the developers with whom I spoke were focused on the performance and security perspective, as it can be completely decoupled from Drupal, allowing for fewer security issues. One interesting talk on Gatsby was this one by Kyle Mathews.

Kanopi was also fortunate enough to get four sessions selected:

All in all BADCamp 2018 was a great experience. It’s terrific to meet our distributed co-workers as well as see friends from other parts of the Drupal community.

Nov 01 2018
Nov 01

As Drupal 8 has matured as an enterprise content management system, so has its ability to connect with enterprise SaaS CRMs such as Salesforce. As the undisputed IBM of CRM solutions (for now, anyway) Salesforce is a cornerstone for most businesses. And now with tighter integrations than ever before, Drupal 8 can be too.

With that, let's explore some key considerations involved in connecting Drupal 8 with Salesforce. 

All Hail the Cloud

At its core, Salesforce is really a database of contacts in the same way that Drupal is a database of content. Yes, Drupal also has users and Salesforce often houses products, events, etc., but you get the idea. What’s important is that customers interact with both systems. Whether it’s reading website content or opening an email from a salesperson, customer data across all fronts is critical to consolidate, manage, and leverage.

Integration is a Dirty Word

A representation of Salesforce as one integration in the broader Drupal ecosystem

You may be wondering what’s involved in integrating Drupal with Salesforce. Ah, the dreaded “I” word... integration. So often the herald of scope creep and blown budgets. Integrating Salesforce with Drupal 8 can vary from something as simple as submitting contact forms to the CRM, to running a global ABM effort supported by a sophisticated Drupal website equipped with real-time personalization. In either case, leveraging Drupal 8’s API-first architecture and its plethora of open source modules is key. Here, the Drupal Salesforce module is our starting point.

Modules Make the World Go Round

The Drupal Salesforce Suite module is a testament to both the ingenuity and passion of the Drupal community and the flexibility of Drupal as an enterprise platform. As a contributed module, the Salesforce Suite for Drupal enables out of the box connection with Salesforce, no matter your configuration.

Available free on, the module offers:

  • Single Sign-On (SSO) with OAuth2 authentication, which lets you pass credentials to Salesforce and log in seamlessly. Salesforce events are also accessible through Drupal 8. Handy!

  • Entity mapping, which means tying fields in your Drupal site to those in Salesforce, such as “Markets” you serve for upcoming events or hidden user fields like “Lead Score.”

  • Ability to push data to Salesforce from Drupal, such as users engaging with gated content, new leads, or activity data to ensure Salesforce has all the information it needs to make decisions. This is critically important with AI advancements such as Salesforce Einstein.

  • Ability to pull data such as new products, syncing events, etc into Drupal. Often, this takes the form of rough data imports for critical fields (like product information) that site admins can add to using Drupal 8’s editing capabilities.

Take it to the Skies

While the Salesforce Suite module is a great start, any complex integration requires an experienced and competent Drupal development team to implement. Establishing an API connection is one thing, but building a Drupal 8 site to adapt to changing conditions on the Salesforce side is critical, as well as sound architecture on the Drupal 8 side to ensure data integrity and easy management for non-technical site admins.  
Looking to connect Drupal 8 with Salesforce? Contact us about your project and see how we can help.

Nov 01 2018
Nov 01

Content migration is a topic with a lot of facets. We’ve already covered some important migration information on our blog:

So far, readers of this series will have gotten lots of good process information, and learned how to move a Drupal 6 or 7 site into Drupal 8. This post, though, will cover what you do when your content is in some other data framework. If you haven’t read through the previous installments, I highly recommend you do so. We’ll be building on some of those concepts here.

Content Type Translation

One of the first steps of a Drupal to Drupal migration is setting up the content types in the destination site. But what do you do if you are moving to Drupal from another system? Well, you will need to do a little extra analysis in your discovery phase, but it’s very doable.

Most content management systems have at least some structure that is similar to Drupal’s node types, as well as a tag/classification/category system that is analogous to Drupal’s taxonomy. And it’s almost certain to have some sort of user account. So, the first part of your job is to figure out how all that works.

Is there only one ‘content type’, which is differentiated by some sort of tag (“Blog Post”, “Product Page”, etc.)? Well, then, each of those might be a different content type in Drupal. Are Editors and Writers stored in two different database tables? Well, you probably just discovered two different user roles, and will be putting both user types into Drupal users, but with different roles. Does your source site allow comments? That maps pretty closely to Drupal comments, but make sure that you actually want to migrate them before putting in the work! Drupal 8 Content Migration: A Guide For Marketers, one of the early posts in this series, can help you make that decision.

Most CMS systems will also have a set of meta-data that is pretty similar to Drupal’s: created, changed, author, status and so on. You should give some thought to how you will map those fields across as well. Note that author is often a reference to users, so you’ll need to consider migration order as well.

If your source data is not in a content management system (or you don’t have access to it), you may have to dig into the database directly. If you have received some or all of your content in XML, CSV, or other text formats, you may just have to open the files and read them to see what you are working with.

In short, your job here will be to distill the non-Drupal conventions of your source site into a set of Drupal-compatible entity types, and then build them.

Migration from CSV

CSV is an acronym for “Comma-Separated Values”, and is a file format often used for transferring data in quantity. If you get some of your data from a client in a spreadsheet, it’s wise to export it to CSV. This format strips out all the MS Office or Google Sheets gobbledygook and just gives you a straight block of data.

Currently, migrations of CSV files into Drupal use the Migrate Source CSV module. However, this module is being moved into core and deprecated. Check the Bring migrate_source_csv to core issue to see what the status on that is, and adjust this information accordingly.

The Migrate Source CSV module has a great example and some good documentation, so I’ll just touch on the highlights here.

First, know that CSV isn’t super-well structured, so each entity type will need to be a separate file. If you have a spreadsheet with multiple tabs, you will need to export each separately, as well.

Second, connecting to it is somewhat different than connecting to a Drupal database. Let’s take a look at the data and source configuration from the default example linked above.


id,first_name,last_name,email,country,ip_address,date_of_birth
1,Justin,Dean,,Indonesia,,01/05/1955
2,Joan,Jordan,,Thailand,,10/14/1958
3,William,Ray,,Germany,,08/13/1962

migrate_source_csv/tests/modules/migrate_source_csv_test/config/install/migrate_plus.migration.migrate_csv.yml (Abbreviated)

...
source:
  plugin: csv
  path: /artifacts/people.csv
  keys:
    - id
  header_row_count: 1
  column_names:
    -
      id: Identifier
    -
      first_name: 'First Name'
    -
      last_name: 'Last Name'
    -
      email: 'Email Address'
    -
      country: Country
    -
      ip_address: 'IP Address'
    -
      date_of_birth: 'Date of Birth'
...

Note first that this migration is using plugin: csv, instead of the d7_node or d7_taxonomy_term that we’ve seen previously. This plugin is in the Migrate Source CSV module, and handles reading the data from the CSV file.

  path: /artifacts/people.csv

The path config, as you can probably imagine, is the path to the file you’re migrating.  In this case, the file is contained within the module itself.

keys:
  - id

The keys config is an array of columns that together form the unique ID of the data.

header_row_count: 1
column_names:
  -
    id: Identifier
  -
    first_name: 'First Name'
  -
    last_name: 'Last Name'
...

These two configurations interact in an interesting way. If your data has a row of headers at the top, you will need to let Drupal know about it by setting a header_row_count. When you do that, Drupal will parse the header row into field ids, then move the file to the next line for actual data parsing.

However, if you set the column_names configuration, Drupal will override the field ids created when it parsed the header row. By passing only select field ids, you can skip fields entirely without having to edit the actual data. It also allows you to specify a human-readable field name for the column of data, which can be handy for your reference, or if you’re using Drupal Migrate’s admin interface.

You really should set at least one of these for each CSV migration.
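As a rough behavioral sketch (not the module's actual code), the header_row_count / column_names interaction looks something like this in Python:

```python
import csv
import io

# The sample rows from above (emails and IPs elided, as in the source).
data = """id,first_name,last_name,email,country,ip_address,date_of_birth
1,Justin,Dean,,Indonesia,,01/05/1955
2,Joan,Jordan,,Thailand,,10/14/1958
"""

reader = csv.reader(io.StringIO(data))

# header_row_count: 1 -- consume one header row to learn the field ids.
header = next(reader)

# column_names overrides that: keep only the selected columns (skipping
# e.g. ip_address entirely) and attach human-readable labels.
column_names = {"id": "Identifier", "first_name": "First Name", "email": "Email Address"}
keep = [i for i, name in enumerate(header) if name in column_names]

rows = [{header[i]: row[i] for i in keep} for row in reader]
```

Note how ip_address never appears in the result, mirroring how omitting a column from column_names skips it without editing the source file.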

The process configuration will treat these field ids exactly the same as a Drupal fieldname.

Process and Destination configuration for CSV files are pretty much the same as with a Drupal-to-Drupal import, and they are run with Drush exactly the same.

Migration from XML/RSS

XML is a common data storage format that presents data in tagged form. Many content management systems and databases have an ‘export as XML’ option. One advantage XML has over CSV is that you can put multiple data types into a single file. Of course, if you have lots of data, this advantage can turn into a disadvantage as the file size balloons! Weigh your choice carefully.

The Migrate Plus module has a data parser for XML, so if you’ve been following along with our series so far, you should already have this capability installed.

Much like CSV, you will have to connect to a file, rather than a database. RSS is a commonly used xml format, so we’ll walk through connecting to an RSS file for our example. I pulled some data from Phase2’s own blog RSS for our use, too. (Abbreviated)

<?xml version="1.0" encoding="utf-8"?>
<rss ... xml:base="">
  <channel>
    <title>Phase2 Ideas</title>
    <link></link>
    <description/>
    <language>en</language>
    <item>
      <title>The Top 5 Myths of Content Migration *plus one bonus fairytale</title>
      <link></link>
      <description>The Top 5 Myths of Content Migration ... </description>
      <pubDate>Wed, 08 Aug 2018 14:23:34 +0000</pubDate>
      <dc:creator>Bonnie Strong</dc:creator>
      <guid isPermaLink="false">1304 at</guid>
    </item>
  </channel>
</rss>


id: example_xml_articles
label: 'Import articles'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: ''
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate
    -
      name: link
      label: 'Origin link'
      selector: link
    -
      name: summary
      label: Summary
      selector: description
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title:
    plugin: get
    source: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: article

The key bits here are in the source configuration.

source:
  plugin: url
  data_fetcher_plugin: http
  urls: ''
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item

Much like the CSV migration used the csv plugin to read a file, the XML migration does not use the d7_node or d7_taxonomy_term plugin to read the data. Instead, it pulls in a URL and reads the data it finds there. The data_fetcher_plugin takes one of two possible values: http or file. http is for a remote source, like an RSS feed, while file is for a local file. The urls config should be pretty obvious.

The data_parser_plugin specifies what php library to use to read and interpret the data. Possible parsers here include JSON, SOAP, XML and SimpleXML. SimpleXML’s a great library, so we’re using that here.

Finally, item_selector defines where in the XML the items we’re importing can be found. If you look at our data example above, you’ll see that the actual nodes are in rss -> channel -> item. Each node would be an item.
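To illustrate what an item selector does, here is a Python sketch using the standard library's ElementTree; the two-item feed is invented, and this only mimics the selection behavior, not Migrate Plus itself:

```python
import xml.etree.ElementTree as ET

# A tiny two-item feed (invented) standing in for the real RSS file.
rss = """<rss><channel>
  <title>Phase2 Ideas</title>
  <item><title>First post</title><guid>1304</guid></item>
  <item><title>Second post</title><guid>1305</guid></item>
</channel></rss>"""

root = ET.fromstring(rss)               # the document root is <rss>
items = root.findall("./channel/item")  # the equivalent of /rss/channel/item

# Each <item> is one node-to-be; field selectors then pick values out of it.
titles = [item.findtext("title") for item in items]
```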

fields:
...
  -
    name: pub_date
    label: 'Publication date'
    selector: pubDate
...

Here you see one of the fields from the xml. The label is just a human-readable label for the field, while the selector is the field within the XML item we’re getting.

The name is what we’ll call a pseudo-field. A pseudo-field acts as temporary storage for data. When we get to the process section, the pseudo-fields are treated essentially as though they were fields in a database.

We’ve seen pseudo-fields before, when we were migrating taxonomy fields in Drupal 8 Migrations: Taxonomy and Nodes. We will see why they are important here in a minute, but there’s one more important thing in source.

ids:
  guid:
    type: string

This snippet sets the guid to be the unique ID of the article we’re importing. This guarantees uniqueness and is very important to specify.

Finally, we get to the process section.

process:
...
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
...

So, here is where we’re using the pseudo-field we set up before. This takes the value from pubDate that we stored in the pseudo-field pub_date, does some formatting to it, and assigns it to the created field in Drupal. The rest of the fields are done in a similar fashion.
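For reference, here is the same date conversion expressed in Python; the strptime format is the Python equivalent of PHP's 'D, d M Y H:i:s O', and the Unix timestamp corresponds to PHP's 'U' format:

```python
from datetime import datetime

pub_date = "Wed, 08 Aug 2018 14:23:34 +0000"  # the RSS pubDate from the feed

# Python's strptime equivalent of the PHP format 'D, d M Y H:i:s O'.
parsed = datetime.strptime(pub_date, "%a, %d %b %Y %H:%M:%S %z")

# PHP 'U' means a Unix timestamp, which is what Drupal stores in created.
created = int(parsed.timestamp())
```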

Destination is set up exactly like a Drupal-to-Drupal migration, and the whole thing is run with Drush the exact same way. Since RSS is a feed of real-time content, it would be easy to set up a cron job to run that Drush command, add the --update flag, and have this migration go from a one-time content import to a regular update job that keeps your site in sync with the source.

Migration from WordPress

WordPress export screenshot

A common migration path is from WordPress to Drupal. Phase2 recently made this move with our own site, and we have done it for clients as well. There are several ways to go about it, but our own migration used the WordPress Migrate module.

In your WordPress site, under Tools >> Export, you will find a tool to dump your site data into a customized xml format. You can also use the wp-cli tool to do it from the command line, if you like.

Once you have this file, it becomes your source for all the migrations. Here’s some good news: it’s an XML file, so working with it is very similar to working with RSS. The main difference is in how we specify our source connections.


langcode: en
status: true
dependencies:
  enforced:
    module:
      - phase2_migrate
id: example_wordpress_authors
class: null
field_plugin_method: null
cck_plugin_method: null
migration_tags:
  - example_wordpress
  - users
migration_group: example_wordpress_group
label: 'Import authors (users) from WordPress WXR file.'
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'
  namespaces:
    wp: ''
    excerpt: ''
    content: ''
    wfw: ''
    dc: ''
  urls:
    - 'private://example_output.wordpress.2018-01-31.000.xml'
  fields:
    -
      name: author_login
      label: 'WordPress username'
      selector: 'wp:author_login'
    -
      name: author_email
      label: 'WordPress email address'
      selector: 'wp:author_email'
    -
      name: author_display_name
      label: 'WordPress display name (defaults to username)'
      selector: 'wp:author_display_name'
    -
      name: author_first_name
      label: 'WordPress author first name'
      selector: 'wp:author_first_name'
    -
      name: author_last_name
      label: 'WordPress author last name'
      selector: 'wp:author_last_name'
  ids:
    author_login:
      type: string
process:
  name:
    plugin: get
    source: author_login
  mail:
    plugin: get
    source: author_email
  field_display_name:
    plugin: get
    source: author_display_name
  field_first_name:
    plugin: get
    source: author_first_name
  field_last_name:
    plugin: get
    source: author_last_name
  status:
    plugin: default_value
    default_value: 0
destination:
  plugin: 'entity:user'
migration_dependencies: null

If you’ve been following along in our series, a lot of this should look familiar.

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'

This section works exactly like the XML RSS example above. Instead of using http, we are using file for the data_fetcher_plugin, so it looks for a local file instead of making an HTTP request. Additionally, due to the difference in structure between an RSS feed and a WordPress WXR file, the item_selector is different, but it works the same way.

namespaces:
  wp: ''
  excerpt: ''
  content: ''
  wfw: ''
  dc: ''

These namespace declarations allow Drupal’s XML parser to understand the particular brand and format of the WordPress export.
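The same requirement exists in other XML tooling. In this Python sketch, the wp: prefix only resolves because a namespace map is supplied; the URI here is a placeholder, since the real WordPress namespace URLs were elided above:

```python
import xml.etree.ElementTree as ET

# Placeholder namespace URI; the real WordPress ones were elided above.
NS = {"wp": "urn:example:wp"}

wxr = """<rss xmlns:wp="urn:example:wp"><channel>
  <wp:author><wp:author_login>bonnie</wp:author_login></wp:author>
</channel></rss>"""

root = ET.fromstring(wxr)
# Without the NS mapping, the 'wp:' prefix in these paths cannot resolve.
authors = root.findall("./channel/wp:author", NS)
logins = [a.findtext("wp:author_login", namespaces=NS) for a in authors]
```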

urls:
  - 'private://example_output.wordpress.2018-01-31.000.xml'

Finally, this is the path to your export file. Note that it is in the private filespace for Drupal, so you will need to have private file management configured in your Drupal site before you can use it.

fields:
  -
    name: author_login
    label: 'WordPress username'
    selector: 'wp:author_login'

We’re also setting up pseudo-fields again, storing the value from wp:author_login in author_login.

Finally, we get to the process section.

process:
  name:
    plugin: get
    source: author_login

So, here is where we’re using the pseudo-field we set up before. This takes the value from wp:author_login that we stored in author_login and assigns it to the name field in Drupal.

Configuration for the migration of the rest of the entities - categories, tags, posts, and pages - looks pretty much the same. The main difference is that the source will change slightly:

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_category.yml  (abbreviated)

source:
  ...
  item_selector: '/rss/channel/wp:category'

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_tag.yml (abbreviated)

source:
  ...
  item_selector: '/rss/channel/wp:tag'

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_post.yml (abbreviated)

source:
  ...
  item_selector: '/rss/channel/item[wp:post_type="post"]'

And, just like our previous two examples, WordPress migrations can be run with Drush.

A cautionary tale

As we noted in Managing Your Drupal 8 Migration, it’s possible to write custom Process Plugins. Depending on your data structure, it may be necessary to write a couple to handle values in these fields. On the migration of Phase2’s site recently, after doing a baseline test migration of our content, we discovered a ton of malformed links and media entities. So, we wrote a process plugin that did a bunch of preg_replace to clean up links, file paths, and code formatting in our body content. This was chained with the default get plugin like so:

process:
  body/value:
    -
      plugin: get
      source: content
    -
      plugin: p2body

The plugin itself is a pretty custom bit of work, so I’m not including it here. However, a post on custom plugins for migration is in the works, so stay tuned.
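That said, the general shape of such a cleanup step is easy to sketch. Here is a hedged Python analogy of chained preg_replace-style rules; the patterns and domains are invented examples, not the real plugin's rules:

```python
import re

# Each (pattern, replacement) pair is one cleanup rule; a real plugin
# would chain many of these over the migrated body HTML.
RULES = [
    (r'href="http://old\.example\.com/', 'href="/'),      # relativize old links
    (r'/wp-content/uploads/', '/sites/default/files/'),   # rewrite file paths
]

def clean_body(html: str) -> str:
    """Apply every cleanup rule in order, mirroring chained preg_replace calls."""
    for pattern, replacement in RULES:
        html = re.sub(pattern, replacement, html)
    return html
```

Because the plugin is chained after get, it receives the raw source value and returns the cleaned body.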

Useful Resources and References

If you’ve enjoyed this series so far, we think you might enjoy a live version, too! Please drop by our session proposal for Drupalcon Seattle, Moving Out, Moving In! Migrating Content to Drupal 8 and leave some positive comments.

Nov 01 2018
Nov 01

"In a virtual community we can go directly to the place where our favourite subjects are being discussed, then get acquainted with people who share our passions or who use words in a way we find attractive. Your chances of making friends are magnified by orders of magnitude over the old methods of finding a peer group."
- Howard Rheingold, The Virtual Community, 1994

Communities are important for the success of any multimedia information system today. Gaming is no exception, especially as it has become part of our new media culture, entertaining people of all ages. The satisfaction of gaming community members can influence the success of a game, and it is no secret why the highest-selling games have the largest communities.

pokemon gif with money falling over on meow

To keep the community and the platform up to date with the latest trends, features, and functionalities, it is important to choose the right technology for your platform. Drupal is an easy choice. But why are gaming communities increasingly opting for Drupal as the platform of their choice?

“The famous AR game Pokémon Go took off with unprecedented swiftness, leading to Nintendo’s stock value increasing dramatically and to $470 million in revenue in just 80 days.”

The Power of Gaming: Why Does the Gaming Industry Need Community?

Not very often do we associate the word community with gaming. And yet, these community platforms are where games really mature. In terms of engagement and shared values, a common cultural background plays an important role, which can be reflected by the spatiotemporal distribution of the gamers.

The community of gamers can be identified either as a whole or as part of video game culture. It comprises people who play games and those who are interested in watching and reading about them.

Community support is important for both game development and community building. 

  • User Acquisition: A shared goal or interest provides the reason for being part of the community. A community is what builds a game, and community is what drives a game beyond niche success into a blockbuster; the ROI from an engaged, excited community is off the charts.

    Intense interactions and strong ties are not only important for online multiplayer games; they enhance the intensity and the user experience too.

    Over 53% of US teenagers play online games with people they know in their offline lives (Pew Research, 2015). Community support allows integration of offline friend circles into online communities.  

  • User Retention: Gaming communities play a crucial part in retaining users, as video games have grown into a subculture since their birth.

    Community services enhance competition within games, which builds up customer loyalty as a consequence. Games and gaming communities are strongly intertwined and experience permanent co-development. 

    Retention starts with forum discussions of new features, the problems players encounter while playing, and advice about gaming strategies.

    Modern games provide direct in-game communication, which is not restricted to simple message exchange but also involves further service functionality.

  • Improves Quality: Gaming communities are a place of intense interaction; after all, games are about shared experiences, rendered with extraordinary interaction and ownership. All successful games have communities.

    The infamous pose: Tracer's over-the-shoulder victory pose

    And this is where the changes come from. Remember the infamous Tracer pose controversy from 2016? Well, it was only after the community voiced its outrage that gaming giant Blizzard Entertainment pulled the pose, to accurately represent its values.

Why are Gaming Communities Opting for Drupal?

What does Drupal offer to the gaming communities that they are opting for it? Here is a list of why Drupal is the choice for the community platforms.

  • Decoupled Drupal for Intuitive Game Live UI Experiences

Much like physical sports, video games demand a certain standard of ability, where the player can enjoy the game from the very moment it starts. Regardless of whether there is an explicit tutorial, players must instantly intuit what to do, what the basic rules are, what is good, what is bad, and how to go about doing the various things that can be accomplished.

The more realism your game offers the gamer, the longer they will want to play.

With a decoupled approach in Drupal, you can create an interactive experience for gamers by using your site to drive fully in-browser applications. The front end can easily be built with other technologies such as jQuery, JavaScript, Node.js, and React.js. The backend in turn becomes the system of record, while the interaction happens in real time in the browser, back and forth.

A headless build can unleash the creative power of the game, delivering an intense gaming experience that is faster, more natural, intuitive, and responsive at the gamers’ end. The end result is smoother and faster games played live.

  • Gameplay based customizations

Games allow players to perceive themselves in alternate ways in their imagined worlds. Player identification – with avatar and character – helps build interest while also improving the gameplay experience, and is important for maintaining identity in the communities as well.

Avatars in League of Legends (Garen's avatar highlighted)

An example of this is the website of League of Legends – built on Drupal – a team-oriented strategy game where the goal is to work together and bring down the enemy nexus at the heart of the opposing base.

Roles offered in League of Legends: assassin, fighter, mage, and marksman

Drupal has tools and services for building user profiles, fostering the creation of virtual sessions, allowing communication with third party serious games, and storing and processing the game analytics. This is important since it helps the gamer take the game more seriously and relate to it on a virtual level.

  • Scalability

Zynga – a leading developer of the world's most popular social games – has its website built on Drupal. It claims 100 million monthly unique visitors, making it the largest online gaming destination on the web.

Scalability is Drupal’s middle name
Farmville 2 description on Zynga

Handling high volumes of visitors, content, and users is a tough job, but Drupal does it with ease. As the saying goes, "scalability is Drupal's middle name". Some of the busiest sites in the world are built on Drupal.

It is adept at handling sites that burst with humongous traffic, which means your gaming website can perform spectacularly even on the busiest of days without breaking or bending.

  • Multimedia support

Visit the famous Star Wars: The Old Republic (SWTOR) website and you will see video snippets from the game playing in the background. Multimedia support is not new to the gaming industry: to keep engagement high, you need to support multimedia features like scorecards, videos, photos, and audio, among others.

gif from star wars games

Drupal is a highly versatile and customizable CMS, with various modules available to support this need. The Photo Gallery module, the Media Entity module, and easy-to-use templates for customizing appearance are just a few from the list. The Photo Gallery module, for instance, helps you customize images with templates and build scorecards.

  • Mobile Responsiveness

Video games have once again found themselves more widely played and accepted, thanks to increasing smartphone reach. Add one more requirement to the list: your game needs to be device responsive too, with easy and intuitive controls.

Drupal 8 is device responsive out of the box, which means your content adjusts well from the big desktop screen to the small one. Image sizes change, menu items collapse into drop-downs, and other items are rearranged to make sense of the content and the size of the device.

But games are not just about squeezing into a different size. They need to offer the same experience as a native application without losing the intuitive design. This can be sorted with the Hammer.js module in Drupal. Hammer.js enhances the user experience by providing cross-browser support and taking away a lot of the complexity of implementing touch and pointer gestures. Leveraging it in Drupal is easier than ever thanks to the Library API of Drupal 8.
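As a sketch of how that wiring might look (the theme, file and library names below are hypothetical), a theme can declare Hammer.js and a gesture script in its `.libraries.yml` file and let Drupal's Library API handle loading and dependencies:

```yaml
# mytheme.libraries.yml - illustrative example, names are hypothetical
hammerjs:
  js:
    js/vendor/hammer.min.js: { minified: true }
swipe-gestures:
  js:
    js/swipe-gestures.js: {}
  dependencies:
    - mytheme/hammerjs
```

The gesture library can then be attached where needed, for example with {{ attach_library('mytheme/swipe-gestures') }} in a Twig template.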

  • Adding complex categories and catalogs

Gaming communities differ a lot from what plain gaming websites offer. Since each game will have different sub-communities, those categories need to be built with a design apt to each theme.

Screenshot from League of Legends with various categories and catalogs

Drupal provides a powerful taxonomy engine, allowing gaming companies to support intricate designs and complex categories and catalogs, without much ado. The flexibility of adding different types of products and content is ensured by the Content Construction Kit (CCK), which allows you to add custom fields to any content type using a web interface.
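For illustration, a vocabulary for grouping games into sub-communities can be defined as plain exportable configuration (a hypothetical example; in Drupal 8 the Field UI and Field API have absorbed CCK's role):

```yaml
# config/sync/taxonomy.vocabulary.game_genre.yml - hypothetical example
langcode: en
status: true
vid: game_genre
name: 'Game genre'
description: 'Genres used to group games into sub-communities'
weight: 0
```

Terms in this vocabulary can then be referenced from any content type through an entity reference field added in the UI.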

  • Discussions, Reviews, and News

Communities are all about discussing what is happening or has happened. Therefore one of the primary community needs is easy content creation with different content types: the more types available, the higher the engagement and the more users will interact. Blogs, events, FAQs, and news are all important.

Screengrab from League of Legends showing the news section

  • Quick Search 

Communities are busy places with a lot of activity happening at the same time, and content that might interest a user can get lost in the myriad of posts. In Drupal, Solr is used to deliver more accurate results in less time.

search for new games and its results

Drupal integrates with Solr for quicker search. Solr is a highly reliable, scalable and fault-tolerant search application which provides distributed indexing, replication, and load-balanced querying with centralized configuration.
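With the contributed Search API Solr module, the connection to a Solr server is itself exportable configuration. A minimal sketch of such a server definition might look like the following (the id, name and connection values are illustrative, not a complete working config):

```yaml
# config/sync/search_api.server.game_search.yml - illustrative sketch
id: game_search
name: 'Game search'
backend: search_api_solr
backend_config:
  connector: standard
  connector_config:
    scheme: http
    host: localhost
    port: 8983
    core: drupal
```

Search indexes are then pointed at this server and queried through Search API views or pages.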

  • E-commerce Solution

Integrating commerce with the website is an old practice and most gaming companies leverage this opportunity to boost their sales. Klei – an Independent game studio – chose Drupal to create a seamless shopping experience for both mobile and desktop users.

According to The Jibe, "Klei needed a site and store that was as easy for them to manage as it was for their customers to buy: easy sorting, featured items, promo-code inputs, simple searching, and clear calls-to-action."

shopping cart with Klei

After integrating the online store with Drupal, the team can easily add new products and games on the fly while also managing promotions and highlighting featured items.

Drupal Commerce and Commerce Kickstart are two of the most popular solutions that Drupal offers. With easy payment gateway integration, your online transactions are secure with Drupal.

Drupal vs. WordPress 2018

Building a Community website

Building an online community means creating a network of people with shared interests and goals, attracting a target niche audience to be part of it, with easy usability and navigation.

Example: Pinterest Community

Winner: Drupal 8 

Why? Extensive user management in your community requires custom fields, different content types, scalability, and varied user roles and permissions, among others - all of which are easy to build in Drupal 8. If you need a simple community with limited features and functionality, then maybe WordPress will work. But that format would be closer to a blog, anyway.

Building a Gaming Website

These are sites featuring direct online gaming, single or multiplayer, and can include games of any type from different genres.

Example: Zynga

Winner: Drupal 8 (Clearly)

Why? While you might think of Drupal as a preconfigured PHP framework, it is vastly more suited to developing an online game than WordPress is. Drupal is fast, mobile responsive and scalable. It can handle as much content as you want and as many people as you can think of - without crashing.

And as far as WordPress is concerned, why would you want to choose a software built from a blogging background to create a game?

Building a Basic Gaming related Website

These are sites devoted to the world and culture of computer gaming. They include gaming news, magazines, FAQs, and resources.

Winner: WordPress

Why? Although Drupal 8 is more suited to handling content, WordPress has a slight edge here. All the site types mentioned are related to publishing. Being a niche blogging platform, WordPress can suit these needs better, since its out-of-the-box configuration comes closer to your goals.

However, if varied features like user login, reviews, multimedia content management, and discussions are needed, then Drupal is clearly the hero.

Building a Media-Streaming Website

These are the sites that offer audio/video streaming services, such as podcast, television and film, sports, music, among others.

Example: AXN 

Winner: Drupal 8

Why? Drupal 8 can handle multimedia content much more flexibly than WordPress. While WordPress excellently handles content that is primarily text, Drupal 8 makes all types of media first-class citizens.

With clear taxonomy and easier role management, coupled with faster load times, it won't bend or break when streaming content live.

Summing Up

Community platforms have become a key measure of the success of any game, since they serve a combination of purposes varying from technical to human factors. Community satisfaction measures should be considered further in order to improve the product model and quality in the future.

Since Drupal serves most of the needs of the gaming industry, opting for it should be a no-brainer. Drop a mail at [email protected] to connect with us if you are building your gaming website or community platform.

Nov 01 2018
Nov 01

Imagine that you are working for an organisation that builds web applications for its clients. Every time you gain a new client, you visit AWS or any other cloud provider, wind up with two VMs for running the app and the associated database, need at least two copies of this infrastructure for production and staging, and then start deploying the code for that client. And this process starts all over again for the next client, and so forth. Instead, by utilising Infrastructure as Code (IaC), you run a bit of code and that's it - you are all set to go!

Foundation of a building under construction on left hand side and fully constructed building on right hand side

Infrastructure and Operations (I&O) teams must disrupt their traditional infrastructure architecture strategies with IaC. This comprises investing in hybrid cloud, containers, composable infrastructure and the automation to support these workloads. As we hurtle towards wall-to-wall internet of things (IoT) and edge computing, a holistic IaC strategy becomes more significant to enterprises than ever before. It will be interesting to witness the power of Infrastructure as Code for the deployment of Drupal-based web applications. Before we dive into that, let's see how IaC helps in efficient software delivery.

Solving environment drift in the release pipeline

Illustration showing sheets of paper on the left-hand side and a lot of computers on the right-hand side

Infrastructure as Code refers to the governance of infrastructure (networks, virtual machines, load balancers, connection topology) in a descriptive model, leveraging the same versioning as the DevOps team uses for source code. On the same principle that the same source code generates the same binary, an IaC model generates the same environment whenever it is applied. It is an integral DevOps practice and is used in combination with Continuous Delivery.

Infrastructure as Code refers to the governance of infrastructure in a descriptive model by leveraging the same versioning as DevOps team uses for source code

IaC evolved to solve the environment drift in the release pipeline because:

  • Without IaC, teams must maintain the settings of each deployment environment individually.
  • Over time, each environment becomes a snowflake - a unique configuration that cannot be reproduced automatically.
  • Inconsistent environments incur deployment obstacles.
  • With snowflakes, the management and maintenance of infrastructure involve manual processes that are difficult to track and contribute to errors.

Idempotence, a principle of IaC, is the property that no matter what the environment's starting state is, the deployment command always sets the target environment into the same configuration. It is attained either by automatically configuring the existing target or by discarding the existing environment and recreating a fresh one.
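The idea can be illustrated with a tiny sketch (plain Python, purely illustrative): the function below declares a desired end state rather than a sequence of mutations, so running it once or many times converges to the same result.

```python
import os
import tempfile

def ensure_directory(path, mode=0o750):
    """Converge `path` to the desired state: an existing directory with `mode`.

    Safe to run any number of times; the final state is always the same.
    """
    os.makedirs(path, exist_ok=True)  # no error if the directory already exists
    os.chmod(path, mode)

# Applying the "deployment command" twice leaves the environment identical.
target = os.path.join(tempfile.mkdtemp(), "app", "releases")
ensure_directory(target)   # first run: creates the directory
ensure_directory(target)   # second run: changes nothing
print(os.path.isdir(target))  # True
```

Real IaC tools apply the same pattern to servers, networks and services instead of local directories.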
With IaC, DevOps teams can test applications in production-like environments early in the development cycle. These teams expect to provision several test environments, on demand. Infrastructure represented as code can also be validated and tested to avoid common deployment challenges. Simultaneously, the cloud dynamically provisions and tears down environments based on IaC definitions.

Implementing Infrastructure as Code helps in delivering stable environments rapidly and at scale. By representing the desired state of their environments via code, teams avoid manual configuration and enforce consistency. Infrastructure deployments become repeatable and safeguard against runtime issues caused by configuration drift or missing dependencies. DevOps teams can work with a unified set of practices and tools to deliver applications and their supporting infrastructure quickly, reliably and at scale.

Benefits of Infrastructure as Code

Graphical representation showing horizontal bars in light and dark blue colours to depict benefits of Infrastructure as Code


  • Minimising Shadow IT: Allowing a fast response to new IT requirements through IaC assisted deployment ensures higher security, compliance with corporate IT standards and helps with budgeting and cost allocation.
  • Satisfying Customers: Delivering a quality service component with a short time period leads to customer satisfaction and enhanced perception of IT within an organisation.
  • Reducing operational expenditure: An enterprise can configure and deploy a fully tested and compliant new IT infrastructure asset in a matter of minutes, with minimal or no human intervention. This saves an enormous amount of work time and reduces security-related financial risk.
  • Reducing capital expenditure: A developer accomplishing the task of several team members on his own, particularly in the context of DevOps, highly benefits the project capital expenditure.
  • Standardisation: When the creation of new infrastructure is coded, there is consistency in the set of instructions and standardisation.
  • Safer change handling: Standardisation assurance allows safer alterations to take place with lower deviation rates.

Challenges of using Infrastructure as Code

  • Organisational resistance to change: The largest organisational challenges stem from budget limitations, which can deter an organisation's ability to hire or retrain staff and lead to an overall resistance to change.
  • The dearth of expertise in-house: Lack of in-house expertise can pose a technical hurdle.
  • Shortage of tools, skills and the fear of loss of control: As IaC languages are more code-like than script-like, developers are generally more comfortable with them, but this poses issues for the Ops team. Ops is more concerned with configuration control conflicts, as they have traditionally had all the control over configurations.

'Key recommendations' written inside a box at the top and some bullet points follows after that to explain Infrastructure as Code

Infrastructure as Code tools

Infographics showing the market presence of Infrastructure as Code tools
Source: Forrester Wave™: Configuration Management Software For Infrastructure Automation, Q4 ’17

  • The Puppet open source engine emphasises supporting configuration management on numerous platforms: if a system is reachable by IP, then it must be configurable.
  • Puppet Enterprise augments the open source Puppet providing a web-based UI to enable visibility into configurations, dependencies and events.
  • The Chef open source engine leverages an imperative approach with support for several operating systems, containers and cloud services.
  • Chef Automate builds on the Chef open source automation engine which incorporates respective projects of Habitat and InSpec and offers a web-based GUI and dashboard for compliance visibility.
  • The Salt open source project provides the option to run the modular software with or without agents and using push or pull processes.
  • SaltStack Enterprise builds on the open source Salt offering that gives you an enterprise GUI and API for integration.
  • Normation Professional Services sells plug-ins for Windows/AIX support, auditing and HTTP data sourcing integration.
  • Rudder is an open source automation platform that emphasises continuous reliability.
  • The Ansible open source project emphasises minimalism and ease of use. It does not require any agents and relies on SSH and WinRM to remotely control member nodes, which limits resource usage and potential network traffic.
  • Ansible Tower is an enterprise solution for Ansible that emphasises improving the open source project's analytics and compliance capabilities.
  • Microsoft Azure Automation is a SaaS-based suite for process automation.
  • Microsoft PowerShell DSC is a configuration management execution engine which is developed primarily for Windows with support for Linux and MacOS added recently.
  • CFEngine Community Edition is an open source automation engine which is considered the father of modern-day configuration management.
  • The Enterprise version of CFEngine offers a GUI/dashboard to manage and monitor node health, user-based and role-based management, richer reporting, asset management capabilities, and modules to support AIX and Windows.

Infrastructure as Code for Drupal

A digital agency showed how to automate the whole deployment process from start to finish by leveraging Ansible. Ansible is agentless, has a great ecosystem, and its YAML syntax is easy to read, understand and maintain. The same could be automated using any other provisioning tool, such as Chef or Puppet.

Black background with ‘Ansible + Drupal’ written in blue and ‘ A fortuitous DevOps Match’ written in white below it

The project involved making the Ansible playbooks part of the codebase, living alongside the Drupal code; it is considered an industry-wide good practice to keep infrastructure and deployment as part of the code. It is still not technically a 100% Infrastructure-as-Code setup, as only the provisioning scripts were checked in, not the code to spin up the actual servers. The playbooks assume that the servers are already present, with Docker and Docker Compose installed and SSH access available.

This setup made the deployment process consistent and repeatable as any developer with necessary permissions in the team could run the script and get the same results all the time. Moreover, when the build fails, it fails loud and clear where exactly things went wrong.

Challenges in the project

The process did not guarantee a rollback. If, for instance, a deployment fails, you have to manually roll back to the previous state. It does store DB backups, though, so it would not be an arduous task to add a rollback mechanism with a rollback tag and parameters such as which commit to roll back to and which DB backup to reset to.

Steps to be performed

A significant precursor to automating is to document and script each step. They split the tasks into two categories, namely:

  • Setting up the system like creating DB backup directories
  • Running the DB updates via Drush

Ansible has the concept of tags, and two tags were defined here, namely 'setup' and 'deploy'.

The list of setup-only tasks included:

  • Creation of a directory for DB files to persist
  • Creation of a directory for storing DB backups
  • Creation of a directory for storing file backups

The list of tasks for both setup and deployment included:

  • Creation of a backup of files and DB
  • Cloning the correct code, that is, specified branch or bleeding edge.
  • Creating .env file
  • Building and booting the latest containers for all services
  • Running composer install and DB updates, importing config from files and clearing cache (Drupal specific)
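A rough sketch of how such tasks might be tagged in an Ansible playbook follows; the paths, service names and Drush invocation are illustrative assumptions, not the agency's actual code:

```yaml
# deploy.yml - illustrative fragment
- name: Create a directory for storing DB backups
  file:
    path: "{{ backup_root }}/db"
    state: directory
  tags: [setup]

- name: Create a backup of the database
  command: docker-compose exec -T db sh -c 'mysqldump drupal > /backups/db/backup.sql'
  tags: [setup, deploy]

- name: Run Drupal database updates
  command: docker-compose exec -T php drush updatedb -y
  tags: [setup, deploy]
```

Running `ansible-playbook deploy.yml --tags deploy` then executes only the tasks tagged for deployment, while `--tags setup` performs the one-time preparation as well.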

It is important to secure your servers prior to deploying the application. Ansible helps by storing sensitive information - like DB credentials, the SSH key pair and the server user credentials - in encrypted form. This setup enables you to easily build production replicas or non-production environments.
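For instance, secrets can live in a variables file encrypted with Ansible Vault (the file name and variable names below are hypothetical):

```yaml
# group_vars/production/vault.yml
# encrypt with: ansible-vault encrypt group_vars/production/vault.yml
vault_db_password: 'example-password'
vault_deploy_user: 'deployer'
```

Playbooks reference these variables normally; Ansible decrypts the file at runtime when given the vault password or a password file.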

In the years to come

IaC has a bright future with its ability to provision and manage computing resources. While it does come with its own set of implementation barriers, the benefits it delivers far exceed the challenges it currently faces.

As the tools and frameworks associated with Infrastructure as Code mature, it has the potential to become the default standard for deploying and governing infrastructure.

Infographics showing statistics on Infrastructure as Code (IaC) using bar graphs, pie charts and relevant icons.

Technavio analysts forecast the global DevOps platform market to post a CAGR of more than 20% during the period 2018 to 2022. One of the major trends in the global DevOps platform market for 2018-2022 is the increasing adoption of Infrastructure as Code. Organisations are implementing DevOps tools to shift from manual configuration of IT infrastructure to programmable IT infrastructure.

Increase in the adoption rates of Infrastructure as Code is a major trend in the global DevOps platform market

The report goes on to state that one of the most significant reasons contributing to the growth of the global DevOps platform market is the need to reduce time to market. The Asia-Pacific region is projected to see the largest increase in market share, while the Americas and the Europe-Middle East-Africa region, which hold a large market share currently, will witness a decline over the forecast period.


Customer-obsessed technology puts the broader charter of service design on the infrastructure and operations team. I&O leaders should own the design of the full system of interacting parts, sourced from a rich and dynamic software-defined ecosystem. Infrastructure as Code holds great potential to disrupt traditional infrastructure architecture strategies and can be efficacious for Drupal deployments.

With years of expertise in Drupal Development, Opensense Labs has been providing a wondrous digital experience to its partners.

Talk to our Drupal experts at [email protected] to know how we can implement Infrastructure as Code with Drupal to power your digital transformation endeavours.

Nov 01 2018
Nov 01

As you already learned in a previous tutorial, CKEditor, the default WYSIWYG Editor for Drupal 8, can be enhanced through the installation of different plugins. They add buttons to the editor with additional features.

Content editors often need to embed accordion tabs into their articles, for example, to present a group of Frequently Asked Questions with their answers or to visually divide a topic into several subtopics.

The CKEditor Accordion module for Drupal 8 allows editors to insert an accordion directly into the WYSIWYG Editor (and therefore into the node) without the need to configure additional modules or even views.

This tutorial will explain the usage of this module. Let’s start!

Step #1. Install the required modules

  • Open your terminal window and type:

composer require drupal/ckeditor_accordion

Install Composer using your terminal

This will download the latest stable version of the module (currently 1.1.0) to your modules folder.

  • On your Drupal installation click Extend.
  • Search for the module name, click the checkbox.
  • Click Install.

Click Install

Step #2. Configure the Module

  • Click Configuration > CKEditor Accordion Configuration.
  • Check Collapse all tabs by default, if not already checked.
  • Click Save configuration.

Click Save Configuration

  • Click Configuration > Text formats and editors.

Click Text formats and editors

  • Locate the Full HTML format and click Configure.

Click Configure

  • Scroll down and click the Add group button in order to add a new button group.
  • If you don’t see the Add group button, click the link Show group names on the right.

Click the link Show group names

Click the link Show group names

  • Give this button group a proper name, for example, "Accordion".
  • Drag the "Accordion" button and drop it into the newly created group.

Drag the Accordion button and drop it into the newly created group

  • Scroll down to the Enabled filters section.
  • Check Limit allowed HTML tags and correct faulty HTML.

Check Limit allowed HTML tags and correct faulty HTML

  • This will display a vertical tab at the end of the screen.
  • Locate the dl HTML tag and replace it with <dl class>.
  • Click Save configuration.

Click Save configuration

This allows the module to inject the required CSS class, in order to give the accordion the proper styling.
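To see why the class attribute must be allowed, consider the markup the accordion produces, which is roughly of this shape (a simplified sketch):

```html
<dl class="styled">
  <dt><a href="#">First tab title</a></dt>
  <dd>First tab content.</dd>
  <dt><a href="#">Second tab title</a></dt>
  <dd>Second tab content.</dd>
</dl>
```

If the allowed-tags list contained only `<dl>` instead of `<dl class>`, the filter would strip class="styled" and the accordion would lose its styling and behavior.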

Step #3. Create the Content

  • Click Content > Add Content > Basic Page.
  • Make sure that you select Text format HTML.
  • Click the Accordion button.

The module displays an accordion with two tabs by default. In order to add a third tab do the following:

  • Right-click inside the accordion element.
  • Select Add accordion tab after.

Select accordion tab

There are now 3 accordion tabs.

  • Write a title and some text for each of them.
  • Click Save.

You should see the accordion with three collapsed tabs.

You should see the accordion with three collapsed tabs

  • If you want to show the first tab displayed by default, go back to Configuration > CKEditor Accordion and uncheck the Collapse all tabs option.

Step #4. Styling the Accordion

The module adds class="styled" to the dl tag containing all the elements of the accordion, so you have to target this class in order to style the accordion.

For example:

dl.styled > dt > a {
  background-color: red;
}

How to integrate Accordion Tabs into CKEditor for Drupal 8


The CKEditor accordion module lets you insert an accordion at any place of your node with the help of the CKEditor WYSIWYG Editor.

Thanks for reading!

About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Oct 31 2018
Oct 31

Did you miss MidCamp 2018? Or are you just ready to get hyped for 2019? While the MidCamp team is busy getting things ready for 2019, you can re-live all of the amazing sessions from 2018 by checking out our playlist on YouTube. Take time to check out the reaction roundup to find out what others thought of MidCamp 2018 or read through our Fearless Leader’s musings on MidCamp 2018.

If you haven’t been keeping in touch with friends you met at MidCamp, then be sure to join other Drupal Slacks and reconnect with all the folks you met.

Get involved for 2019!

If you’re interested in getting involved with MidCamp 2019, we’re on MidCamp Slack and we’d adore the opportunity to welcome you to our team. We completely empathize with being busy, so don’t feel bad if you can’t contribute as much as you’d like - every little bit helps! You can also contribute by telling us what topics you’re interested in seeing in the 2019 program. 



About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web