Nov 09 2020


Sarah Durham (00:02):

Hey everybody. Welcome to today’s webinar. I am Sarah Durham, and I am going to briefly introduce my colleagues. They will talk a little bit more in a minute and also we’d love you to introduce yourself as people start to arrive. If you are comfortable doing so, you’ll see a chat panel. And if you could chat in to us your name, the name of your organization, your pronouns, and where you are, where you are geographically, so who you are and where you are, would be great. Theresa, you want to say hi. 

Theresa Gutierrez Jacobs (00:50):

Hi, I’m Theresa Gutierrez Jacobs. I am a project manager at Advomatic. And for today, I’m going to just quickly chat my email. If you have any, I don’t know, tech issues or questions or anything like that, that is more tech related to this webinar. Feel free to reach out to me via my email. Otherwise, you can always chat or ask questions, particularly for this webinar here. And Dave, you want to say a quick hi, before we get rolling.

Dave Hansen-Lange (01:18):

Hello. I’m Dave Hansen-Lange and, as for where I am, I’m about an hour from Toronto. I’m the director of technical strategy at Advomatic. I’ve been with Advomatic for about 13 years. And I’ve been doing work with nonprofits on the web for maybe 15 or 17 years.

Sarah Durham (01:42):

Okay. So we’ve got a bunch of people who are already with us, a few more people who might join us in the next couple of minutes, but just to keep the ball rolling and use your time thoughtfully, we’re going to dig into our content for today. And as I said a little bit earlier, I will reintroduce myself. I’m Sarah Durham, I’m the CEO of Advomatic and also Advomatic’s sister agency, Big Duck. Some of you may have noticed that the Zoom we’re using today is a Big Duck/Advomatic shared Zoom. So if you’re wondering what the connection is, there’s some common leadership across both companies. For those of you who might know Big Duck, but don’t know Advomatic: Advomatic builds sturdy sites that support change. We build and we support websites in Drupal and in WordPress. And Advomatic has been around now for, I think, almost 15 years, although its partnership and collaboration with Big Duck and my coming into the company is relatively new.

Sarah Durham (02:43):

I’ve been in it for about two years. And so Dave is going to really take us through our topic today. And Dave, you can advance to your next slide if you’d like, which is this: what should you do with your Drupal 7 website? So Dave’s going to talk us through why this is an issue and a few other things in a minute. What I am going to do throughout this conversation is monitor the chat, which you can see at the bottom of your screen as a little button that says chat. If you click on that, you have the ability to chat privately to the panelists. So if you want to ask a question confidentially, or you don’t want everybody who’s here to see it, just chat to the panelists and only Dave and Theresa and I will see it.

Sarah Durham (03:26):

If you want to chat to everybody and share who you are, like shout out to Rick, who’s already done that. He’s from the National Council of Nonprofits and he’s in the DC area. If you want to share your information with the panelists or to everybody, you can chat to all attendees. Also, you have the ability to specifically ask questions. There’s a Q&A feature in Zoom Webinar. And that will give me the ability to keep an eye on your questions. And some of them I can type back to you and others will be addressed verbally. So throughout the presentation, I’ll be monitoring all of that and we will address your questions perhaps as we go on, certainly at the end if it doesn’t make sense to do so in the webinar. So don’t hesitate to chat, don’t hesitate to ask questions. We are recording today’s session and Theresa will be sending out an email with a link to that recording and the transcript and the resources we’re mentioning later this week or early next week. So you will have all of this and you can share it with any colleagues if that is useful. So with that, we are going to get rolling over to you, Dave, and thanks, Theresa.

Dave Hansen-Lange (04:44):

Okay. Thank you, Sarah. All right. So to kick things off before we get into the details of all the different things that you can do with your website and what might be best for you I thought we should start with some backstory about like, why we’re at this spot and like, what does end of life even mean? Like, it’s software, how can software… and it really all comes down to security. And just to explain a little bit about how security in Drupal works, there is the Drupal security team, and that’s a team of about a dozen people all across the world. And then there’s a group of people even wider than that who contribute things to the team and say, Oh, this could be a problem. We should look into this. And people on the security team, you know, a lot of their time is paid for by their employers or their clients, but a lot of their time they’re just volunteering for free.

Dave Hansen-Lange (05:50):

And you know, there’s a lot of commitment there. Like, they have weeks on call and stuff like that, because security is very important to the Drupal community. And so we don’t want to have those people working forever for free. So the Drupal community at large has decided, okay, thank you for your time of service, people on the Drupal security team, we will let you go after this date. Some of those people work on Drupal 8 too, but people have generally committed to, like, Drupal 7. And so the original date for the end of Drupal 7 was going to be November 2021. But then COVID happened and the Drupal community decided, okay, there’s this extenuating circumstance, we’ll give everybody one more year to figure out what they’re going to do. So now the end-of-life date for Drupal 7 is November 2022, two years from now.

Dave Hansen-Lange (06:56):

Drupal 8, just as an aside, isn’t really what we’re talking about today. Drupal 8’s end of life is November 2021, a year from now. And thankfully, if you do have any Drupal 8 sites, the situation is a lot simpler; if you want to get into that a little bit more, possibly we could at the end of the presentation. Okay. So today we are going to first cover the options that you have in dealing with your Drupal 7 websites. Then we’re going to look at some example scenarios. And by that, I mean like, okay, here’s an organization, they have a website like this, and because of that, they might consider scenario x. And then I’m going to pass things over to Sarah. And Sarah is going to dive into more of the organizational things, like, how do you plan for this and how do you work with this within your organization? All right.

Sarah Durham (08:15):

Hang on one second, Dave, before we dig into this, I also just want to remind everybody feel free to chat in questions and comments as you go, and we’re going to take pauses in between each of these sections. So if you have, as Dave goes through the options, if you have a specific question about one of the options, and it seems like it’s universal to some of the other people who are participating today, I’ll probably pop in and ask that otherwise we’ll save Q&A for the end. Alright, sorry for the interruption.

Dave Hansen-Lange (08:41):

No, no, all good. I’m also going to be muting every now and then to take a sip of tea. I’ve got a sore throat. It’s not COVID, it’s just a cold. And yeah, so I’ll be pausing too as I go. Okay. So what are your options? I’ve grouped these into four main options, and these are listed from most expensive to least expensive. The most expensive option is to start from scratch and build a new website; for most people with a Drupal 7 website, your main options there are to move to Drupal 9 or create something in WordPress. There are some other options you might consider, but those are the two that are applicable to most people. Option B is to upgrade to Drupal 9, and immediately you’re probably thinking: what is upgrading to Drupal 9? How is that different from building a new website in Drupal 9? I’ll explain that when we get there. Another option is to switch to something called Backdrop. Many of you have probably never heard of Backdrop, and so I’ll start us out with what exactly that means. Or you could just stay on Drupal 7. Even though it has reached end of life, there still are ways to keep going on your Drupal 7 website.

Dave Hansen-Lange (10:15):

So, moving to a new website: like I mentioned, the main options for most people are Drupal 9 or WordPress. And just by saying those two names in the same sentence, we immediately get into the topic of what’s better, Drupal or WordPress, and what is right for me? I will touch on this a little bit now, and sort of back up a little bit and say that, for starters, it’s really hard to make an unbiased and fair assessment of the two. But in a general sense, Drupal 9 is really great for websites and organizations that want to do something a little bit more complicated, a little bit more ambitious, a little bit more technological, with more moving parts. And WordPress is generally more applicable to the organizations whose website, in many ways, might be similar to other websites. And yeah, that is a little bit vague. I don’t want to dive too deeply into this topic right now.

Dave Hansen-Lange (11:54):

If you want, we can come back to this in the Q&A at the end. We also have another webinar that we did a couple of months ago on this topic more generally, and we can send along a link to that as well. One last thing on this, though: I will say that when most people compare Drupal and WordPress, they’re not really comparing Drupal and WordPress, they’re comparing the website that someone built for them in Drupal or the website that someone built for them in WordPress. And because of that, they’re often comparing the skills of those people who built the website and not necessarily the underlying technology. That’s part of the reason why this is such a sticky, thorny issue, with a lot of people being on one side or the other. Now, about moving to a new website: you don’t have to do the whole entire thing. You can find ways to do this in bits and pieces, and I’ll show some examples of that later. But since we’re at this point of rethinking what we should generally do with our Drupal website, it’s a great time to think: okay, this section, do we need it anymore? Should it be here? Is there a better way to do this than when we created this website however many years ago?

Dave Hansen-Lange (13:30):

Since many of you may not have seen modern Drupal or WordPress, I’m going to show you some slides here. So on the left, what we see is I am editing a page on a website and I want to add a new component, which is a common term that we use these days, a new component to the page. I can browse through this library of available components and then add one.

Dave Hansen-Lange (14:00):

And then I can see how it’s going to appear on the page. There are many ways to do this in Drupal; Drupal is kind of known for having many ways to solve a problem. What we see in this screenshot is a tool called Paragraphs. That’s a tool that we’ve been using for this problem pretty successfully on several websites. There are other tools within Drupal 9; you may have heard the term Layout Builder, and there are a couple of smaller ones as well. On the right side, we see the administrative listing of all the content on your website. For each site, it’s going to be a little bit different what you decide to list here, but this is just one example of how it looks. And comparing this to WordPress: on the left, this is also how WordPress looks when you want to add a new component to the page, and in the right column there we see the available components that you have. Again, on the right is a screenshot from WordPress of a list of all the content on the website.

Dave Hansen-Lange (15:20):

Looking at these two sets of screenshots, there’s a couple things that might sort of immediately come to mind. WordPress, the administrative interface generally looks a little bit more polished.

Dave Hansen-Lange (15:39):

In some ways WordPress can be a little bit all over the place, in that each plugin or each new thing that you add to your website tends to design things its own way and do its thing its own way. Compared to that, in Drupal each new thing works in a very consistent manner, so it’s easy to move around from section to section on the website. All that to say: either is probably a big step forward from where you are with your Drupal 7 website.

Dave Hansen-Lange (16:18):

All right, so which Drupal 7 websites is this going to be most applicable to, and when should you maybe not consider this option at all? If you are really frustrated with any part of your website, be that how the content is organized, the general backend experience, or the design of the website, if there’s anything about it that you just want to toss and start again fresh, this is a good option to consider. But like I mentioned when I listed these four main options, creating a new website is going to be the most expensive of the options. And in the age of COVID, many of you are probably dealing with some tight budgets, so one of the other options may be the better choice. Also, this might not be a good choice for you if your existing site is very complex. One way to think about this: you built your website so many years ago, let’s say it was five years ago. You put all this work into doing that initial build, but then over those five years you’ve also put in work to make the website better and better. And in this new version of the website that you’re going to create, you want to encompass all of that.

Dave Hansen-Lange (17:52):

It’s going to be a pretty big project. And so it’s just one way to consider looking at your options.

Okay. Option B: you can upgrade to Drupal 9. So how is this different from just creating a new website in Drupal 9? Drupal 9 has these built-in tools that can take your Drupal 7 website and take all that content, all the content structure, all the menus, everything that’s stored in the backend of the website, and upgrade it and make it work in a new Drupal 9 website. But what you don’t get is how that content is presented to visitors. If you go through this upgrade process, you still need to rebuild the way that it’s presented to visitors. Maybe you’re happy with the design of your Drupal 7 website, and so you can just redo that same design in Drupal 9. Or, another option: since we’re here and we’re creating a new website in Drupal 9, you might want to take advantage of that and do a new design.

Dave Hansen-Lange (19:31):

And so, because of all those things, it’s still going to be a big chunk of work, not as big as doing a clean slate and starting from scratch, but still a lot of work involved. One thing you do need to look into before you get too far down this road is: are there any ways in which we solved a problem in Drupal 7 that just have no equivalent in Drupal 9? That has sometimes happened. One example would be locations: let’s say you’ve got a content type in Drupal 7 called offices of your organization, and it’s storing their address and location. That’s almost certainly done in a very different way in Drupal 9, and there isn’t a way to directly go from one to the other, at least not in the same sense as this upgrade process that I talked about before. There may be situations like that, and you’ll have to do something custom or something else that’s a little bit more complicated. It’s just important that you know these things upfront before you get into moving down this road.

Dave Hansen-Lange (21:00):

So who is this good for? I mentioned you’re going to get the same stuff in the backend as you have now. So if you’re happy with that, great, consider this option. I mentioned that the visual presentation, you’ve got to redo that. So if you want a fresh design, this might be an option for you. Again, avoid it if budget is tight; like I mentioned, it’s still a fairly complicated procedure. All right. A third option is to switch to Backdrop.

Dave Hansen-Lange (21:39):

So Backdrop, I said earlier that your main options are WordPress or Drupal. What’s this, what’s this new Backdrop thing? Backdrop is kind of like a different flavor of Drupal. And in the technical parlance, Backdrop is a fork of Drupal 7. And what does cutlery have to do with software? Absolutely nothing. So by fork, we mean fork in the road. You may know that Drupal and WordPress are open-source software. And that means that anybody, anybody really who has the time available to do it, can jump into the project. You got a problem with the way something works, you want to make it better, you can just do that and you can contribute something and get it rolled into the software. But what that also means is that if you don’t like how something works, you can just take it, copy it, and roll with it.

Dave Hansen-Lange (22:42):

And that’s what happened with Backdrop. So, while Drupal 8 was being developed, there were many people in the community who thought, “Oh no, Drupal 8 is looking great and all, but it’s going to be really hard for websites that are on Drupal 7 to get to Drupal 8 and whatever comes in the future.” And they were right. That’s why we’re here. That’s why we’re having this webinar. And so what they did was they took Drupal 7, copied it, called it Backdrop, and started to evolve it, in some of the same ways that Drupal 8 has evolved, only keeping with the Drupal 7 way of doing things and the Drupal 7 styles. And so you have an option to take your website and sort of just take that fork in the road and start moving down the Backdrop trail.

Dave Hansen-Lange (23:42):

What this is going to look like for your website is that you’re still going to have the existing content structure and things in the backend of the website, just like with that upgrade-to-Drupal-9 option. It’s all going to look very similar, if not identical. But different from that upgrade-to-Drupal-9 option, you can still keep the visitor-facing portion of the website. It’s going to need a little bit of tweaking to get onto that Backdrop fork in the road, but that is a relatively much lighter lift. Not to say that you must keep your existing design; you can make some changes and revisions, and you might even consider doing a full redesign. But you don’t have to. As you’ve heard me describe this, you may be thinking that fundamentally the steps involved are pretty similar to the upgrade to Drupal 9.

Dave Hansen-Lange (25:00):

It is, but still, it is almost certainly cheaper than upgrading to Drupal 9, mainly because, like I mentioned, it is just Drupal 7 evolved. So the changes that you have to make to your existing website are immensely smaller. There is, though, some increased risk. What I mean by this: anybody who works with websites for a nonprofit is probably going to know WordPress, and probably getting to know the word Drupal, but probably not going to know the word Backdrop, because it is a much smaller community. Where there are about half a million Drupal websites out there, there may be only a few thousand Backdrop websites. Because of that, there’s enough momentum in the community that we know Backdrop will be here for two years, maybe four years, but it’s harder to see deeper into the future. Whereas Drupal, you know, half a million websites; there are a lot of people working on this, a lot of organizations big and small, and it’s going to be here for probably at least another 10 years, if not longer. Backdrop is a much smaller community, so there’s just not as much certainty about the future.

Dave Hansen-Lange (26:44):

But with that said, Backdrop has committed to the same sort of upgrade structure that Drupal 8 and Drupal 9 have committed to: we’re not going to do a huge change again in the future; we’re going to make all these incremental changes that will make it much easier for you to stay up to date and evolve your website over time.

Dave Hansen-Lange (27:12):

Great. I thought it important to show some visuals of what Backdrop looks like. Looking at these, you might be thinking, “Oh, this looks pretty similar to my Drupal 7 website, but the colors and fonts are more contemporary.” And you are a hundred percent correct in thinking that; like I mentioned, it really is Drupal 7 evolved. But there is more to it. There are some things that are easier on the technical side of working with Backdrop compared to Drupal 7. There are some different ways of managing page layouts. There are other new features in Backdrop that Drupal 7 doesn’t have. But the thing is, if you take this sort of upgrade-from-Drupal-7-to-Backdrop trajectory, you’re not going to get those things all of a sudden. If you want to take advantage of Backdrop’s fancier ways of laying out content on a page, then you’re going to have to have a small project to enable that feature. At first, you’re still going to be working in the same paradigms as you are with Drupal 7. So who is Backdrop great for? Anyone who has a lot of custom code. I was talking earlier about why you might want to avoid building a new website in Drupal 9 if you’ve got a lot of custom stuff. Here, this would be a good option for you, because all that custom stuff probably doesn’t need to change very much; it probably needs to change a little, but it’s not going to be all that significant.

Dave Hansen-Lange (29:16):

If you are happy with your existing design, it’s only going to need a little bit of touch-up to move to Backdrop. I was trying to be consistent here and come up with a reason why you should avoid Backdrop, and I couldn’t really come up with one. I think everyone should at least consider this option. It’s kind of the middle-of-the-road option. You might not choose this option if you’re wanting to do a full redesign, but if all the rest of the things line up for you, then you could do a full redesign in Backdrop. It would be fine. I guess the only reason I can think of now is that if you are super concerned about keeping the website that you have fundamentally the same as it is now, four, five, seven years into the future, then because the future is a little less defined for Backdrop, you may want to avoid it in that case.

Dave Hansen-Lange (30:35):

All right. And the last option: stay on Drupal 7. I mentioned that even though Drupal 7 has reached end of life, there are ways to continue on with it. If you had any websites that were on Drupal 6, you were in this sort of situation when Drupal 6 reached its end of life, and there was a program started called extended support for Drupal 6. The Drupal 7 version of that program is fundamentally identical. I mentioned that many of the security team are volunteering their time, and this program gets around trying to force people to volunteer their time by making it a paid program. The Drupal community has vetted several Drupal agencies to offer this extended support service. And what that means is that as security issues come up, maybe a security issue that comes up in Drupal 8 that might also apply to Drupal 7, this team of extended support people works on fixing that problem in Drupal 7.

Dave Hansen-Lange (32:11):

And so there are kind of two ways to take advantage of this. Number one, you sign up with one of the extended support vendors; you’ll be able to find that list through some links that we’re going to send at the end. Number two, one of the mandates of this program is that the vendors release all of their fixes publicly, as has happened for Drupal 6 as well. And so if you are technically savvy, or you’ve got someone at your disposal who’s technically savvy and can sort out the details and apply these fixes as they come up, this could be a good option for you, too.

Dave Hansen-Lange (33:08):

I think it’s important, though, to take a step back at this point and talk about why you might think about security in different ways. One way to think about security is that there are kind of two groups of websites on the internet: those for whom security is really important, for whatever reason. Maybe they’re doing something that some people find controversial and they have people who are trying to hack into their website. Maybe you are processing credit cards on your website and someone might want to try to break in and steal those credit cards. Maybe you are a news outlet and you get hundreds of thousands of people viewing your content every day, and if someone could break in and get some sort of message out to those people, that might be an incentive as well. So that’s one group of websites: people who have some sort of special security concern. And then there’s kind of everybody else: everybody who knows that security is fundamentally important, but it’s not more important than it is for everyone else in this group.

Dave Hansen-Lange (34:33):

Just by the nature of how I described that, most organizations are going to be in the group where security is important, but not more important than it is for anyone else. Some are going to be in that heightened security group. And those people need to think about more than just, am I getting the bare-necessity basics? They need to ask, am I really doing all that I’m responsible for to ensure the security is as good as it can be? For those people, this may not be the best option, in that you’re not on the most recent and currently secure thing; you’re on this thing that’s on extended support. And that rationale might be purely technical, or it might be purely optics, in that if something were to ever happen to your website and it was discovered, “Oh, they’re running this version of Drupal that was created 10 years ago”.

Dave Hansen-Lange (35:38):

How can that be responsible? And then there are all sorts of politics involved. It’s a situation you want to completely avoid. But for those of us who are in the group where security is important, but not more important than it is for anyone else, this can be a very reasonable option to consider. So: stay on Drupal 7 if you have a really tight budget. And I admit that budget is in the eye of the beholder; for some of you a roomy budget would be a tight budget, and vice versa. Stay on it, like I was talking about, if you don’t have any special security requirements. Avoid it if your site needs a facelift or if you’re frustrated with the backend. Like I mentioned, this is keeping the same website and keeping it the same. So if you want to rip something out and try again, this is probably not the option for you.

Sarah Durham (36:56):

Okay. So, Dave, I’m just going to jump in here for a second before we continue with your sample scenarios. We’ve got about 20 minutes left in our time together, so we’re going to need to move pretty quickly through our sample scenarios and through the make a plan section. But we did get a really good question that I’d love you to try to answer for us before we continue on. It’s from our friend, Rita, and Rita asks, if you choose to migrate or upgrade to Backdrop, what would that mean for your future options to upgrade to Drupal 9?

Dave Hansen-Lange (37:29):

I don’t think it really changes the landscape for that at all. Whether you’re upgrading from Drupal 7 or from Backdrop, it’s fundamentally the same thing. It is technically almost identical, because even though Backdrop has gone down this new trail, at a foundational level the way the content is stored is fundamentally the same. And so if you want to pull that content out of either version of those websites into a new Drupal 9 website, it’s going to be the same process. That could change, though, since it’s a fork in the road: Backdrop could continue to move in a way that’s more different from Drupal 7 (Drupal 7 itself is not moving anywhere at this point). But in my opinion, it’s unlikely to change all that much for the foreseeable year or two.

Sarah Durham (38:37):

Okay, great. So, so back over to you.

Dave Hansen-Lange (38:40):

Okay. So like I mentioned, those options, they were great in theory, but now let’s try and put some of this to practice. I’m going to show, I think, four, maybe five example websites and what is unique or different about those websites and why they might choose one option over the other. As you’re looking through this, you might think, “Oh, that’s nothing like my website”. But I’m going to try and pull some things out here that hopefully are going to apply or at least show some things that you should consider. And you also might recognize some of these websites. Don’t focus on that. We’re going to focus on what is it about these websites? I’m also not going to tell you anything about these websites that isn’t something… Sorry, everything that I’m going to tell you about these websites is something that you could just go to the website, look at and figure out for yourself.

Dave Hansen-Lange (39:45):

So there’s not going to be any sort of private information here that I’m going to show either. In this first example, we’re going to look at the ACLU. On the left here, we see what their website homepage used to look like. On the right side, we see what the homepage looks like now. The prior version of the website was Drupal 7. The homepage, and I say that specifically, the homepage, is now WordPress. You may remember back when I talked about the option of creating a new website that you don’t have to do the whole thing. Here it’s just the homepage. And they’ve actually done the same thing with the blog section: it used to be Drupal, on the left; now it’s WordPress, on the right. You don’t have to do everything.

Dave Hansen-Lange (40:44):

So this is an example of a case page on the ACLU website. This is just one really long page that is cut up into three pieces here. You see at the top, this is all just fairly straightforward content. But then in this section, things start to get more complicated. There are all these other bits of content elsewhere on the website that are related to this case. That’s something that you can do in WordPress, but the more complicated those relationships get, the more awkward it gets to do in WordPress. Then down here at the bottom of the page, things get super complicated. Visually it doesn’t look too bad, but that’s because I think the design was done well. There are hundreds of legal documents that relate to this case, all in these groupings and hierarchies, and it gets super complicated. WordPress is not the best tool for this kind of job. And so this part of the website is still on Drupal. It’s still going to be on Drupal for now; it might evolve in the future, but that’s where it is for now.

Dave Hansen-Lange (42:03):

In another section of the website, there was this sort of intermediary thing where you could show an action within, like, an article or a blog post, to say, “Okay, come take this action”. And during the redesign, in moving bits to WordPress, they stepped back and thought: is this useful? Is this complicated? Is there a way to do this more simply? And this sort of intermediary thing was just chucked, and now there are just links to actions, and there are other ways to show actions without this complicated section of the website. Please consider for your website: what should I get rid of? There’s almost always something.

Dave Hansen-Lange (43:11):

Looking at a different organization, here is one that’s a Drupal 7 website. But you might be thinking, “Oh, this design, it looks fairly current”. And you’d be correct because this organization went through a redesign, I want to say, like, two years ago. And so because of that, looking at those four main options, they can probably throw the create-a-new-website option out because the design still looks great. As long as they’re happy with how the content works on the backend, they could really choose any of the other three options. And, yeah, so consider that.

Dave Hansen-Lange (43:47):

Next, we have a municipality. When I was talking about the option of staying on Drupal 7, that’s maybe not the best option for a municipality. In the news, all the time, we hear stories of such-and-such municipality whose website has been hacked, or whose computer systems have been taken over by ransomware. And so just the optics of staying on Drupal 7 might not be the best choice for them. The design doesn’t look as fresh as those first two examples that we showed. But I’d guess a municipality kind of has different requirements, in that the number one goal is not a flashy design, it’s getting information out to its residents.

Dave Hansen-Lange (44:32):

And so there may be a way for them to choose one of the non-design-related options, and at the same time maybe consider how they can do some sort of restructuring to better present the information that people need to find. Here’s another organization. In looking at the screenshot, you might be thinking the same things that this organization thinks about this website: that the design is very text-heavy, and it is not quite as engaging as they would really like it to be. And so for this organization, one of the first two options is probably the best choice: creating a new website completely, or upgrading this to Drupal 9 with a new design.

Dave Hansen-Lange (45:43):

Lastly, we’re going to look here at something that’s not so much a website as a web platform. AFT has 1,300 websites on this one platform, for states and locals within a state. And the center one up top here is a campaign website. This is an example of a few things. One, it’s not their primary website, it’s not aft.org. And so if you’ve got more than one website, you don’t have to choose the same option for all of them; you can choose different options. Two, there’s a lot of custom stuff involved here, as you might imagine: some stuff around creating a new website, some around connecting all the information together. So because of that, you might lean more toward one of the options that works better for custom stuff and doesn’t require recreating all of that custom stuff in a brand new website.

Sarah Durham (47:07):

Thank you, Dave. So a quick question before we talk about where you go from here. Just want to confirm: the sections of the ACLU site we just looked at, are those still in Drupal, or are those WordPress?

Dave Hansen-Lange (47:22):

That is in Drupal. Yes. 

Sarah Durham (47:26):

Okay, so Dave is going to be advancing some slides for me. So I will ask you, Dave, to go onto the next slide. And basically, before we flip over to your questions and discussion, and in the remaining time we have together, what I want to get you thinking about is how to make a plan. And it’s interesting we’re doing this today because actually I had a call with somebody at a higher ed institution this morning, who’s got an old site and they are debating what their options are. They were describing a lot of feelings of being overwhelmed. I think that, you know, these days with the reality of what’s going on in the world with COVID, with elections, all that kind of stuff, tackling these kinds of big projects is feeling pretty daunting. So I wrote an article about planning and we’ll share links to that article and a bunch of other things.

Sarah Durham (48:20):

Dave has also written a really helpful post about Drupal 7’s end-of-life. At the end of this webinar, and also in the follow-up email, we’ll send you one of the things I wrote. The first step is to make a plan, and you don’t have to have all the answers. You’ve just got to begin by getting your team on the same page about the implications. I think that’s one of the big barriers a lot of people are facing: they’ve got these Drupal sites and there is a real challenge coming up, a real cliff coming up for many of you, that you’ve got to begin to get your team aligned around so that you can budget and plan appropriately. Next slide, please, Dave. So I recommend that you come up with a plan, which you could do in five slides or in two pages.

Sarah Durham (49:05):

And the intention of this plan is actually to give you an internal document you can use to get your team on the same page and build some buy-in. So you can see, first you’d start by outlining the situation and what the risk is to your organization; I think we’ve given you some of the ammunition for that conversation in today’s session and in the articles we’ll share with you. You might want to outline some options if it’s clear to you and the people on your team where you should go from Drupal 7. You might go forward with outlining some options or making a recommendation, but honestly, if you’re not sure which way to go, a good partner should help you get there, too. So if you don’t have the answers already in mind, if it’s not clear to you which way to go, it might be that you map out a few options.

Sarah Durham (49:52):

But your recommendation might be more to find a partner to help you navigate that. Of course Advomatic can do that. We would love to help you make a decision about this, and we do regularly do that as part of our work. There are many people you could work with who could do that. I think one of the things that’s also really important in your plan is mapping out a timeline, not so much for the build or the upgrade that you might do, but all the things leading up to it. If you are looking ahead and thinking what you really need to do is rebuild your website or do a significant upgrade, that’s going to take time and a lot of work, and you’re going to want to get your team on the same page about when the budget needs to be approved, and when you’re going to get rolling so that you’re doing it hopefully well in advance of some of the deadlines that are going to be important within your organization and within the Drupal 7 end-of-life timeline.

Sarah Durham (50:49):

You know, in the non-profit sector, one of the key pieces that is, in my experience, kind of do-or-die for many big projects is building buy-in. So with that plan in mind, I would encourage you to have some conversations, share it, get it into the budgeting process, and keep it alive, because very often you mention these things once or twice, but there are so many things going on that are taking up so much attention and energy for the leaders of organizations today that I think you’re going to have a little bit of work to do to keep it alive, which is the next step, my next slide. Keeping it alive is about not just writing this plan and sending it to people, but continuing to nudge and keep bringing it up. If you know what your milestones are, when people are talking about budgets or budgets are getting approved, those are great opportunities to resurface your plan and go from there.

Sarah Durham (51:47):

Now, many organizations that we work with and talk to are already doing this, and they’re already talking to us and other people about what they’re doing. And a partner can also help you figure out your timeline. So there are a lot of ways to do this; you don’t have to do the heavy lifting on your own. But what you don’t want to do is wait until you’re a couple of months away from these deadlines if they pose significant risks or implications for your organization. So we have a few minutes left to go before the top of our hour, and I want to hear a little bit from you. If you’ve got questions or comments, you can either use the Q&A feature, which you will see at the bottom of your screen, or you can chat them in to Dave and me as we go. We’re going to stop sharing our screen now and take a few questions, and while you chat those in, I also want to remind everybody that we are going to be sending out a follow-up link to the recording here. Theresa is also going to chat out a couple of the articles we mentioned: Dave has written a really helpful article about D7 end-of-life, he’s also written an article about D8, and there’s an article I’ve written about how you plan for this change. So Theresa will chat those all out.

Sarah Durham (53:17):

Okay, Dave, first question for you. Somebody is chatting in about administrators and they’re thinking, well, actually, this is sort of a double-barreled question. Let’s take it in two parts. First in option A, you talked about building a new site as option A. You specifically talked about WordPress and Drupal. Both of those are open source technologies. Why are you talking just about WordPress and Drupal and not any other systems?

Dave Hansen-Lange (53:46):

One of the things that I also talked about was the momentum of these projects. Drupal is large; WordPress is ginormous. And there’s lots of movement in those projects, lots of momentum. As soon as someone has a new idea or a new technology pops up on the internet, things move quickly, and there’s a way to do it on your website in short order. And I also talked about the security group, that’s not the official title, but there are ways like that in which you’re getting the benefits of someone else volunteering their time for your website, which you just don’t get in some of the other options that you have.

Sarah Durham (54:37):

Okay, thank you. And the second part of this question was about comparing WordPress and Drupal in terms of administrators and the options there. This person is talking about how there are lots of different people in their organization who right now have different layers of access in Drupal 7, and they’re wondering if there are any recommendations you have for new platforms based on that kind of complexity.

Dave Hansen-Lange (55:01):

Yeah, so the area of editorial permissions and controls, that’s one of the big differentiators between Drupal and WordPress. WordPress has some basic systems around this role can do this, or this role can do that. In Drupal, we can make things a whole lot more complicated, like: people who manage this section of the website can upload images, other people can use those images, but only the original group of people can edit them. There are all sorts of more complicated things you can do in Drupal.

Sarah Durham (55:38):

Okay, so there’s a question here about the difference between a Drupal new build and a Drupal upgrade in terms of cost. And actually, would you mind just bringing that slide up again, because somebody chatted to me that they arrived a bit late and they didn’t see it. I think it’s your slide number six, which outlines all the options. Let’s just quickly go back to that slide for a second and share that; I think the question that just got chatted in to me relates to this. So on slide six, you mapped out a bunch of different options, ranging from building a new site to staying on Drupal 7. And those were ranked, as you talked about them, from most expensive to least expensive. So you said building a new site is the most expensive, staying on Drupal 7 is the least expensive, and then upgrading or switching to Backdrop were in between. So the question is about the cost differential between building a new site in Drupal 9 and upgrading to Drupal 9. I assume that there are additional costs for design, for UX, things like that, in building a new website, but how significant is that differential? What other variables inform the cost difference there?

Dave Hansen-Lange (57:06):

Yeah, so I talked about how in any of these higher options... well, no, let me rephrase that. In the two middle options, you have the option of how much redesign you want to do, of course. And that’s probably the biggest thing that affects how big or small that upgrading-to-Drupal-9 project is going to be. But let’s say you wanted a redesign, and you compare upgrading to Drupal 9 versus creating a new website in Drupal 9. It’s difficult to be put on the spot, but I don’t know, 80% or 90% of the cost, since you’re doing a full redesign. Upgrading to Drupal 9 and moving to a new website start to become more similar: the more you’re redesigning, the more similar the cost.

Sarah Durham (58:01):

Okay, thank you. That sounds like what we were expecting. So I am just skimming through your questions and it looks like a couple of other questions that we have here are pretty unique to specific organizations, so I’m going to follow up directly with those organizations since we are just about out of time. I want to thank Theresa and Dave for joining us today. Dave, thank you for imparting your wisdom on this topic. And I want to thank everybody who took the time to log in and watch this. I hope this has been helpful for you. If you have specific questions or concerns or things you want to pick our brain about, you can always email us at [email protected] or [email protected]. We’d be happy to get on the phone with you, talk a little bit about your situation if that is of use to you. And again, Theresa will be sending out a link to these articles and the recording to you in just a few days. So thank you, all. And thank you all for the excellent work you do to make the world a better place. Be well, thanks.

Oct 22 2020

A common frustration for Drupal 8 (and 9) site builders is the inability to change text fields from plain text to filtered text through the administrative interface. This was something that was easy to do in Drupal 7 by editing the field’s settings and changing the value for Text processing.

Sometimes requirements for a field change, both during the build phase and long after a site has been in production, and it would be convenient to toggle on a rich text editor and text filtering with minimal effort.

In Drupal 8, there are five types of text fields in core:

  • Text (plain)
  • Text (plain, long)
  • Text (formatted)
  • Text (formatted, long)
  • Text (formatted, long, with summary)

The first two are actually string fields and don’t allow any formatting. If a user enters HTML tags, they are ignored and displayed as plain text.

The last three will allow HTML tags, depending on the settings for the Text Format that the user chooses when entering content. However, only the last two will show the WYSIWYG editor (if it’s associated with the selected text format).

But what happens if midway through your build process, or months after your site has launched, the requirements for that text field change? Your client or designer decides they now want to allow some formatting in a field that was originally Text (plain) or Text (plain, long). And they want their editors to be able to use a WYSIWYG, so they don’t have to deal with HTML code. What do you do?

The long way: create new field, migrate data, reconfigure

One solution is to write an update hook that will create a new field, migrate existing data to it, and delete the old field. If the field is renamed, you also have to consider reconfiguring any views, entity references, display modes, etc. that referred to the old field. Changing a field type this way is entirely possible, but more time consuming and error prone.
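For reference, here is a rough sketch of what that long way can look like, assuming a hypothetical plain text field on a hypothetical "article" content type, a small data set, and the full_html text format. A real update hook would batch the node loads and also update any views, form displays, and view displays that referenced the old field.

use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;

/**
 * Replace a plain text field with a formatted text field and copy its data.
 */
function mymodule_update_8901() {
  $node_storage = \Drupal::entityTypeManager()->getStorage('node');

  // 1. Remember the current plain text values.
  $values = [];
  foreach ($node_storage->loadByProperties(['type' => 'article']) as $node) {
    $values[$node->id()] = $node->get('field_myplaintextfield')->value;
  }

  // 2. Create the replacement formatted text field (storage and instance).
  FieldStorageConfig::create([
    'field_name' => 'field_myformattedtextfield',
    'entity_type' => 'node',
    'type' => 'text_long',
  ])->save();
  FieldConfig::create([
    'field_name' => 'field_myformattedtextfield',
    'entity_type' => 'node',
    'bundle' => 'article',
    'label' => 'My formatted text field',
  ])->save();

  // 3. Copy the data across, assigning a text format.
  foreach ($values as $nid => $value) {
    $node = $node_storage->load($nid);
    $node->set('field_myformattedtextfield', [
      'value' => $value,
      'format' => 'full_html',
    ]);
    $node->save();
  }

  // 4. Delete the old field instance and its storage.
  FieldConfig::loadByName('node', 'article', 'field_myplaintextfield')->delete();
  FieldStorageConfig::loadByName('node', 'field_myplaintextfield')->delete();
}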

Wouldn’t it be nice if you could simply enable a WYSIWYG on that plain text field and be done with it? Especially if your client is on a tight budget. It’s actually possible to do this in a custom module, with a few lines of code.

The short way part 1: add a form alter

First you’ll need to add a form_alter function in your custom module, most likely in the .module file. There are many ways to add a form_alter in Drupal 8, and those are documented elsewhere. See the entry about hook_form_alter in the Drupal API.

You may also need to add some conditions so that the field is only changed on certain forms or for certain content types–this is also beyond the scope of this article.
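As a quick illustration, a minimal guard like this checks the form ID before touching anything (the "article" form IDs below are hypothetical; substitute your own content types and forms):

use Drupal\Core\Form\FormStateInterface;

function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Only alter the add/edit forms for a hypothetical "article" content type.
  if (!in_array($form_id, ['node_article_form', 'node_article_edit_form'], TRUE)) {
    return;
  }
  // ... the field alterations shown in the examples below go here ...
}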

In this example, I’m adding a general hook_form_alter() that will apply to all forms regardless of entity type. If you have a Text (formatted) field, you may want to enable a WYSIWYG on it to make it easier for editors to create the content. Because it’s already a formatted text field, the form alter is very simple.

use Drupal\Core\Form\FormStateInterface;

function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Only act when the formatted text field is present on this form.
  if (isset($form['field_myformattedtextfield'])) {
    // Render the widget as a textarea so the WYSIWYG editor can attach to it.
    $form['field_myformattedtextfield']['widget'][0]['#base_type'] = 'textarea';
  }
}

We are only changing one value associated with the widget: we change the base_type to textarea. The editors will see a WYSIWYG, and the data will be saved and displayed as formatted text.

If you want to add a WYSIWYG widget on a Text (plain) or Text (plain, long) field, it’s a little trickier. There are a few more widget attributes to alter.

use Drupal\Core\Form\FormStateInterface;

function mymodule_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  if (isset($form['field_myplaintextfield'])) {
    // Fetch the entity object.
    $entity = $form_state->getFormObject()->getEntity();
    // Get the current value stored for the field.
    $value = $entity->field_myplaintextfield->getString();
    // Change the base type for this field to a textarea.
    $form['field_myplaintextfield']['widget'][0]['#base_type'] = 'textarea';
    // Change the type of field to formatted text.
    $form['field_myplaintextfield']['widget'][0]['#type'] = 'text_format';
    // Recommended: set a default text format. When rendering, you'll have to
    // manually set this to make the field use formatting (see next section).
    $form['field_myplaintextfield']['widget'][0]['#format'] = 'full_html';
    // Set the default value to the currently stored value.
    $form['field_myplaintextfield']['widget'][0]['#default_value'] = $value;
  }
}

This will give us the WYSIWYG where we want it, but the value is still stored and displayed as plain text. We need to add another function to transform it for output.

The short way part 2: let the fields render as formatted text

For Text (plain) or Text (plain, long) fields, we have to tell Drupal to run the stored value through one of the Text Format filters and render it as formatted text. This involves two functions and a configuration setting.

In your custom module, add a hook_field_formatter_info_alter to allow the plain text field types to use the default text formatter:

function mymodule_field_formatter_info_alter(array &$info) {
  // Let the string field types use the text formatter.
  $info['text_default']['field_types'][] = 'string';
  $info['text_default']['field_types'][] = 'string_long';
}

Then, add a template_preprocess_field function to tell Drupal which text format to use when that field is displayed as filtered text. Since this isn’t a regular formatted text field, Drupal doesn’t store that information the way it does for the standard formatted text field types. Be sure to use the same text format that you used in the hook_form_alter().

function mymodule_preprocess_field(&$variables, $hook) {
  if ($variables['field_name'] == 'field_myplaintextfield') {
    $variables['items']['0']['content']['#format'] = 'full_html';
  }
}

Lastly, we go to the “Manage display” configuration for the field in question, and tell Drupal to use the “Default” format for that field. If you are using Configuration Synchronization, you’ll notice this affects the “type” in the “Entity view display” configuration for this field.

...
  field_myplaintextfield:
    weight: 100
    label: above
    settings: {  }
    third_party_settings: {  }
    type: text_default
    region: content
...

Voila! That should be it. View your content and check that the plain text field is now being rendered with HTML formatting.

Conclusion

With just a few lines of code, we added a WYSIWYG editor to a plain text field and enabled Drupal to display its contents as formatted text. This technique will work for both Drupal 8 and Drupal 9. You can refine this approach depending on your needs to display formatted text only for certain view modes, entity types, field instances, etc.
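For instance, a small variation on the earlier preprocess function could limit the formatted rendering to a single view mode (the "teaser" view mode here is just an illustration):

function mymodule_preprocess_field(&$variables, $hook) {
  // Only force the text format for this field when it is rendered as a teaser.
  if ($variables['field_name'] == 'field_myplaintextfield'
    && $variables['element']['#view_mode'] == 'teaser') {
    $variables['items'][0]['content']['#format'] = 'full_html';
  }
}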

Want to add this functionality to your site? Contact us if you’d like to hear more about our services.

Sep 23 2020

Working in digital design and development, you grow accustomed to the rapid pace of technology. For example: After much anticipation, the latest version of Drupal was released this summer. Just months later, the next major version is in progress.

At July’s all-virtual DrupalCon Global, the open-source digital experience conference, platform founder Dries Buytaert announced Drupal 10 is aiming for a June 2022 release. Assuming those plans hold, Drupal 9 would have the shortest release lifetime of any recent major version.

For IT managers, platform changes generate stress and uncertainty. Considering the time-intensive migration process from Drupal 7 to 8, updating your organization’s website can be costly and complicated. Consequently, despite a longtime absence of new features, Drupal 7 still powers more websites than Drupal 8 and 9 combined. And, as technology marches on, the end of its life as a supported platform is approaching.

Fortunately, whatever version your website is running, Drupal is not running away from you. Drupal’s users and site builders may be accustomed to expending significant resources to update their website platform, but the plan for more frequent major releases alleviates the stress of the typical upgrade. And, for those whose websites are still on Drupal 7, Drupal 10 will continue offering a way forward.

The news that Drupal 10 is coming sooner rather than later might have been unexpected, but you still have no reason to panic just yet. However, your organization shouldn’t stand still, either.

Image via Dri.es

The End for Drupal 7 Is Still Coming, but Future Upgrades Will Be Easier

Considering upgrading to Drupal 8 involves the investment of building a new site and migrating its content, it’s no wonder so many organizations have been slow to update their platform. Drupal 7 is solid and has existed for nearly 10 years. And, fortunately, it’s not reaching its end of life just yet.

At the time of Drupal 9’s release, Drupal 7’s planned end of life was set to arrive late next year. This meant the community would no longer release security advisories or bug fixes for that version of the platform. Affected organizations would need to contact third-party vendors for their support needs. With the COVID-19 pandemic upending businesses and their budgets, the platform’s lifespan has been extended to November 28, 2022.

Drupal’s development team has retained its internal migration system through versions 8 and 9, and it remains part of the plan for the upcoming Drupal 10 as well. And the community continues to maintain and improve the system in an effort to make the transition easier. If your organization is still on Drupal 7 now, you can use the migration system to jump directly to version 9, or version 10 upon its release. Drupal has no plans to eliminate that system until Drupal 7 usage numbers drop significantly.

Once Drupal 10 is ready for release, Drupal 7 will finally reach its end of life. However, paid vendors will still offer support options that will allow your organization to maintain a secure website until you’re ready for an upgrade. But make a plan for that migration sooner rather than later. The longer you wait for this migration, the more new platform features you’ll have to integrate into your rebuilt website.

Initiatives for Drupal 10 Focus on Faster Updates, Third-Party Software

In delivering his opening keynote for DrupalCon Global, Dries Buytaert outlined five strategic goals for the next iteration of the platform. Like the work for Drupal 9 that began within the Drupal 8 platform, development of Drupal 10 has begun under the hood of version 9.

A Drupal 10 Readiness initiative focuses on upgrading third-party components that count as technological dependencies. One crucial component is Symfony, which is the PHP framework Drupal is based upon. Symfony operates on a major release schedule every two years, which requires that Drupal is also updated to stay current. The transition from Symfony 2 to Symfony 3 created challenges for core developers in creating the 8.4 release, which introduced changes that impacted many parts of Drupal’s software.

To avoid a repeat of those difficulties, it was determined that the breaking changes involved in a new Symfony major release warranted a new Drupal major release as well. While Drupal 9 is on Symfony 4, the Drupal team hopes to launch 10 on Symfony 6, which is a considerable technical challenge for the platform’s team of contributors. However, once complete, this initiative will extend the lifespan of Drupal 10 to as long as three or four years.

Other announced initiatives included greater ease of use through more out-of-the-box features, a new front-end theme, creating a decoupled menu component written in JavaScript, and, in accordance with its most requested feature, automated security updates that will make it as easy as possible to upgrade from 9 to 10 when the time comes. For those already on Drupal 9, these are some of the new features to anticipate in versions 9.1 through 9.4.

Less Time Between Drupal Versions Means an Easier Upgrade Path

The shift from Drupal 8 to this summer’s release of Drupal 9 was close to five years in the making. Fortunately for website managers, that update was a far cry from the full migration required from version 7. While there are challenges, such as ensuring your custom code is updated to use the most recent APIs, the transition is doable with a good tech team at your side.

Still, the work that update required could generate a little anxiety given how comparatively fast another upgrade will arrive. But the shorter time frame will make the move to Drupal 10 easier for everybody. Less time between updates also translates to less deprecated code, especially if you’re already using version 9. But if you’re not there yet, the time to make a plan is now.

Jul 10 2020

Things aren’t going the way you planned. We’re now in a recession, the pandemic has caused unexpected challenges, and your budgets have been cut. Welcome to the summer of 2020. Still, your website is more important than ever. Your donors, clients, members, and advocates are expecting it to be up-to-date, easy-to-use, and bug-free.

To add insult to injury: Drupal 7 (D7) end-of-life is now looming on the horizon. Although the good folks at Drupal.org notified us in early 2019, most nonprofits haven’t had the time or budget to get rolling yet. And if you’re working on a team that’s had its resources cut, it may feel like an impossible set of circumstances to navigate.

But don’t panic. The Drupal community just announced we’ve got an extra year. As Advomatic’s Dave Hansen-Lange pointed out in an earlier article about D7 end-of-life, there are lots of options that can help you not only manage this proactively but help you come out on top looking like a tech rockstar. Here’s a guide. 


Step one: go from abstract fear to tangible plans. 

In an ideal world you would be building a new website in 2021 that’s ready to go live early in 2022. But it’s not essential that you make the move off of D7 before the November 2022 end-of-life date. What is essential is that you have a plan that you and your leadership feel works. 

To get started, make sure that you or your team have a clear understanding of the implications of keeping your site on D7 after November 2022. It may help to facilitate a conversation within your website team ASAP about how unique and mission-critical your website’s security is, for instance, and this article will give you a useful overview of the risks. Consider making some background reading a requirement for participation in that conversation so your team gets better informed.

Next, get on the same page about what’s holding your organization back from leaving D7. Is it the cost of building a brand new site? A lack of understanding or focus on D7 end-of-life and its implications with your leadership? Is it confusion about whether to stay in Drupal or consider a move to WordPress, Backdrop, or another CMS? Is it your staff’s limited capacity to manage a new website build right now? These are the common scenarios most nonprofits are facing — and they all have solutions, ranging from doing some internal educating, to doubling down on support for your D7 site for a while longer, to finding a partner who can do more for you now.

Finally, draft a pragmatic plan for your organization. Now that you’ve got a grip on what D7 means for your organization and your team’s ability to navigate it in the near term, you can craft a plan. Your plan should take the folks who must support it on a journey of understanding and, ideally, keep them out of the weeds if this isn’t their job.

We recommend crafting a plan in Google Slides, PowerPoint, or other presentation deck with just five slides: 

  • The situation: A sentence or two explaining D7’s end-of-life.
  • The risk for your organization: A sentence or two explaining what the implications are for your org.
  • The options: Bullets that outline the top 2-3 options for your organization specifically. 
  • Our recommendation: Bullets detailing your recommendation. 
  • Proposed timeline: A high-level timeline detailing when decisions must be made and actions taken to fulfill your recommendation.

Can’t say it all in a few slides? Use the notes area to add examples or detail if necessary. But try to resist creating lots of text-based slides with millions of bullets or detail. You’ll be more successful at getting support from your leadership if you can simplify the complexity of this issue for them and demonstrate you’ve already thought it through well, so they can trust your recommendation. 

Step two: educate your colleagues and get their buy-in for your plan. 

If you’ve completed Step One, odds are good your plan was crafted collaboratively with any colleagues who work on the web team at your organization. But if not, this is a good time to review it with them and make sure everyone feels good about it. You’ll want everyone aligned and on board so there’s no confusion or mixed signals communicated downstream, and so your recommendations can be integrated into your next budgeting cycle.

The next step is to take your plan to your leadership. In most nonprofits this will involve presenting it to your Executive Director or CEO and/or COO. While you may not always present formally, I recommend you plan to do so here. Take a few minutes to practice walking through your slide deck, perhaps with family or friends first, so you can present it quickly and with confidence. The prep time you invest will not only make the meeting go more smoothly, it might very well save you time and energy in subsequent conversations.

If your greatest barrier is the budget and you are suggesting that your organization consider reinstating some funds for a new website, you may also need to present or share your deck with the board. 

When you present, be sure to leave time to answer questions and ask directly for feedback on your recommendations and proposed timeline. If you can leave the meeting with a clear sense of what is working for them and what isn’t, you’ll be better equipped to revise your plan if needed or put other balls into motion. If you’re presenting your plan via Zoom consider recording it. If you are able to get through it in 10 minutes or less, sharing the video with colleagues or board members may be a faster and easier way to educate and build buy-in for your plan. 

Step three: keep it top-of-mind

Experts have studied and written about the importance of repetition in reaching people and getting them to remember new things. Take a page from their playbook and plan to repeat your concerns, suggestions, and timeline proactively. Consider setting reminders to follow up with colleagues at key decision-making junctures, bringing your plan up again in management meetings, asking for updates from your E.D., or whatever feels appropriate to your organization’s culture and practices.

A key moment to keep your recommendations top-of-mind will be when you’re budgeting for your next fiscal year. The more you’ve got folks on the bus already, the more likely you will be to get this project supported during the lean year(s) ahead. 

 

Step four: sleep well at night

Regardless of whether things turned out exactly as you hoped, you’ll sleep better at night knowing that you proactively addressed Drupal 7’s end-of-life and led your organization through a thoughtful process to manage it. You might also have inspired your colleagues to see your leadership and management skills in a new light, too. 

Mar 20 2020

On June 24, 2020, Drupal.org announced that Drupal 7’s end of life had been extended until November 2022 because of the impact of COVID-19 on budgets and capacity. This article remains relevant, but please note that the dates have been pushed back a year.

If you have a Drupal 7 website, you might have already heard that the official end-of-life date for Drupal 7 has been set for November 2021. Many organizations should upgrade their Drupal 7 sites before then. But that might not be required. Here’s how you figure out what you need to do.

“What does Drupal 7 End-of-Life mean?”

First let’s talk about what EOL means for Drupal. The main thing is security updates. 

Drupal has a highly regarded security team who manage security for both core Drupal and thousands of public modules, themes and distributions that add additional features. When a security problem is found with Drupal core, the team fixes the problem and publishes advisories that explain vulnerabilities, along with steps to mitigate them. All of this is contributed publicly and freely, just like you would expect from open source software. 

The security team supports versions of Drupal until they reach their end-of-life. 

But after the EOL, the baton is passed along to an Extended Security Support team. This team is composed of pre-vetted Drupal agencies, and they are commercially funded by those clients who want to pay for the extended security support. They are mandated to publicly release fixes for most of the security vulnerabilities that they find. 

“Hold on — What level of security support do I need?”

Before we talk about what you should do about D7 EOL, you first need to think about how important security is for your website.

  • Are there people who are actively trying to attack your website (maybe because of your strong stance on a particular issue)?
  • Does your website process commercial transactions? (Most non-profit websites these days use third-party websites to process donations and event registrations.)
  • Does your website collect a lot of personally identifiable information (PII)? This relates back to the first point: if there’s lots of valuable PII, an attacker will be more interested in trying to steal it. 

If you answered “yes” to any of these questions, then security is of extra importance for you. 

“I won’t have the budget for a big website rebuild before November 2021”

It’s going to be okay, we’ve got a few options for you. You’ll fall into one of the following categories:

1. “Security is really important for our website, we need Extended Security Support”

Regardless of whether you are an existing client, or someone we’ve never worked with before, please reach out to us and let us know if we can help.

2. “Security is just as important to our website as it is for every other website, but not in an extra special way”

If your website does not have a reason for someone to actively try to attack it, then you only need to be guarded from publicly known security vulnerabilities. That way, you’re protected against the automated attacks that hit every website. Typically those kinds of automated attacks are either trying to use your web servers to mine bitcoin, or lock up your website and demand a ransom. 

When Drupal 6 reached end-of-life in 2016 we continued to support our Drupal 6 clients using the publicly released updates from the Extended Security Support team. Our last Drupal 6 client just got a new website a few months ago! 

We’ll do the same when Drupal 7 reaches end-of-life. When a Drupal 7 update is released, we’ll update your website, just like we already do for all of our Drupal and WordPress support and maintenance clients.

3. “Help, I have no idea what I need!”

No problem. We can help here too. Just let us know. 

 

Conclusion

Regardless of where you’re at — or where you’re going next — we’re here to help. Drop us a line.

Mar 12 2020


In a word-association game, one of the first things people would say in response to “open source” is that it’s free. If any of those people are in the position of purchasing software licenses for a business or organization, that makes open source (a.k.a., free) definitely a benefit worth exploring. Open source has the potential to save thousands of dollars or more, depending on the software and the size of the organization.

Even though eliminating a budget line item for licensing costs may be enough to convince some organizations that open source is the way to go, it’s actually only one of several compelling reasons to migrate from proprietary platforms to open-source architecture. 

In a debate on open-source vs. licensed platforms, the affirmative argument will include these four additional points:

Development Freedom

When businesses provide workstations for their employees, they choose (often inadvertently) the framework on which their organizations operate. For example, if a business buys Dell computers, it will operate within the Microsoft Windows framework. This isn’t necessarily a bad thing. A business with limited IT and development resources won’t have to worry about how to keep its operating system working or whether business applications or security solutions are available. Microsoft has a line of solutions and partnerships that can provide what they’re looking for. 

With a system built on an open-source platform, on the other hand, it may take more resources and work to keep it running and secure, but it gives developers the freedom to do exactly what the end user needs. You aren’t limited by what a commercial platform enables you to do. 

In some markets, forgoing the status quo for development freedom sounds like a risk. It’s a major reason that government users lag behind the commercial space in technology. They’re committed to the old systems that they know are robust, secure, and predictable at budget time — even though they’re outdated. When those organizations take a closer look, however, they quickly realize they can offset development costs through the greater visibility, efficiency, and productivity that a platform built specifically to support their operations can provide.

Open-source platforms are also hardware agnostic, giving organizations more latitude when it comes to the computers, mobile devices, and tools they can use, rather than being locked into limited, sometimes expensive, options for hardware. 

Moreover, development freedom delivers more ROI than merely decreasing current costs. Open-source platforms give developers the freedom to customize systems and innovate. If your system enabled you to expand your reach, better control labor costs, and support new revenue streams, what impact could that have on your business?

Interoperability

Enterprises and manufacturers have traditionally guarded their proprietary systems, which gave them an edge in their markets and control over complementary solutions and peripherals end users needed. Those same proprietary systems, however, could now be a business liability. Many markets are moving toward open source to provide greater interoperability, and businesses continuing to use proprietary platforms will increasingly be viewed as less desirable partners. 

Military avionics is a prime example. This industry is migrating to the Future Airborne Capability Environment (FACE) Technical Standard. Administered by the FACE Consortium, this open standard aims to give the U.S. Department of Defense the ability to acquire systems more easily and affordably and to integrate them more quickly and efficiently.  

You’ll also find a preference for open-source architecture in some segments of the tech industry, such as robotics. The Robot Operating System (ROS) is an open set of tools, libraries, and conventions that standardizes how robots communicate and share information. ROS simplifies the time-consuming work of creating robotic behaviors, and ROS 2 takes that objective further by giving industrial robot developers support for multirobot systems, safety, and security.

As Internet of Things (IoT) technology adoption grows, more operations are experiencing roadblocks connecting legacy equipment and enabling the free flow of data — which open-source architecture can overcome. Furthermore, IoT based on open-source components allows networks to expand beyond the four walls of a facility to connect with business partners, the supply chain, and end users. The Linux Foundation’s Zephyr Project, for example, promotes an open-source, real-time operating system (RTOS) that enables developers to build secure, manageable systems more easily and quickly.

Faster Time to Market

Open-source projects can also move more quickly than development on a proprietary platform. You may be at the mercy of the vendor during the development process if you require assistance, and certifying hardware or applications occurs on their timelines.

That process moves much more quickly in an open source community. Additionally, members of the community share. Some of the best developers in the industry work on these platforms and often make their work available to other developers so they don’t need to start from scratch to include a feature or function their end user requires. A modular system can include components that these developers have created, tested, and proven — and that have fewer bugs than a newly developed prototype. 

Developers, using prebuilt components and leveraging an open source community’s expertise, can help you deploy your next system more quickly than starting from ground zero. 

Business Flexibility

Open-source architecture also gives a business or organization advantages beyond the IT department. With open source, you have more options. The manager of a chain of resorts facing budget cuts, for example, could more easily find ways to decrease operating expenses if her organization’s system runs on an open-source platform. A chain that operates on a commercial platform, however, may have to find other options, such as reducing staff with lay-offs.  

Open-source architecture also decreases vendor lock-in. In a world that’s changing at a faster and faster pace, basing your systems on open-source architecture gives you options if a vendor’s company is acquired and product quality, customer service, and prices change. It also gives you flexibility if industry standards or regulations require that you add new features or capabilities that your vendor doesn’t provide, decreasing the chances you’ll need to rip and replace your IT system.

The Price of Open Source

To be perfectly honest in the open source vs. commercial platform debate, we have to admit there is a cost associated with using these platforms. They can’t exist without their communities’ contributions of time, talent, and support. 

At Mobomo, for example, we’re an active part of the Drupal open-source content management system (CMS) platform. Our developers are among the more than 1 million members of this community that have contributed more than 30,000 modules. We also take the opportunity to speak at Drupal community events and give back to the community in other ways. 

Regardless of how much we contribute to the community, however, it’s never exceeded the payback. It’s enabled lower total cost of ownership (TCO) for us and our clients, saving millions of dollars in operating expenses. It has ramped up our ability to create and innovate. It’s also allowed us to help build more viable organizations and valuable partnerships. 

The majority of our industry agrees with us. The 2019 State of Enterprise Open Source report from Red Hat asked nearly 1,000 IT leaders around the world how strategically important open source is to an enterprise’s infrastructure software plans. Among respondents, 69 percent reported that it is extremely important, citing lower TCO, access to innovation, security, higher-quality software, support, and the freedom to customize as the top benefits.

Only 1 percent of survey respondents said it wasn’t important at all. 

Which side of the open-source vs. commercial platforms argument do you come down on?

Contact us to drop us a line and tell us about your project.

Jan 23 2020

In the Drupal support world, working on Drupal 7 sites is a necessity. But switching between Drupal 7 and Drupal 8 development can be jarring, if only for the coding style.

Fortunately, I’ve got a solution that makes working in Drupal 7 more like working in Drupal 8. Use this three-part approach to have fun with Drupal 7 development:

  • Apply Xautoload to keep your PHP skills fresh, modern, and compatible with all frameworks and make your code more reusable and maintainable between projects. 
  • Use the Drupal Libraries API to use third-party libraries. 
  • Use the Composer template to push the boundaries of your programming design patterns. 

Applying Xautoload

Xautoload is simply a module that enables PSR-0/4 autoloading. Using Xautoload is as simple as downloading and enabling it. You can then start using use and namespace statements to write object-oriented programming (OOP) code.

For example:

xautoload_example.info

name = Xautoload Example
description = Example of using Xautoload to build a page
core = 7.x
package = Midcamp Fun

dependencies[] = xautoload:xautoload

xautoload_example.module

<?php

use Drupal\xautoload_example\SimpleObject;

function xautoload_example_menu() {
  $items['xautoload_example'] = array(
    'page callback' => 'xautoload_example_page_render',
    'access callback' => TRUE,
  );
  return $items;
}

function xautoload_example_page_render() {
  $obj = new SimpleObject();
  return $obj->render();
}

src/SimpleObject.php

<?php

namespace Drupal\xautoload_example;

class SimpleObject {

  public function render() {
    return array(
      '#markup' => "<p>Hello World</p>",
    );
  }

}

Enabling and running this code causes the URL /xautoload_example to spit out “Hello World”. 

You’re now ready to add in your own OOP!

Using Third-Party Libraries

Natively, Drupal 7 has a hard time autoloading third-party library files. But there are contributed modules (like Guzzle) out there that wrap third-party libraries. These modules wrap object-oriented libraries to provide a functional interface. Now that you have Xautoload in your repertoire, you can use its functionality to autoload libraries as well.

I’m going to show you how to use the Drupal Libraries API module with Xautoload to load a third-party library. You can find examples of all the different ways you can add a library in xautoload.api.php. I’ll demonstrate an easy example by using the php-loremipsum library:

1. Download your library and store it in sites/all/libraries. I named the folder php-loremipsum. 

2. Add a function implementing hook_libraries_info to your module by pulling in the namespace from Composer. This way, you don’t need to set up all the namespace rules that the library might contain.

function xautoload_example_libraries_info() {
  return array(
    'php-loremipsum' => array(
      'name' => 'PHP Lorem Ipsum',
      'xautoload' => function ($adapter) {
        $adapter->composerJson('composer.json');
      },
    ),
  );
}

3. Change the page render function to use the php-loremipsum library to build content.

use joshtronic\LoremIpsum;

function xautoload_example_page_render() {
  $library = libraries_load('php-loremipsum');
  if ($library['loaded'] === FALSE) {
    throw new \Exception("php-loremipsum didn't load!");
  }
  $lipsum = new LoremIpsum();
  return array(
    '#markup' => $lipsum->paragraph('p'),
  );
}

Note that I needed  to tell the Libraries API to load the library, but I then have access to all the namespaces within the library. Keep in mind that the dependencies of some libraries are immense. You’ll very likely need to use Composer from within the library and commit it when you first start out. In such cases, you might need to make sure to include the Composer autoload.php file.
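As a rough sketch of that last point (the path below is purely illustrative and depends on where you keep the library), you can guard the include so it only runs when the autoloader file is actually present:

// Illustrative only: adjust the path to wherever the library's own
// Composer vendor directory lives in your codebase.
$autoload = DRUPAL_ROOT . '/sites/all/libraries/php-loremipsum/vendor/autoload.php';
if (file_exists($autoload)) {
  require_once $autoload;
}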

Another tip:  Abstract your libraries_load() functionality out in such a way that if the class you want already exists, you don’t call libraries_load() again. Doing so removes libraries as a hard dependency from your module and enables you to use Composer to load the library later on with no more work on your part. For example:

function xautoload_example_load_library() {
  if (!class_exists('\joshtronic\LoremIpsum', TRUE)) {
    if (!module_exists('libraries')) {
      throw new \Exception('Include php-loremipsum via composer or enable libraries.');
    }
    $library = libraries_load('php-loremipsum');
    if ($library['loaded'] === FALSE) {
      throw new \Exception("php-loremipsum didn't load!");
    }
  }
}

And with that, you’ve conquered the challenge of using third-party libraries!

Setting up a New Site with Composer

Speaking of Composer, you can use it to simplify the setup of a new Drupal 7 site. Just follow the instructions in the Readme for the Composer Template for Drupal Project. From the command line, run the following:

composer create-project drupal-composer/drupal-project:7.x-dev --no-interaction

This code gives you a basic site with a source repository (a repo that doesn’t commit contributed modules and libraries) to push up to your Git provider. (Note that migrating an existing site to Composer involves a few additional considerations and steps, so I won’t get into that now.)

If you’re generating a Pantheon site, check out the Pantheon-specific Drupal 7 Composer project. But wait: The instructions there advise you to use Terminus to create your site, and that approach attempts to do everything for you—including setting up the actual site. Instead, you can simply use composer create-project  to test your site in something like Lando. Make sure to run composer install if you copy down a repo.

From there, you need to enable the Composer Autoload module, which is automatically required in the composer.json you pulled in earlier. Then, add all your modules to the require portion of the file or use composer require drupal/module_name just as you would in Drupal 8.

You now have full access to all the  Packagist libraries and can use them in your modules. To use the previous example, you could remove php-loremipsum from sites/all/libraries, and instead run composer require joshtronic/php-loremipsum. The code would then run the same as before.

Have fun!

From here on out, it’s up to your imagination. Code and implement with ease, using OOP design patterns and reusable code. You just might find that this new world of possibilities for integrating new technologies with your existing Drupal 7 sites increases your productivity as well.

Dec 09 2019

With Drupal 9 set to be released later next year, upgrading to Drupal 8 may seem like a lost cause. However, beyond the fact that Drupal 8 is superior to its predecessors, it will also make the inevitable upgrade to Drupal 9, and future releases, much easier. 

Acquia puts it best in this eBook, where they cover common hangups that may prevent migration to Drupal 8 and the numerous reasons to push past them.

The Benefits of Drupal 8

To put it plainly, Drupal 8 is better. Upon its release, the upgrade shifted the way Drupal operates, and it has only improved through subsequent patches and iterations, most recently with the release of Drupal 8.8.0.

Some new features of Drupal 8 that surpass those of Drupal 7 include improved page building tools and content authoring, multilingual support, and the inclusion of JSON:API as part of Drupal core. We discussed some of these additions in a previous blog post.

Remaining on Drupal 7 means hanging on to a less capable CMS. Drupal 8 is simply more secure with better features.

What Does Any of This Have to Do With Drupal 9?

With an anticipated release date of June 3, 2020, Drupal 9 will see the CMS pivot to an iterative release model, moving away from the monolithic major releases that have made disruptive upgrades necessary in the past. That means that migrating to Drupal 8 is the last major migration Drupal sites will have to undertake. As Acquia points out, one might think “Why can’t I just wait to upgrade to Drupal 9?”

While migrating from Drupal 7 to Drupal 8 or directly to Drupal 9 would be essentially the same process, Drupal 7 goes out of support in November 2021. As that deadline approaches, upgrading will only become an increasingly pressing necessity. By migrating to Drupal 8 now, you avoid the complications that come with a hurried migration and can take on the process incrementally.

So why wait? 

To get started with Drupal migration, be sure to check out our Drupal Development Services, and come back to our blog for more updates and other business insights. 
 

Dec 06 2019

You may have read our previous articles about how to plan for Drupal 6 or Drupal 7 End-of-Life. The important thing to know is that the Drupal 8 End-of-Life is nothing like those. In fact, “End of Life” is completely the wrong idea. Instead, it’s more like one of those spa treatments where you get a full body scrub to get rid of the dead skin cells. You walk out feeling rejuvenated and refreshed. 

Drupal 9 — Same as Drupal 8, But Without The Old Stuff

Drupal release timeline

In each new minor version of Drupal 8 there are some new features, and some old code is marked as “deprecated” (that just means that it’s time to stop using this, because it’s going to go away some day). After nine minor versions over almost five years, there’s now an accumulation of deprecated code. This deprecated code is like those dead skin cells that you go to the spa to get rid of.  So Drupal 9.0 will be the same as Drupal 8.9, just without the deprecated code. The two might even be released at the same time. 
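To make that concrete, here is one of the most common examples (not specific to any one site): drupal_set_message() was deprecated in Drupal 8.5 in favor of the messenger service, so cleaning out deprecated code means swapping calls like the first line below for the second:

// Deprecated since Drupal 8.5.0 and removed in Drupal 9.
drupal_set_message(t('Settings saved.'));

// Drupal 9-ready equivalent using the messenger service.
\Drupal::messenger()->addStatus(t('Settings saved.'));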

Then, in Drupal 9.1, we see the cycle starting again: some new features, and some old code is marked as deprecated.

Don’t Rely on Deprecated Code

In the graphic above, you’ll notice that 8.9 does not have any more deprecated code than 8.8. That means that once a website is upgraded to 8.8, we can then start the process of ensuring that the site isn’t using any deprecated code. 

If you are an Advomatic client, we’ll create a ticket in your queue to clean out all uses of deprecated code. In fact, if you’ve done a project with us recently, we’ve already started doing this as part of the Q/A process in our two-week sprints. 

A Window of Almost Two Years for This Cleanup

Drupal timeline by quarters

This is the timeline for the next several versions of Drupal.  We’ve got about 2 years to make this change — more than enough time. 

Alternating Minor Versions

We handle all the technical stuff for you. But the purpose of the website is not for us to have a technical toy to play with, it’s to advance the mission of your non-profit. So we want to devote most of our time and effort towards your web strategy. While we could upgrade your website to the newest version every six months, it’s not the best use of your money or time. So we alternate versions. That means that your Drupal 8 website is either always on an even minor version, or an odd minor version. 

We’ll likely continue that pattern as we cross the threshold into Drupal 9. That means that this process could be delayed by 6 months from what you see here.

Flipping the Switch

Once we’ve cleaned up all the deprecated code, then we’re ready to upgrade the site to Drupal 9.  Remember: this is nothing like past major upgrades in Drupal. Instead it’s just like the minor upgrades from Drupal 8.6 → 8.7 → 8.8 etc.

Conclusion

The key takeaway is that this whole process should be almost seamless. We’ll create a few tickets in the queue to prep for the upgrade, and then for the upgrade itself.  But the majority of our time will still be spent on advancing your mission. Over the years to come the website content and its presentation will be able to continually evolve, all without a costly major upgrade. 

Thanks to Amanda Luker for the charts!

Nov 07 2019
Image courtesy of Acquia

Crafting a user experience that feels customized — but not forced or invasive — is key to successfully creating targeted website content. If you work with Drupal, Acquia Lift is one obvious solution to this challenge. Few other complete solutions for personalizing the user experience integrate with Drupal so well. 

Personalize Drupal content

Acquia Lift is a personalization tool that can help serve up custom content by tracking a variety of user activities on your site:

  • Pages visited
  • Content types or taxonomies viewed
  • Data that relates to location and browser information 

Lift works with Drupal versions 7 and 8, so the tool is available for use by many active sites on the Acquia platform. As an Acquia tool, Lift also integrates with the rest of the Acquia Cloud hosting platform. 

The tool uses criteria that you enter into its Profile Manager to place website visitors into segments. You then use Profile Manager to set up campaigns that target, track, and provide personalized experiences to specific segments. Segment criteria can be as simple as device type or as complex as specific combinations of interactions with the website. This level of granular control improves the quality of your content customization — and in turn, your engagement metrics.

After setting up campaigns and segments, you can select site sections and content to personalize. For example, you might replace a generic hero banner image with a location-specific graphic or swap out promotional content to reflect a visitor’s account status. When used correctly, this type of personalization can elevate the user experience and improve conversion rates. And because Lift integrates with Drupal so closely as a module, the tool has direct access to site content. This approach helps to ensure that the content visitors see doesn’t feel jarring or out of place. 

Focus on personalization, not code

From a technical perspective, installing and using Acquia Lift is not too difficult. Acquia provides a contributed module and library that work well with Drupal. In my opinion, the documentation could be improved, but it provides enough information for most developers to install Lift without trouble. The personalization and swapping of content all happen client-side (outside of Drupal), so after Lift is installed there isn’t much to configure within Drupal. Some handling with CSS is necessary to reduce visual indications of changing content, but the process is included in the documentation and the Acquia team is willing to assist if necessary.

Aside from these specifics, the magic that happens with personalization and setup occurs within the Acquia Lift interfaces. So after Lift is set up, you don’t need a high level of technical expertise for its regular use. The analytics, user data, segments, campaigns, and targeting all occur in the user-friendly Profile Manager. 

Before you decide

Lift does have a few limitations that you need to be aware of. The most significant: Dynamic or “live” content can be tricky to replace or personalize.  Because Lift bases replacements on static versions of content (i.e., rendered HTML), Lift might not replicate or replace carousels, slideshows, and other views with custom displays. Content that relies on data that might change or on JavaScript for rendering might need to be recreated in a different way to generate the same effect. 

Second, differences in documentation between the two available versions of Lift (versions 3 and 4) can cause a bit of confusion during setup confirmation. For example, Version 4 is an integrated tool inside Profile Manager, whereas version 3 works as a sort of pop-up that occurs within Drupal.

Also note that despite the ease of installation and deep documentation, the Acquia team will probably need to be involved to finish the installation. The team is fairly responsive, but this necessity could cause a slowdown in implementation, so you need to be aware of it if you’re deploying Lift close to launch. The extra assistance is even more likely to be necessary when using version 4 because of its more limited documentation and active development. We hope to see improvements in this area in the near future as development on version 4 continues.

Lastly, setting up test campaigns after you’ve installed and configured Lift can be difficult if you only have a few content items and some Lorem Ipsum. This service needs real content and real slots into which to swap that content to be tested properly. You won’t have a problem when implementing Lift into an existing site, but if you plan to use the tool on a new site, be aware that you’ll need to delay implementation until just before or after launch.

A complete personalization experience

All-in-all, few options out there work directly with a Drupal site and provide such a complete experience for personalization. Targeted and personalized content doesn’t need to be difficult to set up or irritating for the user. Tools like Acquia Lift can help make websites feel just a bit more personal. Check out Lift’s informational pages for more details.

Jul 03 2019

When someone thinks of Drupal, they think about scalability, customization, and the oh-so-steep learning curve. What isn’t thought of as much is the smooth and simple editing experience. Though there are many efforts out there to improve the editing experience in Drupal, they tend to still require a bit of tinkering before they are ready for the end user. That isn’t to say that the stock experience is unusable, but for those that are not familiar with Drupal it can be a bit complicated and confusing. We aren’t here to complain about all of that though because there is another solution to this that is gaining traction and we should most definitely talk about it. 

Introducing Gutenberg

Those that are at all familiar with WordPress will already know about the Gutenberg editing experience. The classic editor for WordPress is basically just a rich text editor that is great for writing content, and more specifically blog posts, but requires some more understanding of markup if you want to build an actual page with it. That is where the idea for Gutenberg comes in. With this editor you build your content with blocks. These blocks handle most things you could want, such as basic paragraphs, headings, layout, embedded videos, and more. Since Gutenberg is included and set as the default editor in WordPress 5.0+, everyone is going to be making use of this soon.

This really changes the way the typical WordPress page is going to look without a lot of custom work. Something that has been missing from the WordPress experience has been a way to make things look more consistent between pages. Gutenberg blocks provide this consistency while also allowing editors to build pages without much restriction.

Under the hood, Gutenberg in WordPress saves the layout configuration and data as block markup in the post content, much like the shortcodes editors used to enter manually. It just does this automatically for the user while they configure the page visually. This reduces the barrier to making interesting and varied pages that used to require much more intimate knowledge of the platform.

Since Gutenberg is largely built with React, it is also very flexible and extendable. The community has put together a large collection of ready-to-use blocks in the Gutenberg Cloud plugin. This means that very little work should be required to create a wide array of pages with this editor and anything that is missing should be fairly easy to develop.

Why bring Gutenberg to Drupal?

There has been a concerted effort in Drupal 8 to bring more focus to the editing experience and overall UI of the platform. As I mentioned before, both are still lacking despite major strides from version 7 to 8. This leads to a great deal of curiosity about what other CMSes are doing to solve this problem. It wasn’t too long before whispers of a port of the Gutenberg plugin came around and an alpha version of Gutenberg for Drupal came to be.

On the surface, it can seem a bit duplicative to have something like Gutenberg in Drupal. While block-based layout and reusability may be a game changer for something like WordPress, it is old hat for the Drupal ecosystem. Creating pages using different sorts of blocks and widgets is practically one of Drupal’s claims to fame as a CMS. Not to mention, other existing solutions and modules, like Paragraphs, already address something like this in a much more Drupal-y sort of way. Why bother bringing in another solution with language around blocks and tokens to a system that all but invented it?

Though it may seem redundant, it actually makes a lot of sense to do this. For one, familiarity is a luxury that Drupal can rarely offer to those new to the platform. Almost everything in the UI originated within the Drupal ecosystem and can be a bit unintuitive. Offering an experience made popular by the CMS that powers somewhere around a quarter of the internet is a good way to provide that familiarity. Another reason to bring it to Drupal is that the idea behind the tool isn’t a wholly WordPress-specific need. Editors want to be able to build pages without needing expert-level platform knowledge. They want to be able to create content and not worry about tokens, shortcodes, blocks, fields, etc. WordPress has a deceptively simple feel, and Drupal has that for-engineers-by-engineers feeling that isn’t always welcoming to editors.

That isn’t to say that Gutenberg for Drupal is the answer to our UI prayers, however. It is a nice solution, but it is one that comes with caveats, as these things usually do. At the moment, the module has an all-or-nothing approach to the editing experience. The content types that have the Gutenberg experience enabled will need to have a long text field on them for it to work. It will then use that field for storage, and all other fields will be stashed in the Gutenberg sidebar on the edit page. (Currently, the Gutenberg module will look for whatever field matches the description and use the first one it finds for this. Something to watch out for if you add this to an existing content type.) It fully takes over the display of the edit form, which will now be replaced with the React-based experience that Gutenberg brings.

Another thing to note on this is that the Gutenberg module does support some Drupal core blocks out of the box and should mostly support custom Drupal blocks as well. The cool part about this is that in theory you can set up things like ads, content embeds, and views ahead of time for use in many content pieces. The less cool part is that not all blocks will work, and it isn’t clear which ones do and why others do not at this time. Keep an eye on the Supported Blocks page on Drupal.org for more details as they are available.

Is Gutenberg ready for use?

Gutenberg for Drupal 8 is a very promising module that solves a lot of problems in the traditional Drupal editing experience and it is making great strides. At the time of this writing, 8.x-1.2 has been released and solves many of the issues that previously would have made this a harder module to recommend. The better question right now would be if it is worth checking out and I would say that it absolutely is. Whether you enjoy building content in Drupal or not, the Gutenberg experience is something worth trying and considering for your editors.

May 23 2019

Sometimes clients ask for the wrong thing. Sometimes developers build the wrong thing, because they didn’t ask the right questions. If you’re solving the wrong problem, it doesn’t matter how elegant your solution is.

One of the most important services that we as developers and consultants can provide is being able to help guide our clients to what they need, rather than simply giving them what they want. Sometimes those two things are aligned, but more often than not, figuring out the right thing to build takes some discovering.

Why don’t wants and needs match? It might be because the client hasn’t spent enough time thinking about the question, or because they haven’t approached it from the right angle. If that’s the case, we can help them to do that, either by asking the right questions or by acting as their rubber duck, providing a sounding board for their ideas. Alternatively, it might be because, as a marketing or content specialist, they lack sufficient awareness of the potential technological solutions to the question, and we can offer that.

Once you’ve properly understood the problem, you can start to look for a solution. In this article, I’ll talk about some examples of problems like this that we’ve recently helped clients to solve, and how those solutions led us to contribute two new Drupal modules.

There must be a module for that

Sometimes the problems are specific to the client, and the solutions need to be bespoke. Other times the problems are more general, and there’s already a solution. One of the great things about open source is that somebody out there has probably faced the same problem before, and if you’re lucky, they’ve shared their solution.

In general, I’d prefer to avoid writing custom code, for the same reasons that we aren’t rolling our own CMS. There are currently over 43,000 contributed modules available for Drupal, some of which solve similar problems, so sometimes the difficult part is deciding which of the alternatives to choose.

Sometimes there isn’t already a solution, or the solution isn’t quite right for your needs. Whenever that’s the case, and the problem is a generic one, we aim to open source the solutions that we build. Sometimes it’s surprising that there isn’t already a module available. Recently on my current project we came across two problems that felt like they should have been solved a long time ago, very generic issues for people editing content for the web - exactly the sort of thing that you’d expect someone in the Drupal community to have already built.

How hard could it be?

One area that sometimes causes friction between clients and vendors is around estimates. Unless you understand the underlying technology, it isn’t always obvious why some things are easy and others are hard.

XKCD: Tasks

Even experienced developers sometimes fail to grasp this - here’s a recent example where I did exactly that.

We’re building a site in Drupal 8, making heavy use of the Paragraphs module. When adding a webform to a paragraph field, there’s a select list with all forms on the site, sorted alphabetically. To improve usability for the content editors, the client was asking for the list to be sorted by date, most recently created first. Still thinking in Drupal 6 and 7 mode, I thought it would be easy. Use a view for selection, order the view by date created, job done - probably no more than half an hour’s work. Except that in Drupal 8, webforms are no longer nodes - they’re configuration entities, so there is no creation date to order by. What I’d assumed would be trivial would in fact require major custom development, the cost of which wouldn’t be justified by the business value of the feature. But there’s almost always another way to do things, which won’t be as time-consuming, and while it might not be what the client asked for, it’s often close enough for what they need.

What’s the real requirement?

In the example above, what the content editors really wanted was an easy way to find the relevant piece of content. The creation date seemed like the most obvious way to find it. If you jump to a solution before considering the problem, you can waste time going down blind alleys. I spent a while digging around in the code and the database before I realised sorting the list wouldn’t be feasible. By enabling the Chosen module, we made the list searchable - not what the client had asked for, but it gave them what they needed, and provided a more general solution to help with other long select lists. As is so often the case, it was five minutes of development work, once I’d spent hours going down a blind alley.

This is a really good example of why it’s so important to validate your assumptions before committing to anything, and why we should value customer collaboration over contract negotiation - for developers and end users to be able to have open conversations is enormously valuable to a smooth relationship, and it enables the team to deliver a more usable system.

Do you really need square pegs?

One area where junior developers sometimes struggle is in gauging the appropriate level of specificity to use in solving a problem. Appropriate specificity is particularly relevant when working with CSS, but also in terms of development work more generally. Should we be building something bespoke to solve this particular problem, or should we be thinking about it as one instance of a more generic problem? As I mentioned earlier, unless your problem is specific to your client’s business, somebody has probably already solved it.

With a little careful thought, a problem that originally seemed specific may actually be general. For example, try to avoid building CMS components for one-off pieces of a design. If we make our CMS components more flexible, it makes the system more useful for content editors, and may even mean that the next requirement can be addressed without any extra development effort.

Sometimes there can be a sense that requirements are immutable, handed down from on high, carved into stone tablets. Because a client has asked for something, it becomes a commandment, rather than an item on a wish list. Requirements should always be questioned. The further the distance between clients and developers, the harder it can be to ask questions. Distance isn’t necessarily geographical - with good remote collaboration, and open lines of communication, developers in different time zones can build a healthy client relationship. Building that relationship enables developers to ask more questions and find out what the client really needs, and it also helps them to be able to push back and say no.

Work with the grain

It can be tempting to imagine that the digital is infinitely malleable; that because we’re working with the virtual, anything is possible. When clients ask “can we do X?”, I usually answer that it’s possible, but the more relevant question is whether it’s feasible.

Just as the web has a grain, most technologies have a certain way of working, and it’s better to work with your framework rather than against it. Developers, designers and clients should work together to understand what’s simple and what’s complicated within the constraints. Is the extra complexity worth it, or would it be better to simplify things and deliver value quicker?

Sometimes that can feel like good cop, bad cop, where the designers offer the world, and developers say no. But the point isn’t that I don’t want to do the work, or that I want to charge clients more money. It’s that I would much rather deliver quick wins by using existing solutions, rather than having developers spend time on tasks that don’t bring business value, like banging their heads against the wall trying to bend a framework to match a “requirement” that nobody actually needs. It’s better for everyone if developers are able to work on more interesting things.

Time is on my side

As an example of an issue where a little technical knowledge went a long way, we were looking at enabling client-side sorting of tables. Sometimes those tables would include dates. We found an appropriate module, and helped to get the Drupal 8 version working, but date formats can be tricky. What is readable to a human in one cultural context isn’t necessarily easy for another, or for a computer, so it’s useful to add some semantic markup to provide the relevant machine-readable data.

Drupal has pretty good facilities for managing date and time formats, so surely there must be a module already that allows editors to insert dates into body text? Apparently not, so I built CKEditor Datetime.

With some helpful tips from the community on Drupal Slack, I found some CKEditor examples, and then started plumbing it in to Drupal. Once I’d got that side of things sorted, I got some help from the plugin maintainer to get the actual sorting sorted. A really nice example of open source communities in action.

Every picture tells a story

Another challenge that was troubling our client’s content team was knowing what their images would look like when they’re rendered. Drupal helpfully generates image derivatives at different sizes, but when the different styles have different aspect ratios, it’s important to be able to see what an image will look like in different contexts. This is especially important if you’re using responsive images, where the same source image might be presented at multiple sizes depending on the size of the browser window.

To help content editors preview the different versions of an image, we built the Image Styles Display module. It alters the media entity page to show a preview of every image style in the site, along with a summary of the effects of that image style. If there are a lot of image styles, that might be overwhelming, and if the aspect ratio is the same as the original, there isn’t much value in seeing the preview, so each preview is collapsible, using the summary/details element, and a configuration form controls which styles are expanded by default. A fairly simple idea, and a fairly simple module to build, so I was surprised that it didn’t already exist.
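The underlying Drupal 8 API makes that kind of preview reasonably cheap to build. As a minimal sketch of the general idea (this is not the module’s actual code, and $file_uri is an assumed variable holding the media item’s image URI), you can load every configured image style and render the same source image through each one:

// Minimal sketch: one preview render array per configured image style.
// $file_uri is assumed to hold the image's URI, e.g. 'public://example.jpg'.
$styles = \Drupal::entityTypeManager()->getStorage('image_style')->loadMultiple();
$build = [];
foreach ($styles as $style_id => $style) {
  $build[$style_id] = [
    '#theme' => 'image_style',
    '#style_name' => $style_id,
    '#uri' => $file_uri,
  ];
}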

I hope that these modules will be useful for you in your projects - please give them a try:

  • CKEditor Datetime
  • Image Styles Display

If you have any suggestions for improvement, please let me know using the issue queues.

Apr 19 2019

What we learned from our fellow Drupalists

Lisa Mirabile

On April 7th, our team packed up our bags and headed off to Seattle for one of the bigger can’t miss learning events of the year, DrupalCon.

“Whether you’re C-level, a developer, a content strategist, or a marketer — there’s something for you at DrupalCon.” -https://events.drupal.org/

As you may have read in one of our more recent posts, we had a lot of sessions that we couldn’t wait to attend! We were very excited to find new ideas that we could bring back to improve our services for constituents or the agencies we work with to make digital interactions with government fast, easy, and wicked awesome. DrupalCon surpassed our already high expectations.

At the Government Summit, we were excited to speak with other state employees who are interested in sharing knowledge, including collaborating on open-source projects. We wanted to see how other states are working on problems we’ve tried to solve and to learn from their solutions to improve constituents’ digital interactions with government.

One of the best outcomes of the Government Summit was an amazing “birds of a feather” (BOF) talk later in the week. North Carolina’s Digital Services Director Billy Hylton led the charge for digital teams across state governments to choose a concrete next step toward collaboration. At the BOF, more than a dozen Massachusetts, North Carolina, Georgia, Texas, and Arizona digital team members discussed, debated, and chose a content type (“event”) to explore. Even better, we left with a meeting date to discuss specific next steps on what collaborating together could do for our constituents.

The learning experience did not stop at the GovSummit. Together, our team members attended dozens of sessions. For example, I attended a session called “Stanford and FFW — Defaulting to Open” since we are starting to explore what open-sourcing will look like for Mass.gov. The Stanford team’s main takeaway was the tremendous value they’ve found in building with and contributing to Drupal. Quirky fact: their team discovered during user testing among high-school students that “FAQ” is completely mysterious to younger people: they expect the much more straightforward “Questions” or “Help.”

Another session I really enjoyed was called “Pattern Lab: The Definitive How-to.” It was exciting to hear that Pattern Lab, a tool for creating design systems, has officially merged its two separate cores into a single one that supports all existing rendering engines. This means simplifying the technical foundation to allow more focus on extending Pattern Lab in new and useful ways (and less just keeping it up and running). We used Pattern Lab to build Mayflower, the design system created for the Commonwealth of Massachusetts and implemented first on Mass.gov. We are now looking at the best ways to offer the benefits of Mayflower — user-centeredness, accessibility, and consistent look and feel — to more Commonwealth digital properties. Some team members had a chance to talk later to Evan Lovely, the speaker and one of the maintainers of Pattern Lab, and were excited by the possibility of further collaboration to implement Mayflower in more places.

There were a variety of other informative topics that my peers and I enjoyed as well.

Our exhibit hall booth at DrupalCon 2019
Talking to fellow Drupalists at our booth

On Thursday we started bright and early to unfurl our Massachusetts Digital Service banner and prepare to greet fellow Drupalists at our booth! We couldn’t have done it without our designer, who put all of our signs together for our first time exhibiting at DrupalCon. (Thanks, Eva!)

It was remarkable to be able to talk with so many bright minds in one day. Our one-on-one conversations took us on several deep dives into the work other organizations are doing to improve their digital assets. Meeting so many brilliant Drupalists made us all the more excited to share some opportunities we currently have to work with them, such as the ITS74 contract to work with us as a vendor, or our job opening for a technical architect.

We left our table briefly to attend Mass.gov: A Guide to Data-Informed Content Optimization, where team members Julia Gutierrez and Nathan James shared how government agencies in Massachusetts are now making data-driven content decisions. Watch their presentation to learn:

  1. How we define wicked awesome content
  2. How we translate indicators into actionable metrics
  3. The technology stack we use to empower content authors

To cap it off, Mass.gov, with partners Last Call Media and Mediacurrent, won Best Theme for our custom admin theme at the first-ever Global Splash awards (established to “recognize the best Drupal projects on the web”)! An admin theme is the look and feel that users see when they log in. The success of Mass.gov rests in the hands of all of its 600+ authors and editors. We’ve known from the start of the project that making it easy and efficient to add or edit content in Mass.gov was key to the ultimate goal: a site that serves constituents as well as possible. To accomplish this, we decided to create a custom admin theme, launched in May 2018.

A before-and-after view of our admin theme

Our goal was not just a nicer look and feel (though it is that!), but a more usable experience. For example, we wanted authors to see help text before filling out a field, so we brought it up above the input box. And we wanted to help them keep their place when navigating complicated page types with multiple levels of nested information, so we added vertical lines to tie together items at each level.

Last Call Media founder Kelly Albrecht crosses the stage to accept the Splash award for Best Theme on behalf of the Mass.gov team.
All the Splash award winners!

It was a truly enriching experience to attend DrupalCon and learn from the work of other great minds. Our team has already started brainstorming how we can improve our products and services for our partner agencies and constituents. Come back to our blog weekly to check out updates on how we are putting our DrupalCon lessons to use for the Commonwealth of Massachusetts!

Interested in a career in civic tech? Find job openings at Digital Service.
Follow us on Twitter | Collaborate with us on GitHub | Visit our site

Apr 01 2019
Apr 01

Vienna, VA, March 19, 2019—Mobomo, LLC is pleased to announce our award as a prime contractor on the $25M Department of the Interior (DOI) Drupal Developer Support Services BPA. Mobomo brings an experienced and extensive Drupal federal practice team to DOI. Our team has launched a large number of award-winning federal websites in both Drupal 7 and Drupal 8, including www.nasa.gov, www.usgs.gov, and www.fisheries.noaa.gov. These sites have won industry recognition and awards including the 2014, 2016, 2017 and 2018 Webby Awards; two 2017 Innovate IT awards; and the 2018 MUSE Creative Award and the Acquia 2018 Public Sector Engage award.

DOI has been shifting its websites from an array of Content Management System (CMS) and non-CMS-based solutions to a set of single-architecture, cloud-hosted Drupal solutions. In doing so, DOI requires Drupal support for hundreds of websites that are viewed by hundreds of thousands of visitors each year, including its parent website, www.doi.gov, managed by the Office of the Secretary. Other properties include websites and resources provided by its bureaus  (Bureau of Indian Affairs, Bureau of Land Management, Bureau of Ocean Energy Management, Bureau of Reclamation, Bureau of Safety and Environmental Enforcement, National Park Service, Office of Surface Mining Reclamation and Enforcement, U.S. Fish and Wildlife Service, U.S. Geological Survey) and many field offices.

This BPA provides that support. The period of performance for this BPA is five years and it’s available agency-wide and to all bureaus as a vehicle for obtaining Drupal development, migration, information architecture, digital strategy, and support services. Work under this BPA will be hosted in DOI’s OpenCloud infrastructure, which was designed for supporting the Drupal platform.

Mar 13 2019
Mar 13

Note: This post refers to Drupal 8, but is very applicable to Drupal 7 sites as well

Most Drupal developers are experienced building sitewide search with Search API and Views. But it’s easy to learn and harder to master. These are the most common mistakes I see made when doing this task:

Not reviewing Analytics

Before you start, make sure you have access to analytics if relevant. You want to get an idea of how much sitewide search is being used and what the top searches are. On many sites, sitewide search usage is extremely low and you may need to explain this statistic to stakeholders asking for any time-consuming search features (and yourself before you start going down rabbit holes of refinements).

Take a look for yourself at how the sitewide search is currently performing for the top keywords users are giving it. Do the relevant pages come up first? You’ll take this into account when configuring boosts.

Using Solr for small sites

Drupal 8 Search API comes with database search included. Search API DB has come a long way over the years and is likely to have the features you need for smaller sites. Using a Solr backend is going to add complexity that may not be worth it for the amount of value your sitewide search is giving. Remember, if you use a Solr backend you have to have Solr running on all environments used in the project and you’ll have to reindex when you sync databases.

Not configuring all environments for working Solr

Which takes us to this one. If you do use Solr (or another server-side index) you need to also make sure your team has Solr running on their local environments and has an index for the site. 

Your settings.php needs to be configured to connect to the right index on each environment. We use Probo for review sandboxes so we need to configure our Probo builds to use the right search index and to index it on build.
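
For example, a per-environment override in settings.php (or a settings.local.php include) might look something like the following. This is a minimal sketch assuming a Search API Solr backend and a hypothetical server entity with the machine name 'solr_server'; the override keys mirror the server's exported configuration, so check your own config/sync export for the exact structure.

// In settings.php (or settings.local.php) for the local environment.
$config['search_api.server.solr_server']['backend_config']['connector_config']['host'] = 'localhost';
$config['search_api.server.solr_server']['backend_config']['connector_config']['port'] = 8983;
$config['search_api.server.solr_server']['backend_config']['connector_config']['core'] = 'drupal_local';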

Missing fields in index or wrong type

Always include the ‘Rendered HTML’ field in your search index rather than trying to capture every text field on all your content types and then having to come back to add more every time you add a field. Include the title field as well, but don’t forget to use ‘Fulltext’ as its field type. Only ‘Fulltext’ text fields are searchable by word.

Not configuring boosts

In your Processor settings, use Type-specific boosting and Tag-boosting via HTML filter. Tag boosting is straightforward: boost headers. For type-specific boosting you’re not necessarily just boosting the most important content types, but also thinking about what’s in the index and what people are likely looking for. Go back to your analytics for this. 

For example, when someone searches for a person’s name, are they likely wanting the top result to be the bio and contact info, a news posting mentioning that person, or a white paper authored by the person? So, even if staff bios are not the most important content on the site, perhaps they will need to be boosted high in search, where they are very relevant.

Not ordering by relevance

Whoops. This is a very common and devastating mistake. All your boost work be damned if you forget this. The View you make for search results needs to order results by Relevance: Descending.

Using AJAX

Don’t use the setting to ‘Use AJAX’ on your search results View. Doing so would mean that search results don’t have unique URLs, which is bad for user experience and analytics. It’s all about the URLs not about the whizzbang.

Not customizing the query string

Any time you configure a View with an exposed filter, take the extra second to customize the query string it is going to use. ‘search’ is a better query string than ‘search_api_fulltext’ for the search filter. URLs are part of your user interface.

No empty text

Similarly, when you add an exposed filter to a search you should also almost always be adding empty text. “No results match your search” is usually appropriate.

Facets that don’t speak to the audience

Facets can be useful for large search indexes and certain types of sites. But too many or too complex facets just create confusion. ‘Content-type’ is a very common facet, but if you use it, make sure you only include in its options the names of content types that are likely to make sense to visitors. For example, I don’t expect my visitors to understand the technical distinction between a ‘page’ and a ‘landing page’ so I don’t include facet links for these.

A screenshot of facets in Drupal
You can exclude confusing facet options

Making search results page a node

I tell my team to make just about every page a visitor sees a node. This simplifies things for both editors and developers. It also ensures every page is in the search index: if you build key landing pages like ‘Events Calendar’ as Views pages or as custom routes, these key pages will not be found in your search results. 

One important exception is the Search Results page itself. You don’t want your search results page in the search index: this can actually make an infinite loop when you search. Let this one be a Views page, not a Views block you embed into a node.

Important page content not in the ‘content’

Speaking of blocks and nodes, the way you architect your site will determine how well your search works. If you build your pages by placing blocks via core Block Layout, these blocks are not part of the page ‘content’ that gets indexed in the ‘Rendered HTML.’ Anything you want to be searchable needs to be part of the content. 

You can embed blocks in node templates with Twig Tweak, or you can reference blocks as part of the content (I use Paragraphs and Block Field.)

Not focusing on accessibility

The most accessible way to handle facets is to use ‘List of Links’ widget. You can also add some visually hidden help text just above your facet links. A common mistake is to hide the ‘Search’ label on the form. Instead of display: none, use the ‘visually-hidden’ class.

Feb 01 2019
Feb 01

In this article we will see how to update data models in Drupal 8, how to distinguish between updating the model and updating content, how to create default content, and finally, the procedure to adopt for successful deployments that avoid surprises in a continuous integration/delivery Drupal cycle.

Before we start, I would encourage you to read the documentation of the hook hook_update_N() and to take into account all the possible impacts before writing an update.

Updating the database (executing hook updates and/or importing the configuration) is one of the more problematic tasks in a Drupal 8 deployment process, because the order in which structure and data updates run is not well defined in Drupal, and this can pose several problems if not completely controlled.

It is important to differentiate between a contributed module to be published on drupal.org and aimed at a wide audience, and a custom Drupal project (a set of Drupal contrib/custom modules) designed to provide a bespoke solution in response to a client’s needs. In a contributed module it is rare to have a real need to create instances of configuration/content entities; deploying a custom Drupal project, on the other hand, makes updating data models more complicated. In the following sections we will list all possible types of updates in Drupal 8.

The Field module allows us to add fields to bundles. We must distinguish between the data structure that will be stored in the field (the static schema() method) and the settings of the field and its storage, which are stored as configuration. All the dependencies related to the configuration of the field are stored in the field_config configuration entity and all the dependencies related to the storage of the field are stored in the field_storage_config configuration entity. Base fields are stored by default in the entity’s base table.  

Configurable fields are the fields that can be added via the UI and attached to a bundle; they can be exported and deployed. Base fields are not managed by the field_storage_config and field_config configuration entities.

To update an entity definition or its component definitions (field definitions, for example, if the entity is fieldable) we can implement hook_update_N(). In this hook, don’t use APIs that require a full Drupal bootstrap (e.g. CRUD database actions, services, …); to do this type of update safely we can use the methods provided by the EntityDefinitionUpdateManagerInterface contract (e.g. updating the entity keys, updating a base field definition common to all bundles, …).
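
As a rough sketch of that safe approach (the entity type, field name, and module name here are made up for illustration), an update hook that uses the entity definition update manager to install a new base field might look like this:

use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Installs a hypothetical 'external_id' base field on the 'my_entity' type.
 */
function mymodule_update_8101() {
  $definition = BaseFieldDefinition::create('string')
    ->setLabel(t('External ID'))
    ->setDescription(t('An identifier coming from an external system.'));

  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('external_id', 'my_entity', 'mymodule', $definition);
}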

To update existing entities or their field data (in the case of a fieldable entity) after a definition has been modified, we can implement hook_post_update_NAME(). In this hook you can use all the APIs you need to update your entities.
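
A minimal sketch of such a hook, reusing the hypothetical field from the previous example (for large data sets you would use the $sandbox parameter to process entities in batches):

/**
 * Populates the hypothetical 'external_id' field on existing entities.
 */
function mymodule_post_update_populate_external_id(&$sandbox) {
  $storage = \Drupal::entityTypeManager()->getStorage('my_entity');
  foreach ($storage->loadMultiple() as $entity) {
    if ($entity->get('external_id')->isEmpty()) {
      $entity->set('external_id', 'legacy-' . $entity->id());
      $entity->save();
    }
  }
}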

To update the schema of a simple or complex configuration (a configuration entity), or a schema defined in a hook_schema() hook, we can implement hook_update_N().

In a custom Drupal project we are often led to create custom content types or bundles of custom entities (something we do not normally do in a contributed module, and rarely do in an installation profile). This is a site-building action: the resulting bundles will be exported afterwards as yml files and then deployed to production using the Drupal configuration manager.

A bundle definition is a configuration entity that defines the global schema, so we can implement hook_update_N() to update the model in this case, as I mentioned earlier. Bundles are instances that persist as Drupal configuration and follow the same schema. To update the bundles, the updated configuration must be exported using the configuration manager so that it can be imported into production later. Several problems can arise:

  • If we add a field to a bundle, and want to create content during the deployment for this field, using the current workflow (drush updatedb -> drush config-import) this action is not trivial, and the hook hook_post_update_NAME() can’t be used since it’s executed before the configuration import.
  • The same problem can arise if we want to update fields of bundles that have existing data: hook_post_update_NAME(), which is designed to update existing content or entities, will run before the configuration is imported. What is the solution? (We will look at one later in this article.)

Now the question is: How to import default content in a custom Drupal project?

Importing default content for a site is an action which is not well documented in Drupal. In an installation profile this import is often done in the hook_install() hook, because the content usually doesn’t have a complex structure with levels of nested references; in some cases we can use the Default Content module. In a regular module, however, we can’t create content in a hook_install() hook, simply because the configuration the content depends on has not yet been imported when the module is installed.

In a recent project I used the drush php-script command to execute import scripts after the (drush updatedb -> drush config-import) sequence, but this command is not always available during the deployment process. The first idea that comes to mind is to subscribe to the event that is triggered after the configuration import, in order to create the content that will be available to site editors, but using an event is not a nice developer experience; hence the introduction of a new hook, hook_post_config_import_NAME(), that will run once after the database updates and configuration import. Another hook, hook_pre_config_import_NAME(), has also been introduced to fix performance issues.
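
A minimal sketch of what an implementation could look like, assuming the new hook follows the same naming pattern as hook_post_update_NAME(); the vocabulary and term used here are examples only:

/**
 * Creates default content once the configuration has been imported.
 */
function mymodule_post_config_import_create_default_terms() {
  $storage = \Drupal::entityTypeManager()->getStorage('taxonomy_term');
  $storage->create([
    'vid' => 'tags',
    'name' => 'Default tag',
  ])->save();
}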

A workflow that works for me

To achieve a successful Drupal deployment in continuous integration/delivery cycles using Drush, the most generic workflow that I’ve found at the moment, while waiting for a deployment API in core, is as follows:

  1. drush updatedb
    • hook_update_N() : To update the definition of an entity and its components
    • hook_post_update_NAME() : To update entities after you have made an entity definition modification (entity keys, base fields, …)
  2. hook_pre_config_import_NAME() : CRUD operations (e.g. creating terms that will be taken as default values when importing configuration in the next step)
  3. drush config-import : Importing the configuration (e.g. new bundle field, creation of a new bundle, image styles, image crops, …)
  4. hook_post_config_import_NAME(): CRUD operations (e.g. creating contents, updating existing contents, …)

This approach works well for us, and I hope it will be useful for you. If you’ve got any suggestions for improvements, please let me know via the comments.

Jan 28 2019
Jan 28

SDSU Extension is a South Dakota State University organization that provides educational outreach programs for the citizens of South Dakota. They provide farmers, ranchers, agri-business people, communities, families, and youth with the research-based information they need to succeed.

Solutions Four Kitchens Provided

SDSU Extension wanted to improve the content creation and editing workflow on their website and provide a better navigation experience for users. They wanted to ensure that content was presented in a consistent manner and that all information was the most up to date.

SDSU Extension took full advantage of Four Kitchens’ UX and strategy expertise to get them on the right track for a new site. This was a necessary step that proved to be a critical component of delivering a solid end product.

SDSU recently rebuilt their main site (sdstate.edu) with Drupal 8, and SDSU Extension was ready to follow suit. SDSU Extension was on a very old ExpressionEngine install that made it difficult to manage content and users. Four Kitchens built a Drupal 8 solution that solved their primary concern: more versatile content management while keeping a consistent look and feel for their content. We took the time to define a content architecture plan to match the needs discovered with our UX and strategy work.

The Drupal 8 build was based on the Thunder distribution, which provided a great starting point and gave us an improved administrative experience out of the box. We used CircleCI to build, test, and deploy the application on Acquia Cloud, where the site is hosted. We took advantage of Paragraphs and custom entities to give SDSU Extension the tools they need to create consistent and rich page layouts for all content. We built a custom site search solution based on Solr with facets to allow users to search by text and filter results.

SDSU Extension content is generated and created by SDSU Extension faculty and staff. Most of these users already have profile information on sdstate.edu. We worked with their IT staff to help them expose this profile information with JSON API so that we could consume the data and display it on their profile on the new SDSU Extension site. This allows them to have a canonical point of data entry for the profile information while displaying it on both sites.
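
As an illustration of the general pattern (the endpoint URL, content type, and field names below are hypothetical, not the actual sdstate.edu implementation), consuming a JSON:API resource from another Drupal site can be as simple as an HTTP request and a bit of decoding:

// Fetch staff profiles from a (hypothetical) JSON:API endpoint on the main site.
$url = 'https://www.sdstate.edu/jsonapi/node/staff_profile';
$response = \Drupal::httpClient()->get($url, [
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);
$payload = json_decode((string) $response->getBody(), TRUE);

foreach ($payload['data'] as $resource) {
  // JSON:API keeps each resource's fields under 'attributes'.
  $name = $resource['attributes']['title'] ?? '';
  // ...map the profile data onto the matching profile on the Extension site.
}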

Kudos to team members

The SDSU Extension team was led by Alex Hicks as the project manager and Adam Erickson as the tech lead, who also molded the architecture plan for the project. Donna Habbersat led the UX strategy for the project and shared the project owner responsibilities with Adam. Randy Oest crafted a beautiful new design for the project and assisted Evan Willhite, who led the charge on implementing the design and provided some frontend magic. Joel Travieso handled much of the backend heavy lifting while contributing valuable input to architecture improvements. This group was like a “well-oiled machine,” as quoted by the client. A huge shout out to Lindsey Gerard and the SDSU Extension staff for all the hard work and efforts on their side to make this project a success.

Jan 28 2019
Jan 28

At Four Kitchens, we take accessibility very seriously. It is why we choose tools like Drupal that take it seriously as well. We are always looking for ways to improve our efforts on this front, which is why on a recent project for SDSU Extension, we implemented automated testing in our prototyping tool and Drupal 8 theme, Emulsify. This means that every custom component we now build in Emulsify gets automatically tested and any errors/warnings are flagged to the developer immediately. Awesome! Let’s dig into some of the details.

SDSU Extension

Education, among some other fields, has strict requirements around accessibility. We work hard with these clients to ensure their project meets the expected standard, but we also expect a certain baseline of accessibility on any project we deliver regardless of whether accessibility is a stated goal. For the SDSU Extension project, we needed to meet and/or exceed WCAG 2.0 AA standards. Drupal core and many of the contributed modules put such a heavy focus on meeting that standard that an enormous amount of foundational work is done for us. However, when it comes to us designing and coding custom components, it is up to us to also deliver up to that standard. This is where automated testing comes in.

Automated Testing Using Pa11y

Even with the best intentions, vendors and clients often find themselves putting off best practices (accessibility, performance, documentation) in favor of deadlines. This is why automated testing is so important. With automated tests, accessibility issues are surfaced immediately during the build process and ideally then fixed within the scope of that same deliverable. This ensures the component is not only accessible upon delivery but also avoids the stress of last-minute fixes in the final phase of the project.

There are many tools out there for running automated tests, but one of the most popular is pa11y. Having had good experiences with their testing suite on a recent React project for PRI.org, we decided to use it in Emulsify as well. The results were fantastic! We were able to set a baseline of WCAG 2.0 AA but offer that and many more options in configuration to make it easy to change that for a given project. See below for more technical details.

Technical Details

Unlike much of Emulsify Gulp, instead of writing a Gulp command to accomplish this task, we created a function that simply runs a pa11y test (code). This has been added to our CSS and Pattern Lab Gulp (watch) tasks, so it is run whenever a component stylesheet or Twig file is saved. The test is run not just against the component code but against the rendered Pattern Lab URL that is generated from that component. This means that it tests the final code but also catches visual errors such as color contrast. Besides errors, we also show notices and warnings by default, exposing the handy recommendations that pa11y gives for things that automated tests can’t verify. However, these settings and many more (including the standard itself) can be changed via the configuration file (see here for instructions on creating a local config in Emulsify).

Final Note on Accessibility Testing and the Future

Automated testing is a great tool in verifying the accessibility of a project but is only one tool and it does not guarantee a fully accessible product on its own. That said, we are so excited to have this in place for all our Drupal projects moving forward (and to offer it back to the community) as an important step towards building more accessible and responsible products for our clients. Also, we hope to add even more automated tests into our toolchain soon. Here’s to building a more accessible web!

Jan 28 2019
Jan 28

Begin with the end in mind—defining our goals

Our collaboration with South Dakota State University’s (SDSU) outreach arm, SDSU Extension, began by defining the user experience and branding issues that the previous site had. The visual design was in need of an update, the team wanted to make information easier for people to find, and mobile users were forced to view the desktop version of the site.

With these issues defined, we put together a series of goals that fell into two major groups—user experience and branding. For the user experience goals, we defined a user-centered approach to ensure that the work we were doing was going to help people using the site engage more with the site and more easily find what they were looking for. For the branding goals, we wanted an improved, modern look and feel that felt like a part of the larger South Dakota State University brand.

Creating a palette to work from (e.g. creating Style Tiles)

Every design project at Four Kitchens starts with a visual alignment in the form of style tiles, a design deliverable showing colors, fonts, and elements that helps create a common visual language for the project.

These are presented to everyone using InVision Freehand so that as we discuss the options we can add notes directly on the style tiles. For SDSU Extension we had two rounds of style tiles, landing quickly on one that we all agreed was the right direction.

Figuring out what we’ll need (e.g. wire-framing all the things)

Design systems are all the rage in the industry and with good reason. They allow projects to move more quickly by having a library of reusable parts that are ready to go. So at this point in the process for SDSU Extension, it was time to define what those parts needed to be.

We did this by reviewing the current site and discovery document to suss out what was going to be important for the new site. As a group—Four Kitchens and SDSU Extension—we had discussions to detail what sorts of things would be vital and what would be nice-to-haves.

From there we worked up a series of wireframes that showed both a component library—a page with every possible thing on it, like cards, quotes, and video callouts—and a few samples of how the new pages could be assembled from these parts.

This process worked out the kinks for trickier components, like the many-levels-deep navigation on mobile, while minimizing effort. The cycle of posting, reviewing, and implementing feedback was quick, leading us to a final collection of wireframes.

Making it come to life (e.g. comps)

As soon as wireframes were approved we moved into the next step—breathing life into them. We took the visual language that was defined in the style tile and applied it to the wireframes. The designs included all of the components at small, medium, and large screen sizes.

These components were then quickly assembled into mock pages to show what they would look like when the site was done. Having a wealth of work already done in the form of style tiles and wireframes, we hit on the right direction quickly. Once the first few comps were finalized there was a flood of comps as we built them out faster and faster using previously approved components.

A great collaboration

Working with SDSU Extension on this project was marvelous and we’re happy that it is live and shared with the rest of the world.

Dec 14 2018
Dec 14

A lot of people have been jumping on the headless CMS bandwagon over the past few years, but I’ve never been entirely convinced. Maybe it’s partly because I don’t want to give up on the sunk costs of what I’ve learned about Drupal theming, and partly because I’m proud to be a boring developer, but I haven’t been fully sold on the benefits of decoupling.

On our current project, we’ve continued to take an approach that Dries Buytaert has described as “progressively decoupled Drupal”. Drupal handles routing, navigation, access control, and page rendering, while rich interactive functionality is provided by a JavaScript application sitting on top of the Drupal page. In the past, we’d taken a similar approach, with AngularJS applications on top of Drupal 6 or 7, getting their configuration from Drupal.settings, and for this project we decided to use React on top of Drupal 8.

There are a lot of advantages to this approach, in my view. There are several discrete interactive applications on the site, but the bulk of the site is static content, so it definitely makes sense for that content to be rendered by the server rather than constructed in the browser. This brings a lot of value in terms of accessibility, search engine optimisation, and performance.

A decoupled system is almost inevitably more complex, with more potential points of failure.

The application can be developed independently of the CMS, so specialist JavaScript developers can work without needing to worry about having a local Drupal build process.

If at some later date, the client decides to move away from Drupal, or at the point where we upgrade to Drupal 9, the applications aren’t so tightly coupled, so the effort of moving them should be smaller.

Having made the decision to use this architecture, we wanted a consistent framework for managing application configuration, to make sure we wouldn’t need to keep reinventing the wheel for every application, and to keep things easy for the content team to manage.

The client’s content team want to be able to control all of the text within the application (across multiple languages), and be able to preview changes before putting them live.

There didn’t seem to be an established approach for this, so we’ve built a module for it.

As we’ve previously mentioned, the team at Capgemini are strongly committed to supporting the open source communities whose work we depend on, and we try to contribute back whenever we can, whether that’s patches to fix bugs and add new features, or creating new modules to fill gaps where nothing appropriate already exists. For instance, a recent client requirement to promote their native applications led us to build the App Banners module.

Aiming to make our modules open source wherever possible helps us to think in systems, considering the specific requirements of this client as an example of a range of other potential use cases. This helps to future-proof our code, because it’s more likely that evolving requirements can be met by a configuration change, rather than needing a code change.

So, guided by these principles, I’m very pleased to announce the Single Page Application Landing Page module for Drupal 8, or to use the terrible acronym that it has unfortunately but inevitably acquired, SPALP.

On its own, the module doesn’t do much other than provide an App Landing Page content type. Each application needs its own module to declare a dependency on SPALP, define a library, and include its configuration as JSON (with associated schema). When a module which does that is installed, SPALP takes care of creating a landing page node for it, and importing the initial configuration onto the node. When that node is viewed, SPALP adds the library, and a link to an endpoint serving the JSON configuration.

Deciding how to store the app configuration and make all the text editable was one of the main questions, and we ended up answering it in a slightly “un-Drupally” way.

On our old Drupal 6 projects, the text was stored in a separate ‘Messages’ node type. This was a bit unwieldy, and it was always quite tricky to figure out what was the right node to edit.

For our Drupal 7 projects, we used the translation interface, even on a monolingual site, where we translated from English to British English. It seemed like a great idea to the development team, but the content editors always found it unintuitive, struggling to find the right string to edit, especially for common strings like button labels. It also didn’t allow the content team to preview changes to the app text.

We wanted to maintain everything related to the application in one place, in order to keep things simpler for developers and content editors. This, along with the need to manage revisions of the app configuration, led us down the route of using a single node to manage each application.

This approach makes it easy to integrate the applications with any of the good stuff that Drupal provides, whether that’s managing meta tags, translation, revisions, or something else that we haven’t thought of.

The SPALP module also provides event dispatchers to allow configuration to be altered. For instance, we set different API endpoints in test environments.

Another nice feature is that in the node edit form, the JSON object is converted into a usable set of form fields using the JSON forms library. This generic approach means that we don’t need to spend time copying boilerplate Form API code to build configuration forms when we build a new application - instead the developers working on the JavaScript code write their configuration as JSON in a way that makes sense for their application, and generate a schema from that. When new configuration items need to be added, we only need to update the JSON and the schema.

Each application only needs a very simple Drupal module to define its library, so we’re able to build the React code independently, and bring it into Drupal as a Composer dependency.

The repository includes a small example module to show how to implement these patterns, and hopefully other teams will be able to use it on other projects.

As with any project, it’s not complete. So far we’ve only built one application following this approach, and it seems to be working pretty well. Among the items in the issue queue is better integration with the configuration management system, so that we can make it clear if a setting has been overridden for the current environment.

I hope that this module will be useful for other teams - if you’re building JavaScript applications that work with Drupal, please try it out, and if you use it on your project, I’d love to hear about it. Also, if you spot any problems, or have any ideas for improvements, please get in touch via the issue queue.

Nov 06 2018
Nov 06
Jody's desk

Hardware

After a long run on MacBook Pros, I switched to an LG Gram laptop running Debian this year. It’s faster, lighter, and less expensive. 

If your development workflow now depends on Docker containers running Linux, the performance benefits you’ll get with a native Linux OS are huge. I wish I could go back in time and ditch Mac earlier.

Containers

For almost ten years I was doing local development in Linux virtual machines, but in the past year, I’ve moved to containers as these tools have matured. The change has also come with us doing less of our own hosting. My Zivtech engineering team has always held the philosophy that you need your local environment to match the production environment as closely as possible. 

But in order to work on many different projects and accomplish this in a virtual machine, we had to standardize our production environments by doing our own hosting. A project that ran on a different stack or just different versions could require us to run a separate virtual machine, slowing down our work. 

As the Drupal hosting ecosystem has matured (Pantheon, Platform.sh, Acquia, etc.), doing our own hosting began to make less sense. As we diversified our production environments more, container-based local development became more attractive, allowing us to have a more light-weight individualized stack for each project.

I’ve been happy using the Lando project, a Docker-based local web development system. It integrates well with Pantheon hosting, automatically making my local environment very close to the Pantheon environments and making it simple to refresh my local database from a Pantheon environment. 

Once I fully embraced containers and switched to a Linux host machine, I was in Docker paradise. Note: you do not need a new machine to free yourself from OSX. You can run Linux on your Mac hardware, and if you don’t want to cut the cord you could try a double boot.

Philadelphia City Hall outside Jody's office
A cool office view (like mine of Philly’s City Hall) is essential for development mojo

Editor

In terms of editors/IDEs I’m still using Sublime Text and vim, as I have for many years. I like Sublime for its performance, especially its ability to quickly search projects with 100,000 files. I search entire projects constantly. It’s an approach that has always served me well. 

I also recommend using a large font size. I’m at 14px. With a larger font size, I make fewer mistakes and read more easily. I’m not sure why most programmers use dark backgrounds and small fonts when it’s obvious that this decreases readability. I’m guessing it’s an ego thing.

Browser

In browser news, I’m back to Chrome after a time on Firefox, mainly because the LastPass plugin in Firefox didn’t let me copy passwords. But I have plenty of LastPass problems in any browser. When working on multiple projects with multiple people, a password manager is essential, but LastPass’s overall crappiness makes me miserable.

Wired: Linux, git, Docker, Lando
Tired: OSX, Virtual machines, small fonts
Undesired: LastPass, egos

Terminal

I typically only run the browser, the text editor, and the terminal, a few windows of each. In the terminal, I’m up to 16px font size. Recommend! A lot of the work I do in the terminal is running git commands. I also work in the MySQL CLI a good deal. I don’t run a lot of custom configuration in my shell – I like to keep it pretty vanilla so that when I work on various production servers I’m right at home.

Terminal screenshot

Git

I get a lot of value out of my git mastery. If you’re using git but don’t feel like a master, I recommend investing time into that. With basic git skills you can quickly uncover the history of code to better understand it, never lose any work in progress, and safely deploy exactly what you want to.

Once I mastered git I started finding all kinds of other uses for it. For example, I was recently working on a project in which I was scraping a thousand pages in order to migrate them to a new CMS. At the beginning of the project, I scraped the pages and stored them in JSON files, which I added to git.  At the end of the project, I re-scraped the pages and used git to tell me which pages had been updated and to show me which words had changed. 

On another project, I cut a daily import process from hours to seconds by using git to determine what had changed in a large inventory file. On a third, I used multiple remotes with Jenkins jobs to create a network of sites that run a shared codebase while allowing individual variations. Git is a good friend to have.

Hope you found something useful in my setup. Have any suggestions on taking it to the next level?
 

Oct 29 2018
Oct 29

At this year's BADCamp, our Senior Web Architect Nick Lewis led a session on Gatsby and the JAMstack. The JAMstack is a web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. Gatsby is one of the leading JAMstack-based static page generators, and this session primarily covers how to integrate it with Drupal. 

Our team has been developing a "Gatsby Drupal Kit" over the past few months to help jump start Gatsby-Drupal integrations. This kit is designed to work with a minimal Drupal install as a jumping off point, and give a structure that can be extended to much larger, more complicated sites.

This session will leave you with: 

1. A base Drupal 8 site that is connected with Gatsby.  

2. Best practices for making Gatsby work for real sites in production.

3. Sane patterns for translating Drupal's structure into Gatsby components, templates, and pages.

This is not an advanced session for those already familiar with React and Gatsby. Recommended prerequisites are a basic knowledge of npm package management, git, CSS, Drupal, web services, and JavaScript. Watch the full session below. 

Oct 23 2018
Oct 23

PHP 5.6 will officially stop receiving security fixes on December 31, 2018. This software has not been actively developed for a number of years, but people have been slow to move off it. Beginning in the new year, no bug fixes will be released for this version of PHP. This opens the door for a dramatic increase in security risk if you are not beginning the new year on a version of PHP 7. PHP 7 was released back in December 2015, and PHP 7.2 is the latest version that you can update to. PHP did skip over 6, so don’t even try searching for it.

Drupal 8.6 is the final Drupal version that will support PHP 5.6. Many other CMSs will be dropping their support for PHP 5.6 in their latest versions as well. Simply because PHP 5.6 is supported in that version does not mean that you will be safe from security bugs; you will still need to upgrade your PHP version before December 31, 2018. In addition to the security risks, you have already been missing out on many improvements that have been made to PHP.

What Should You Do About This?

You are probably thinking “Upgrade, I get it.” It may actually be more complicated than that, and you will likely need to refactor some of your code. 90-95% of your code should be fine. The version of your CMS may affect the complexity of your conversion. Most major CMSs will handle PHP 7 right out of the box in their most recent versions.

By upgrading to a version of PHP 7, you will see a variety of performance improvements, the most dramatic being speed. Zend Technologies, the company behind PHP’s Zend Engine, ran performance tests on a variety of PHP applications to compare the performance of PHP 7 vs PHP 5.6. These tests compared requests per second across the two versions, which relates to the speed at which code is executed and how fast queries to the database and server are returned. They showed that PHP 7 runs twice as fast, with additional improvements in memory consumption.

How Can Mobomo Help?

Mobomo’s team is highly experienced, not only in assisting with your conversion, but with the review of your code to ensure your environment is PHP 7 ready.  Our team of experts will review your code and uncover the exact amount of code that needs to be converted. There are a good number of factors that could come into play and affect your timeline. The more customizations and smaller plugins that your site contains, the more complex your code review and your eventual conversion could be. Overall, depending on the complexity of the code, your timeline could vary but this would take a maximum of 3 weeks.

Important Things to Know:

  1. How many contributed modules does your site contain?
  2. How many custom modules does your site contain?
  3. What does your environment look like?
Sep 18 2018
Sep 18

A slick new feature was recently added to Drupal 8 starting with the 8.5 release — out-of-the-box off-canvas dialog support.

Off-canvas dialogs are those which slide in from off the page. They push existing content over in order to make space for themselves while keeping that content unobstructed, unlike a traditional dialog popup. These dialogs are often used for menus on smaller screens. Most Drupal 8 users are familiar with Admin Toolbar's use of an off-canvas style menu tray, which is automatically enabled on smaller screens.

Admin toolbar off-canvas

Drupal founder Dries posted a tutorial and I finally got a chance to try it myself.

In my case, I was creating a form for reviewers to submit reviews of long and complicated application submissions. Reviewers needed to be able to easily access the entire application while entering their review. A form at the bottom of the screen would have meant too much scrolling, and a traditional popup would have blocked much of the content they needed to see. Therefore, an off-canvas style dialog was the perfect solution. 

Build your own

With the latest updates to Drupal core, you can now easily add your own off-canvas dialogs.

Create a page for your off-canvas content 

The built-in off-canvas integration is designed to load Drupal pages into the dialog window (and only pages, as far as I can tell). So you will need either an existing page, such as a node edit form, or you'll need to create your own custom page through Drupal's routing system, which will contain your custom form or other content. In my case, I created a custom page with a custom form.

Create a Link

Once you have a page that you would like to render inside the dialog, you'll need to create a link to that page. This will function as the triggering element to load the dialog.

In my case, I wanted to render the review form dialog from the application full node display itself. I created an "extra field" using hook_entity_extra_field_info(), built the link in hook_ENTITY_TYPE_view(), and then configured the new link field using the Manage Display tab for my application entity. 

// At the top of custom.module, so Url::fromRoute() below resolves.
use Drupal\Core\Url;

/**
 * Implements hook_entity_extra_field_info().
 */
function custom_entity_extra_field_info() {
  // Expose a pseudo-field on the 'application' node type so the link can be
  // positioned via the Manage Display tab.
  $extra['node']['application']['display']['review_form_link'] = array(
    'label' => t('Review Application'),
    'description' => t('Displays a link to the review form.'),
    'weight' => 0,
  );
  return $extra;
}

/**
 * Implements hook_ENTITY_TYPE_view().
 */
function custom_node_view(array &$build, Drupal\Core\Entity\EntityInterface $entity, Drupal\Core\Entity\Display\EntityViewDisplayInterface $display, $view_mode) {
  if ($display->getComponent('review_form_link')) {
    $build['review_link'] = array(
      '#title' => t('Review Application'),
      '#type' => 'link',
      '#url' => Url::fromRoute('custom.review_form', ['application' => $entity->id()]),
    );
  }
}

Add off-canvas to the link

Next you just need to set the link to open using off-canvas instead of as a new page.

There are four attributes to add to your link array in order to do this:

      '#attributes' => array(
        'class' => ['use-ajax'],
        'data-dialog-renderer' => 'off_canvas',
        'data-dialog-type' => 'dialog',
        'data-dialog-options' => '{"width":"30%"}'
      ),
      '#attached' => [
        'library' => [
          'core/drupal.dialog.ajax',
        ],
      ],

The first three attributes are required to get your dialog working and the last is recommended, as it will let you control the size of the dialog.

Additionally, you'll need to attach the Drupal ajax dialog library. Before I added the library to my implementation, I was running into an issue where some user roles could access the dialog and others could not. It turned out this was because the library was being loaded for roles with access to the Admin Toolbar.

The rendered link will end up looking like:

<a href="..." class="use-ajax" data-dialog-renderer="off_canvas" data-dialog-type="dialog" data-dialog-options='{"width":"30%"}'>Review Application</a>

And that's it! Off-canvas dialog is done and ready for action.

A demo of the off-canvas dialog in action
Jul 26 2018
Jul 26

Intro

In this post, I’m going to run through how I set up visual regression testing on sites. Visual regression testing is essentially the act of taking a screenshot of a web page (whether the whole page or just a specific element) and comparing that against an existing screenshot of the same page to see if there are any differences.

There’s nothing worse than adding a new component, tweaking styles, or pushing a config update, only to have the client tell you two months later that some other part of the site is now broken, and you discover it’s because of the change that you pushed… now it’s been two months, and reverting that change has significant implications.

That’s the worst. Literally the worst.

All kinds of testing can help improve the stability and integrity of a site. There’s Functional, Unit, Integration, Stress, Performance, Usability, and Regression, just to name a few. What’s most important to you will change depending on the project requirements, but in my experience, Functional and Regression are the most common, and in my opinion are a good baseline if you don’t have the capacity to write all the tests.

If you’re reading this, you probably fall into one of two categories:

  1. You’re already familiar with Visual Regression testing, and just want to know how to do it
  2. You’re just trying to get info on why Visual Regression testing is important, and how it can help your project.

In either case, it makes the most sense to dive right in, so let’s do it.

Tools

I’m going to be using WebdriverIO to do the heavy lifting. According to the website:

WebdriverIO is an open source testing utility for nodejs. It makes it possible to write super easy selenium tests with Javascript in your favorite BDD or TDD test framework.

It basically sends requests to a Selenium server via the WebDriver Protocol and handles its response. These requests are wrapped in useful commands and can be used to test several aspects of your site in an automated way.

I’m also going to run my tests on Browserstack so that I can test IE/Edge without having to install a VM or anything like that on my Mac.

Process

Let’s get everything setup. I’m going to start with a Drupal 8 site that I have running locally. I’ve already installed that, and a custom theme with Pattern Lab integration based on Emulsify.

We’re going to install the visual regression tools with npm.

If you already have a project running that uses npm, you can skip this step. But, since this is a brand new project, I don’t have anything using npm, so I’ll create an initial package.json file using npm init.

  • npm init -y
    • Update the name, description, etc. and remove anything you don’t need.
    • My updated file looks like this:
{ "name": "visreg", "version": "1.0.0", "description": "Website with visual regression testing", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } }   "name": "visreg",  "version": "1.0.0",  "description": "Website with visual regression testing",  "scripts": {    "test": "echo \"Error: no test specified\" && exit 1"

Now, we’ll install the npm packages we’ll use for visual regression testing.

  • npm install --save-dev webdriverio chai wdio-mocha-framework wdio-browserstack-service wdio-visual-regression-service node-notifier
    • This will install:
      • WebdriverIO: The main tool we’ll use
      • Chai syntax support: “Chai is an assertion library, similar to Node’s built-in assert. It makes testing much easier by giving you lots of assertions you can run against your code.”
      • Mocha syntax support “Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun.”
      • The Browserstack wdio package So that we can run our tests against Browserstack, instead of locally (where browser/OS differences across developers can cause false-negative failures)
      • Visual regression service This is what provides the screenshot capturing and comparison functionality
      • Node notifier This is totally optional but supports native notifications for Mac, Linux, and Windows. We’ll use these to be notified when a test fails.

Now that all of the tools are in place, we need to configure our visual regression preferences.

You can run the configuration wizard by typing ./node_modules/webdriverio/bin/wdio, but I’ve created a git repository with not only the webdriver config file but an entire set of files that scaffold a complete project. You can get them here.

Follow the instructions in the README of that repo to install them in your project.

These files will get you set up with a fairly sophisticated, but completely manageable visual regression testing configuration. There are some tweaks you’ll need to make to fit your project that are outlined in the README and the individual markdown files, but I’ll run through what each of the files does at a high level to acquaint you with each.

  • .gitignore
    • The lines in this file should be added to your existing .gitignore file. It’ll make sure your diffs and latest images are not committed to the repo, but allow your baselines to be committed so that everyone is comparing against the same baseline images.
  • VISREG-README.md
    • This is an example readme you can include to instruct other/future developers on how to run visual regression tests once you have it set up
  • package.json
    • This just has the example test scripts. One for running the full suite of tests, and one for running a quick test, handy for active development. Add these to your existing package.json
  • wdio.conf.js
    • This is the main configuration file for WebdriverIO and your visual regression tests.
    • You must update this file based on the documentation in wdio.conf.md
  • wdio.conf.quick.js
    • This is a file you can use to run a quick test (e.g. against a single browser instead of the full suite defined in the main config file). It’s useful when you’re doing something like refactoring an existing component, and/or want to make sure changes in one place don’t affect other sections of the site.
  • tests/config/globalHides.js
    • This file defines elements that should be hidden in ALL screenshots by default. Individual tests can use this, or define their own set of elements to hide. Update these to fit your actual needs.
  • tests/config/viewports.js
    • This file defines what viewports your tests should run against by default. Individual tests can use these, or define their own set of viewports to test against. Update these to the screen sizes you want to check.

Running the Test Suite

I’ll copy the example homepage test from the example-tests.md file into a new file /web/themes/custom/visual_regression_testing/components/_patterns/05-pages/home/home.test.js. (I’m putting it here because my wdio.conf.js file is looking for test files in the _patterns directory, and I like to keep test files next to the file they’re testing.)

The only thing you’ll need to update in this file is the relative path to the globalHides.js file. It should be relative from the current file. So, mine will be:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

With that done, I can simply run npm test and the tests will run on BrowserStack against the three OS/browser configurations I’ve specified. While they’re running, we can head over to https://automate.browserstack.com/ to watch the tests being run against Chrome, Firefox, and IE 11.

Once tests are complete, we can view the screenshots in the /tests/screenshots directory. Right now, the baseline shots and the latest shots will be identical because we’ve only run the test once, and the first time you run a test, it creates the baseline from whatever it sees. Future tests will compare the most recent “latest” shot to the existing baseline, and will only update/create images in the latest directory.

At this point, I’ll commit the baselines to the git repo so that they can be shared around the team, and used as baselines by everyone running visual regression tests.

If I run npm test again, the tests will all pass because I haven’t changed anything. I’ll make a small change to the button background color which might not be picked up by a human eye but will cause a regression that our tests will pick up with no problem.

In the _buttons.scss file, I’m going to change the default button background color from $black (#000) to $gray-darker (#333). I’ll run the style script to update the compiled css and then clear the site cache to make sure the change is implemented. (When actively developing, I suggest disabling cache and keeping the watch task running. It just makes things easier and more efficient.)
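For reference, the kind of change I’m making looks roughly like this. The selector name here is illustrative; the variables come from the theme’s existing Sass.

// _buttons.scss
.button {
  // Was: background-color: $black;   // #000
  background-color: $gray-darker;     // #333
}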

This time all the tests fail, and if we look at the images in the diff folder, we can clearly see that the “search” button is different as indicated by the bright pink/purple coloring.

If I open up one of the “baseline” images, and the associated “latest” image, I can view them side-by-side, or toggle back and forth. The change is so subtle that a human eye might not have noticed the difference, but the computer easily identifies a regression. This shows how useful visual regression testing can be!

Let’s pretend this is actually a desired change. The original component was created before the color was finalized, black was used as a temporary color, and now we want to capture the update as the official baseline. Simply move the “latest” image into the “baselines” folder, replacing the old baseline, and commit that to your repo. Easy peasy.

Running an Individual Test

If you’re creating a new component and just want to test that one thing, or you run the suite and find a regression in a single image, it’s useful to be able to run an individual test instead of the entire suite. This is especially true once you have a large suite of test files that cover dozens of aspects of your site. Let’s take a look at how this is done.

I’ll create a new test in the organisms folder of my theme at /search/search.test.js. There’s an example of an element test in the example-tests.md file, but I’m going to do a much more basic test, so I’ll actually start out by copying the homepage test and then modify that.

The first thing I’ll change is the describe section. This is used to group and name the screenshots, so I’ll update it to make sense for this test. I’ll just replace “Home Page” with “Search Block”.

Then, the only other thing I’m going to change is what is to be captured. I don’t want the entire page, in this case. I just want the search block. So, I’ll update checkDocument (used for full-page screenshots) to checkElement (used for single element shots). Then, I need to tell it what element to capture. This can be any css selector, like an id or a class. I’ll just inspect the element I want to capture, and I know that this is the only element with the search-block-form class, so I’ll just use that.

I’ll also remove the timeout: since we’re just taking a screenshot of a single element, we don’t need to worry about the page taking longer to load than the default of 60 seconds. (It wasn’t strictly necessary on the full-page test either.)

My final test file looks like this:

const visreg = require('../../../../../../../../tests/config/globalHides.js');

describe('Search Block', function () {
  it('should look good', function () {
    browser
      .url('./')
      .checkElement('.search-block-form', {hide: visreg.hide, remove: visreg.remove})
      .forEach((item) => {
        expect(item.isWithinMisMatchTolerance).to.be.true;
      });
  });
});

With that in place, this test will run when I use npm test because it’s globbing, and running every file that ends in .test.js anywhere in the _patterns directory. The problem is this also runs the homepage test. If I just want to update the baselines of a single test, or I’m actively developing a component and don’t want to run the entire suite every time I make a locally scoped change, I want to be able to just run the relevant test so that I don’t waste time waiting for all of the irrelevant tests to pass.

We can do that by passing the --spec flag.

I’ll commit the new test file and baselines before I continue.

Now I’ll re-run just the search test, without the homepage test.

npm test -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

We have to add the first set of -- because we’re using custom npm scripts to make this work. Basically, it passes anything that follows directly to the custom script (in our case test is a custom script that calls ./node_modules/webdriverio/bin/wdio). More info on the run-script documentation page.

If I scroll up a bit, you’ll see that when I ran npm test there were six passing tests. That is one run for each browser for each test file. We have two tests, and we’re checking against three browsers, so that’s a total of six tests that were run.

This time, we have three passing tests because we’re only running one test against three browsers. That cut our test run time by more than half (from 106 seconds to 46 seconds). If you’re actively developing or refactoring something that already has test coverage, even that can seem like an eternity if you’re running it every few minutes. So let’s take this one step further and run a single test against a single browser. That’s where the wdio.conf.quick.js file comes into play.

Running Test Against a Subset of Browsers

The wdio.conf.quick.js file will, by default, run test(s) against only Chrome. You can, of course, change this to whatever you want (for example if you’re only having an issue in a specific version of IE, you could set that here), but I’m just going to leave it alone and show you how to use it.

You can use this to run the entire suite of tests or just a single test. First, I’ll show you how to run the entire suite against only the browser defined here, then I’ll show you how to run a single test against this browser.

In the package.json file, you’ll see the test:quick script. You could pass the config file directly to the first script by typing npm test -- wdio.conf.quick.js, but that’s a lot more typing than npm run test:quick and you (as well as the rest of your team) have to remember the file name. Capturing the file name in a second custom script simplifies things.

When I run npm run test:quick, you’ll see that two tests were run. We have two tests, and they’re run against one browser, so that simplifies things quite a bit. And you can see it ran in only 31 seconds. That’s definitely better than the 100 seconds the full test suite takes.

Let’s go ahead and combine this with the technique for running a single test to cut that time down even further.

npm run test:quick -- --spec web/themes/custom/visual_regression_testing/components/_patterns/03-organisms/search/search.test.js

This time you’ll see that it only ran one test against one browser and took 28 seconds. There’s actually not a huge difference between this and the last run because we can run three tests in parallel. And since we only have two tests, we’re not hitting the queue which would add significantly to the entire test suite run time. If we had two dozen tests, and each ran against three browsers, that’s a lot of queue time, whereas even running the entire suite against one browser would be a significant savings. And obviously, one test against one browser will be faster than the full suite of tests and browsers.

So this is super useful for active development of a specific component or element that has issues in one browser as well as when you’re refactoring code to make it more performant, and want to make sure your changes don’t break anything significant (or if they do, alert you sooner than later). Once you’re done with your work, I’d still recommend running the full suite to make sure your changes didn’t inadvertently affect another random part of the site.

So, those are the basics of how to set up and run visual regression tests. In the next post, I’ll dive into our philosophy of what we test, when we test, and how it fits into our everyday development workflow.

Jul 23 2018
Jul 23
Moshe Weitzman

I recently worked with the Mass.gov team to transition its development environment from Vagrant to Docker. We went with “vanilla Docker,” as opposed to one of the fine tools like DDev, Drupal VM, Docker4Drupal, etc. We are thankful to those teams for educating and showing us how to do Docker right. A big benefit of vanilla Docker is that skills learned there are generally applicable to any stack, not just LAMP+Drupal. We are super happy with how this environment turned out. We are especially proud of our MySQL Content Sync image — read on for details!

Pretty docks at Boston Harbor.

The heart of our environment is the docker-compose.yml. Here it is, then read on for a discussion about it.

Developers use .env files to customize aspects of their containers (e.g. VOLUME_FLAGS, PRIVATE_KEY, etc.). This built-in feature of Docker is very convenient. See our .env.example file:
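The actual .env.example isn’t reproduced here, but based on the variables mentioned above, a developer’s .env might look something like this (values are purely illustrative):

# .env
COMPOSE_PROJECT_NAME=mass
VOLUME_FLAGS=cached
PRIVATE_KEY=~/.ssh/id_rsa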

The most innovative part of our stack is the mysql container. The Mass.gov Drupal database is gigantic. We have tens of thousands of nodes and 500,000 revisions, each with an unholy number of paragraphs, reference fields, etc. Developers used to drush sql:sync the database from Prod as needed. The transfer and import took many minutes, and had some security risk in the event that sanitization failed on the developer’s machine. The question soon became, “how can we distribute a mysql database that’s already imported and sanitized?” It turns out that Docker is a great way to do just this.

Today, our mysql container builds on CircleCI every night. The build fetches, imports, and sanitizes our Prod database. Next, the build does:

That is, we commit and push the refreshed image to a private repository on Docker Cloud. Our mysql image is 9GB uncompressed but thanks to Docker, it compresses to 1GB. This image is really convenient to use. Developers fetch a newer image with docker-compose pull mysql. Developers can work on a PR and then when switching to a new PR, do a simple ahoy down && ahoy up. This quickly restores the local Drupal database to a pristine state.
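The build steps themselves aren’t shown here, but conceptually the nightly job ends with a commit and push along these lines (container, image, and registry names are placeholders, not the real ones):

# Bake the imported, sanitized data into a new image layer.
docker commit mysql_container example/mysql-sanitized:latest
# Push the refreshed image to the private registry.
docker push example/mysql-sanitized:latest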

In order for this to work, you have to store MySQL data *inside* the container, instead of using a Docker Volume. Here is the Dockerfile for the mysql image.

Our Drupal container is open source — you can see exactly how it’s built. We start from the official PHP image, then add PHP extensions, Apache config, etc.

An interesting innovation in this container is the use of Docker Secrets in order to safely share an SSH key from host to the container. See this answer and mass_id_rsa in the docker-compose.yml above. Also note the two files below which are mounted into the container:

  • Configure SSH to use the secrets file as the private key
  • Automatically run ssh-add when logging into the container

Traefik is a “cloud edge router” that integrates really well with docker-compose. Just add one or two labels to a service and its web site is served through Traefik. We use Traefik to provide nice local URLs for each of our services (www.mass.local, portainer.mass.local, mailhog.mass.local, …). Without Traefik, all these services would usually live at the same URL with differing ports.
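As a sketch of what that looks like in docker-compose.yml (the service name, host rule, and port below are assumptions, using Traefik 1.x label syntax):

services:
  drupal:
    # ...
    labels:
      - traefik.enable=true
      - traefik.frontend.rule=Host:www.mass.local
      - traefik.port=80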

In the future, we hope to upgrade our local sites to SSL. Traefik makes this easy as it can terminate SSL. No web server fiddling required.

Our repository features a .ahoy.yml file that defines helpful aliases (see below). In order to use these aliases, developers download Ahoy to their host machine. This helps us match one of the main attractions of tools like DDev/Lando — their brief and useful CLI commands. Ahoy is a convenience feature and developers who prefer to use docker-compose (or their own bash aliases) are free to do so.

Our development environment comes with 3 fine extras:

Of course, we are never satisfied. Here are a couple issues to tackle:

Jul 11 2018
Jul 11

Someone recently asked the following question in Slack. I didn’t want it to get lost in Slack’s history, so I thought I’d post it here:

Question: I’m setting a CSS background image inside my Pattern Lab footer template which displays correctly in Pattern Lab; however, Drupal isn’t locating the image. How is sharing images between PL and Drupal supposed to work?

My Answer: I’ve been using Pattern Lab’s built-in data.json files to handle this lately. e.g. you could do something like:

footer-component.twig:

...
{% set footer_background_image = footer_background_image|default('/path/relative/to/drupal/root/footer-image.png') %}
...

This makes the image load for Drupal, but fails for Pattern Lab.

At first, to fix that, we used the footer-component.yml file to set the path relative to PL. e.g.:

footer-component.yml:

footer_background_image: /path/relative/to/pattern-lab/footer-image.png

The problem with this is that on every Pattern Lab page where we included the footer component, we had to add that line to the yml file for the page. e.g.:

basic-page.twig:

...
{% include '/whatever/footer-component.twig' %}
...

basic-page.yml:

...
footer_background_image: /path/relative/to/pattern-lab/footer-image.png

Rinse and repeat for each example page… That’s annoying.

Then we realized we could take advantage of Pattern Lab’s global data files.

So with the same footer-component.twig file as above, we can skip the yml files, and just add the following to a data file.

theme/components/_data/paths.json: (* see P.S. below)

{ "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png" }     "footer_background_image": "/path/relative/to/pattern-lab/footer-image.png"

Now, we can include the footer component in any example Pattern Lab pages we want, and the image is globally replaced in all of them. Also, Drupal doesn’t know about the json files, so it pulls the default value, which of course is relative to the Drupal root. So it works in both places.

We did this with our icons in Emulsify:

_icon.twig

paths.json

End of the answer to your original question… Now for a little more info that might help:

P.S. You can create as many json files as you want here. Just be careful you don’t run into name-spacing issues. We accounted for this in the header.json file by namespacing everything under the “header” array. That way the footer nav doesn’t pull our header menu items, or vice versa.

Example homepage home.twig that pulls menu items for the header and the footer from data.json files

header.json

footer.json
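As a rough sketch of that namespacing idea (the keys and values here are illustrative, not the actual Emulsify files):

theme/components/_data/header.json:

{
  "header": {
    "menu_items": [
      { "title": "About", "url": "/about" },
      { "title": "Contact", "url": "/contact" }
    ]
  }
}

theme/components/_data/footer.json:

{
  "footer": {
    "menu_items": [
      { "title": "Privacy", "url": "/privacy" }
    ]
  }
}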

May 18 2018
May 18

The Content Moderation core module was marked stable in Drupal 8.5. Think of it like the contributed module Workbench Moderation in Drupal 7, but without all the Workbench editor Views that never seemed to completely make sense. The Drupal.org documentation gives a good overview.

Content Moderation requires the Workflows core module, allowing you to set up custom editorial workflows. I've been doing some work with this for a new site for a large organization, and have some tips and tricks.

Less Is More

Resist adding roles, workflows, and workflow states, and make sure each one is justified by a business need. Stakeholders may ask for many roles and many workflow states without realizing the increased complexity and likelihood of editorial confusion that results.

If you create an editorial workflow that is too strict and complex, editors will tend to find ways to work around the system. A good compromise is to ask the team to try something simple first and add complexity down the line if needed.

Try to use the same workflow on all content types if you can. It makes a much simpler mental model for everyone.

Transitions are Key

Transitions between workflow states will be what you assign as permissions to roles. Typically, you'll want to lock down who can publish content, allowing content contributors to create new drafts only.

Transitions image from Drupal.org: transitions between workflow states must be thought through

You might want some paper to map out all the paths between workflow states that content might go through. The transitions should be named as verbs. If you can't think of a clear, descriptive verb that applies, you can go with "Set state to %your_state" or "Mark as %your_state." Don't sweat the names of transitions too much though; they don't seem to ever appear in an editor-facing way anyway.

Don't forget to allow editors to undo transitions. If they can change the state from "Needs Work" to "Needs Review," make sure they can change it back to "Needs Work."

You must allow Non-Transitions

Make sure the transitions include non-transitions. The transitions represent which options will be available for the state when you edit content. In the above (default core) example, it is not possible to edit archived content and maintain the same state of archived. You'd have to change the status to published and then back to archived. In fact, it would be very easy to accidentally publish what you had archived, because editing the content will set it back to published as the default setting. Therefore, make sure that draft content can stay as draft when edited, etc. 
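In configuration terms, a non-transition is just a transition whose "from" list includes the same state as its "to" value. A minimal sketch of the relevant piece of a workflow config, using the default editorial workflow's machine names (your states and labels may differ):

type_settings:
  transitions:
    create_new_draft:
      label: 'Create New Draft'
      from:
        - draft
        - published
      to: draft

Because "draft" appears in both from and to, editing a draft can keep it in the draft state.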

Transition Ordering is Crucial

Ordering of the transitions here is very important because the state options on the content editing form will appear as a select list of states ordered by the transition order, and it will default to the first available one.

If an editor misses setting this option correctly, they will simply get the first transition, so make sure that first transition is a good default. To set the right order, you have to map each state to what should be its default value when editing. You may have to add additional transitions to make this all make sense.

As for the ordering of workflow states themselves, this will only affect ordering when states are listed, for example in a Views exposed filter of workflow states or within the workflows administration.

Minimize Accidental Transitions

But why wouldn't my content's workflow state stay the same by default when editing the content (assuming the user has access to a transition that keeps it the same)? I have to set an order correctly to keep a default value from being lost?

Well, that's a bug as of 8.5.3 that will be fixed in the next 8.5 bugfix release. You can add the patch to your composer.json file if you're tired of your workflow states getting accidentally changed.
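If you're using the cweagans/composer-patches plugin, that means adding an entry like this to the "extra" section of composer.json (the description and patch URL below are placeholders; use the real patch file from the core issue):

"extra": {
  "patches": {
    "drupal/core": {
      "Workflow state lost when editing moderated content": "https://www.drupal.org/files/issues/EXAMPLE.patch"
    }
  }
}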

Test your Workflow

With all the states, transitions, transition ordering, roles, and permissions, there are plenty of opportunities for misconfiguration even for a total pro with great attention to detail like yourself. Make sure you run through each scenario using each role. Then document the setup in your site's editor documentation while it's all fresh and clear in your mind.

What DOES Published EVEN MEAN ANYMORE?

With Content Moderation, the term "published" now has two meanings. Both content and content revisions can be published (but only content can be unpublished).

For content, publishing status is a boolean, as it has always been. When you view published content, you will be viewing the latest revision, which is in a published workflow state.

For a content revision, "published" is a workflow state.

Therefore, when you view the content administration page, which shows you content, not content revisions, status refers to the publishing status of the content, and does not give you any information on whether there are unpublished new revisions.

Where's my Moderation Dashboard?

From the content administration page, there is a tab for "moderated content." This is where you can send your editors to see if there is content with drafts they need to review. Unfortunately, it's not a very useful report since it has neither filtering nor sorting. Luckily work has been done recently to make the Views integration for Content Moderation/Workflows decent, so I was able to replace this dashboard with a View and shared the config.

Using Views for a Moderation Dashboard: my Views-based Content Moderation dashboard

Reviewer Access

In a typical editorial workflow, content editors create draft edits and then need to solicit feedback and approval from stakeholders or even a legal team. To use content moderation, these stakeholders need to have Drupal accounts and log in to look at the "Latest Revision" tab on the content. This is an obstacle for many organizations because the stakeholders are either very busy, not very web-savvy, or both.

You may get requests for a workflow in which content creation and review takes place on a non-live environment and then require some sort of automated content deployment process. Content deployment across environments is possible using the Deploy module, but there is a lot of inherent complexity involved that you'll want to avoid if you can.

I created an Access Latest module that allows editors to share links with an access token that lets reviewers see the latest revision without logging in.

Access Latest lets reviewers see drafts without logging in

Log Messages BUG

As of 8.5.3, you may run into a bug in which users without "administer content" permission cannot add a revision log message when they edit content. There are a few issues related to this, and the fix should be out in the next bugfix release. I had success with this patch and then re-saving all my content types.

Jan 18 2018
Jan 18

One of the things we do for our clients at Advomatic is an annual site audit – a high-level, kick-the-tires kind of site inspection. For Drupal sites, we check the logs for any glaring errors, check for overrides in Features, run some SEO and accessibility testing, and, of course, take it for a speed test.

If you run speed tests (like Google’s Page Speed Insights), you have probably seen a common, vexing error: “Render-blocking javascript and CSS.” What’s that?

Optimize Images, Eliminate render-blocking Javascript and CSS in above-the-fold content. Your page has 4 blocking script resources and 6 blocking CSS resources. This causes a delay in rendering your page. None of the above-the-fold content on your page could be rendered without waiting for the following resources to load. Try to defer or asynchronously load blocking resources, or inline the critical portions of these resources directly in the HTML.

Pagespeed Insight’s error message for render blocking assets.

Large CSS/JS assets can block rendering “above-the-fold” content. Modern browsers tend to allow concurrent downloading of 6 to 8 files at any given time (a few offer more, now). So developers aggregate and compress CSS and JS files so we have less to load, but that also means front-loading large — though compressed — styles and javascript files. This is a recipe for a log jam.

Here’s how to beat this speed bump. Using the methods described below, I’ve seen Pagespeed Insight scores on Drupal sites increase by 30% or more.

Javascript

For javascript, the solution is fairly easy: ensure the files are in the footer. In Drupal 8, this is already the standard, unless you have added some elsewhere. For your javascript to load in the header, you actually have to set header: true in your theme’s libraries.yml file. (Also, be sure you are not loading any javascript that is not needed on the page.) In Drupal 7, you will need to move the files, perhaps using Advanced CSS/JS Aggregation or manually.
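For reference, a Drupal 8 library definition that forces its JavaScript into the header looks something like this sketch (the library and file names are made up):

# mytheme.libraries.yml
header-scripts:
  header: true
  js:
    js/must-load-early.js: {}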

If there is specific JS that needs to be there as the page loads, you may want to defer it instead. Again, AdvAgg can help you do that, or you can defer it manually.

CSS

While moving files to the footer technically works for CSS as well, it introduces a new problem: the dreaded FOUC, or “flash of unstyled content.” Put plainly, the content is loading before the styles, so users – especially on slower connections – will see a very ugly site until the page is completely done loading. While not the end of the world, it makes for an unpleasant user experience.

What we do to counter FOUC is load a “Critical CSS” file first. Critical CSS is, as it sounds, any CSS that is crucial to making above-the-fold content appear close to the final product. Think layout, position, readability, sizing – particularly in the header … anything that will smooth a transition to the full CSS loading. These styles will be put in their own file (as straight CSS) and loaded inline, in the page’s <head>.

Place this in your html.html.twig file, in the <head>:

When you view your site, those styles will now load directly in the <head>.
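One way to do the inlining (a sketch, assuming your critical styles are compiled to a critical.css file inside your theme, and using Twig’s source() function to print the file contents) is:

<head>
  ...
  <style>
    {{ source(directory ~ '/css/critical.css') }}
  </style>
  ...
</head>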

Now, sifting through your CSS to figure out what is critical is a challenge in and of itself. I’ve tried using automated grunt/gulp tasks (critical, grunt-critical, grunt-criticalcss) that will take “snapshots” of styles that have been called for above-the-fold content only on a specific page, but those tools have their limitations. (For instance, you may build your critical CSS file from a snapshot of a wide view of your page, but then it may miss styles needed for a mobile version of the page. Or there are just too many variations page-to-page on a site with dynamic content.) It’s possible some of these projects have improved since I last checked, so it still may be worth looking into.

If you have (wisely) built your styles using discrete files for base, regions, components, etc., you *could* do a compile of just your base, layout, header, and header component styles, and that would put you in a good position to start building your critical CSS file. You could use a tool like Sassmeister to compile outside your normal workflow.

If you haven’t (or you have inherited a site will less precision), you may just end up taking that final, compiled CSS file, un-compressing it, and doing your best to grab styles for the header and top part of the body (taking special care for the homepage.) Like I said, it’s an imprecise science.

Once the critical CSS is in place, you can try moving the rest of the CSS to the footer. In Drupal 8, this means moving this line in html.html.twig:

Move it down to just before the closing </body> tag.

In most cases, fine tuning what should be in the Critical CSS file takes a little trial and error.

Like with any style changes, make sure you review in a variety of browsers, devices and widths. Also, be sure to try throttling your page speed, under the Google Chrome web inspector’s Network tab. This allows you to simulate a slower network. 

Network tab of Chrome’s web inspector

Note that you will need to maintain this new file. Any time you make changes to things included in the Critical CSS file, you will need to manually adjust it. 

A final word of caution: you may very well be shifting around some structural stuff, and regressions can happen, especially at that level. Testing is very important – by you, and by the client, particularly on inherited sites. In a dream world, we’d have ubiquitous and comprehensive visual regression testing tools to catch anything that might have changed in the underlying load order of styles – but that level of safety netting is rare!

With all this in mind, you can plan for a critical CSS file for the next project that requires stellar performance: start out with all the javascript in the footer, prepare your CSS in compartmentalized Sass files that can be funneled to a critical CSS file, run speed tests throughout the project to see where problems are introduced, and, since you won’t want to maintain it during development, hold off until near-launch to generate your inline critical CSS file. 

Jul 06 2017
Jul 06

The problem

If you are using Drupal’s Configuration Management subsystem to deploy configuration out to the production environment, you’ll run into a problem where the configuration .yml files contain your development settings. To avoid this, you’ll need to use the Configuration Split module.

I couldn’t find any good documentation for this, so I had to figure it out by trial and error. Here are the results of my investigations.

Development-only modules

In the simplest scenario, we want to enable a few extra modules on development environments (Devel, Kint, Stage File Proxy, Views UI, etc.), but not have these modules enabled on production.  For this we’ll need to create a new Configuration Split Setting for the development environments.

But first:

  1. Ensure that you have no config overrides.
  2. Enable Configuration Split module in a local environment.
  3. Export config, commit and deploy to the live environment as you usually would.
  4. Enable and configure all your development modules in the local environment.

Create a Configuration Split

Navigate to:

Administration » Configuration » Development » Synchronize » Configuration Split Setting » Add configuration split setting

Creating a new split configuration.

There’s a few things that will help you keep your sanity, but aren’t covered in the help text:

  • Keep the Machine name the same as the Folder.
  • The Folder is relative to the Drupal root (it does say this in the help text, I just skimmed right past it the first time).
  • Active should be checked (more on this later).

And add your development modules to the Blacklist. Anything listed here will be excluded from the main configuration export. No need to also select their configuration; you only need to select the modules. Disregard the Greylist (more on this later).

Export your configuration

From this point on, you will never run plain drush cex again. For the first time that you export your configuration, use this:

# Create the directory.
mkdir sites/default/config_dev
# Export the development configuration
drush csex config_dev
# Export the main configuration
drush csex

Greylists

As mentioned in the help text, this isn’t a great name, but it’s basically used for configuration that should have different values in different environments. E.g. payment processing configuration, Stage File Proxy URLs, Solr URLs, etc.

I’m not a fan of using Config Split for this.  I prefer to keep all this in settings.php.  Then it’s all in one place, you can easily see all the variations between environments, and if you need to make a change you’ll be less likely to forget something.

At the top of settings.php you’ll need some logic to determine which environment you’re in.  We almost exclusively use Pantheon, so we’ve got the following:

// The environment the current site is running on.
// Possible values: local, dev, test, live.
// Configuration further in this file sets different settings for different
// environments.
if (defined('PANTHEON_ENVIRONMENT')) {
  switch (PANTHEON_ENVIRONMENT) {
    case 'kalabox':
      $config['server_environment'] = 'local';
      break;

    case 'dev':
    case 'test':
    case 'live':
      $config['server_environment'] = PANTHEON_ENVIRONMENT;
      break;

    // Multidevs.
    default:
      $config['server_environment'] = 'dev';
  }
}
else {
  $config['server_environment'] = 'local';
}

Pantheon only supports sites/default/settings.php.  But if you are hosted elsewhere, then another method is to use different sites directories for different environments.   

Enabling the split configuration in development environments, but not production

Remember that active checkbox from above?  That’s what defines whether the configuration split is enabled or not.  The trick is that we want to have that setting be different on different environments.  This can be done in settings.php, leveraging the code block above.

// Use development config in dev environments.
if (in_array($config['server_environment'], ['live', 'test'])) {
  $config['config_split.config_split.config_dev']['status'] = FALSE;
}
else {
  $config['config_split.config_split.config_dev']['status'] = TRUE;
}

Make sure to use the same machine name as you configured previously.

Development Workflow

Three things to note:

  1. You don’t need to use any of the following Drush commands; you can still use the UI at:
    Administration » Configuration » Development » Synchronize
  2. If you prefer Drupal Console, there’s equivalent commands.
  3. At the time of this writing Pantheon uses Drush 8.1.3.  If you have Drush >= 8.1.10 you’ll be able to use the old cex and cim commands.

Pulling a database from production to a dev environment

The first step is to get the database.  How to do that depends on your hosting and local environments.  We’re fans of Kalabox, and the command is real simple:

kbox pull

Then import configuration

# Clear caches
drush cr
# Then import the development configuration only.
drush csim config_dev
# Check for config overrides from production, and get those back into code.
drush csex

Pushing configuration from dev to production

In the dev environment

After you made some configuration changes:

# Export the configuration.  
# This will update both config and config_dev.
drush csex

In the pre-production / production environment

# Import configuration.
# This will import config; and if active in settings.php, config_dev
drush csim

Next Steps

The above should be able to handle 95% of Drupal sites.  But if you’ve got more complicated requirements, you can always add more splits.

Automate all the things

Running a set of magic commands every time you push code or move a database from one environment to another is error prone, and a bit of a waste of time.  You can automate all of this using Pantheon’s Quicksilver.  On Kalabox the automation is a bit trickier (you’d need to create a custom plugin, which isn’t a well-trodden path), but we have high hopes for its successor: Lando (currently in alpha). 

Mar 07 2017
Mar 07

This weekend’s DrupalCamp London wasn’t my first Drupal event at all; I’ve been to three DrupalCon Europes, four DrupalCamp Dublins, a few other DrupalCamps in Ireland, and lots of meetups. But in this case I experienced a lot of ‘first times’ that I want to share.

This was the first time I’d attended a Drupal event representing a sponsor organisation, and as a result the way I experienced it was completely different.

Firstly, you focus more on your company’s goals rather than your personal aims. In this case I was helping Capgemini UK to engage and recruit people for our open positions. This allowed me to socialise more and try to connect with people. We also had T-shirts, so it was easier to attract people when you have something free to offer. I was also able to have conversations with other sponsors to see why they sponsored the event; some were also recruiting, but most of them were selling their solutions to prospective clients, Drupal developers and agencies.

The best part of this experience was the people I met from other companies and the attendees approaching us for a T-shirt or a job opportunity.

New member of Capgemini UK perspective

As a new joiner in the Capgemini UK Drupal team, I attended this event when I had been with the company for less than a month, and I am glad I could attend at such short notice in my new position. I think this says a lot about Capgemini’s focus on training and career development, and how much they care about Drupal.

As a new employee of the company, this event allowed me to meet more colleagues from different departments and teams in a non-working environment. Again, the best part of this experience was the people I met and the relationships I made.

I joined Capgemini from Ireland, so I was also new to the London Drupal community, and the DrupalCamp gave me the opportunity to connect and create relationships with other members of the community. Of course they were busy organising this great event, but I was able to contact some of the members, and I have to say they were very friendly when I approached any of the crew or other local members attending the event. I am very happy to have met some friendly people, and I am committed to helping and volunteering my time at future events, so this was a very good starting point. And again, the best part was the people I met.

Non-session perspective

As I had other duties I couldn’t attend all the sessions, but I was able to attend some sessions and the keynotes. Special mention goes to the Saturday keynote from Matt Glaman; it was very motivational and made me think that anyone can evolve as a developer if they try and seek out the resources to gain the knowledge. The closing keynote from Danese Cooper was very inspirational as well, about what Open Source is and what it should be, and that we, the developers, have the power to make it happen. We could also enjoy Malcolm Young’s presentation about code reviews.

Conclusion

Closing this article, I would like to come back to the best part of DrupalCamp for me this year, which was the people. They are always the best part of social events. I was able to catch up with old friends from Ireland, engage with people considering a position at Capgemini, and introduce myself to the London Drupal community, so overall I am very happy with this DrupalCamp London and will be happy to return next year. In the meantime I will be attending some Drupal meetups and trying to get involved in the community, so don’t hesitate to contact me if you have any questions or need my help.

Oct 27 2016
Oct 27

In a previous article on this blog, I talked about why code review is a good idea, and some aspects of how to conduct them. This time I want to dig deeper into the practicalities of reviewing code, and mention a few things to watch out for.

Code review is the first line of defence against hackers and bugs. When you approve a pull request, you’re putting your name to it - taking a share of responsibility for the change.

Once bad code has got into a system, it can be difficult to remove. Trying to find problems in an existing codebase is like looking for an unknown number of needles in a haystack, but when you’re reviewing a pull request it’s more like looking in a handful of hay. The difficult part is recognising a needle when you see one. Hopefully this article will help you with that.

Code review shouldn’t be a box-ticking exercise, but it can be helpful to have a list of common issues to watch out for. As well as the important question of whether the change will actually work, the main areas to consider are:

  • Security
  • Performance
  • Accessibility
  • Maintainability

I’ll touch on these areas in more detail - I’ll be talking about Drupal and PHP in particular, but a lot of the points I’ll make are relevant to other languages and frameworks.

Security

I don’t claim to be an expert on security, and often count myself lucky that I work in what my colleague Andrew Harmel-Law calls “a creative-inventive market, not a safety-critical one”.

Having said that, there are a few common things to keep an eye out for, and developers should be aware of the OWASP top ten list of vulnerabilities. When working with Drupal, you should bear in mind the Drupal security team’s advice for writing secure code. For me, the most important points to consider are:

Does the code accept user input without proper sanitisation?

In short - don’t trust user input. The big attack vectors like XSS and SQL injection are based on malicious text strings. Drupal provides several types of text filtering - the appropriate filter depends on what you’re going to do with the data, but you should always run user input through some kind of sanitisation.
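For example, in Drupal 8 the Component utility classes give you a couple of safe defaults (a sketch; which filter is appropriate depends on what you’re doing with the value):

use Drupal\Component\Utility\Html;
use Drupal\Component\Utility\Xss;

// Render user input strictly as plain text.
$safe_text = Html::escape($user_input);

// Or strip everything except a small whitelist of harmless tags.
$safe_markup = Xss::filter($user_input);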

Are we storing sensitive data anywhere we shouldn’t be?

Security isn’t just about stopping bad guys getting in where they shouldn’t. Think about what kind of data you have, and what you’re doing with it. Make sure that you’re not logging people’s private data inappropriately, or passing it across network in a way you shouldn’t. Even if the site you’re working on doesn’t have anything as sensitive as the Panama papers, you have a legal, professional, and personal responsibility to make sure that you’re handling data properly.

Performance

When we’re considering code changes, we should always think about what impact they will have on the end user, not least in terms of how quickly a site will load. As Google recently reminded us, page load speed is vital for user engagement. Slow, bloated websites cost money, both in terms of mobile data charges and lost revenue.

Does the change break caching?

Most Drupal performance strategies will talk about the value of caching. The aim of the game is to reduce the amount of work that your web server does. Ideally, the web server won’t do any work for a page request from an anonymous user - the whole thing will be handled by a reverse proxy cache, such as Varnish. If the request needs to go to the web server, we want as much of the page as possible to be served from an object cache such as Redis or Memcached, to minimise the number of database queries needed to render the page.

Are there any unnecessary uses of $_SESSION?

Typically, reverse proxy servers like Varnish will not cache pages for authenticated users. If the browser has a session, the request won’t be served by Varnish, but by the web server.

Here’s an illustration of why this is so important. This graph shows the difference in response time on a load test environment following a deployment that included some code to create sessions. There were some other changes that impacted performance, but this was the big one. As you can see, overall response time increased six-fold, with the biggest increase in the time spent by the web server processing PHP (the blue sections on the graphs), mainly because a few lines of code creating sessions had slipped through the net.

Graph showing dramatic increase in PHP evaluation time

Are there any inefficient loops?

The developers’ maxims “Don’t Repeat Yourself” and “Keep It Simple Stupid” apply to servers as well. If the server is doing work to render a page, we don’t want that work to be repeated or overly complex.

What’s the front end performance impact?

There’s no substitute for actually testing, but there are a few things that you can keep an eye out for when reviewing change. Does the change introduce any additional HTTP requests? Perhaps they could be avoided by using sprites or icon fonts. Have any images been optimised? Are you making any repeated DOM queries?

Accessibility

Even if you’re not an expert on accessibility, and don’t know ARIA roles, you can at least bear in mind a few general pointers. When it comes to testing, there’s a good checklist from the Accessibility Project, but here are some things I always try to think about when reviewing a pull request.

Will it work on a keyboard / screen reader / other input or output device?

Doing proper accessibility testing is difficult, and you may not have access to assistive technology, but a good rule of thumb is that if you can navigate using only a keyboard, it will probably work for someone using one of the myriad input devices. Testing is the only way to be certain, but here are a couple of simple things to remember when reviewing CSS changes: hover and focus should usually go together, and you should almost never use outline: none;.

Are you hiding content appropriately?

One piece of low-hanging fruit is to make sure that text is available to screen readers and other assistive technology. Any time I see display: none; in a pull request, alarm bells start ringing. It’s usually not the right way to hide content.

Maintainability

Hopefully the system you’re working on will last for a long time. People will have to work on it in the future. You should try to make life easier for those people, not least because you’ll probably be one of them.

Reinventing the wheel

Are you writing more code than you need to? It may well be that the problem you’re looking at has already been solved, and one of the great things about open source is that you’re able to recruit an army of developers and testers you may never meet. Is there already a module for that?

On the other hand, even if there is an existing module, it might not always make sense to use it. Perhaps the contributed module provides more flexibility than our project will ever need, at a performance cost. Maybe it gives us 90% of what we want, but would force us to do things in a certain way that would make it difficult to get the final 10%. Perhaps it isn’t in a very healthy state - if so, perhaps you could fix it up and contribute your fixes back to the community, as I did on a recent project.

If you’re writing a custom module to solve a very specific problem, could it be made more generic and contributed to the community? A couple of examples of this from the Capgemini team are Stomp and Route.

One of the jobs of the code reviewer is to help draw the appropriate line between the generic and the specific. If you’re reviewing custom code, think about whether there’s prior art. If the pull request includes community-contributed code, you should still review it. Don’t assume that it’s perfect, just because someone’s given it away for nothing.

Appropriate API usage

Is your team using your chosen frameworks as they were intended? If you see someone writing a custom function to solve a problem that’s already been solved, maybe you need to share a link to the API docs for the existing solution.

Introducing notices and errors

If your logs are littered with notices about undefined variables or array indexes, not only are you likely to be suffering a performance hit from the logging, but it’s much harder to separate the signal from the noise when you’re trying to investigate something.

Browser support

Remember that sometimes, it’s good to be boring. As a reviewer, one of your jobs is to stop your colleagues from getting carried away with shiny new features like ES6, or CSS variables. Tools like Can I Use are really useful in being able to check what’s going to work in the browsers that you care about.

Code smells

Sometimes, code seems wrong. As I learned from Larry Garfield’s excellent presentation on code smells at the first Drupalcon I went to, code smells are indications of things that might be a deeper problem. Rather than re-hash the points Larry made, I’d recommend reading his slides, but it is worth highlighting some of the anti-patterns he discusses.

Functions or objects that do more than one thing

A function should have a function. Not two functions, or three. If an appropriate comment or function name includes “and”, it’s a sign you should be splitting the function up.

Functions that sometimes do different things

Another bad sign is the word “or” in the comment. Functions should always do the same thing.

Excessive complexity

Long functions are usually a sign that you might want to think about refactoring. They tend to be an indicator that the code is more complex than it needs to be. The level of complexity can be measured, but you don’t need a tool to tell you that if a function doesn’t fit on a screen, it’ll be difficult to debug.

Not being testable

Even if functions are simple enough to write tests for, do they depend on a whole system? In other words, can they be genuinely unit tested?

Lack of documentation

There’s more to be said on the subject of code comments than I can go into here, but suffice to say code should have useful, meaningful comments to help future maintainers understand it.

Tight coupling

Modules should be modular. If two parts of a system need to interact, they should have a clearly defined and documented interface.

Impurity

Side effects and global variables should generally be avoided.

Sensible naming

Is the purpose of a function or variable obvious from the name? I don’t want to rehash old jokes, but naming things is difficult, and it is important.

Commented-out code

Why would you comment out lines of code? If you don’t need it, delete it. The beauty of version control is that you can go back in time to see what code used to be there. As long as you write a good commit message, it’ll be easy enough to find. If you think that you might need it later, put it behind a feature toggle so that the functionality can be enabled without a code release.

Specificity

In CSS, IDs and !important are the big code smells for me. They’re a bad sign that a specificity arms race has begun. Even if you aren’t going to go all the way with a system like BEM or SMACSS, it’s a good idea to keep specificity as low as possible. The excellent articles on CSS specificity by Harry Roberts and Chris Coyier are good starting points for learning more.

Standards

It’s important to follow coding standards. The point of this isn’t to get some imaginary Scout badge - code that follows standards is easier to read, which makes it easier to understand, and by extension easier to maintain. In addition, if you have your IDE set up right, it can warn you of possible problems, but those warnings will only be manageable if you keep your code clean.

Deployability

Will your changes be available in environments built by Continuous Integration? Do you need to set default values of variables which may need overriding for different environments? Just as your functions should be testable, so should your configuration changes. As far as possible, aim to make everything repeatable and automatable - if a release needs any manual changes it’s a sign that your team may need to be thinking with more of a DevOps mindset.

Keep Your Eyes On The Prize

With all this talk of coding style and standards, don’t get distracted by trivialities - it is worth caring about things like whitespace and variable naming, but remember that it’s much more important to think about whether the code actually does what it is supposed to. The trouble is that our eyes tend to fixate on those sort of things, and they cause unnecessary cognitive load.

Pre-commit hooks can help to catch coding standards violations so that reviewers don’t need to waste their time commenting on them. If you’re on a big project, it will almost certainly be worth investing some time in integrating your CI server and your code review tool, and automating checks for issues like code style, unit tests, mess detection - in short, all the things that a computer is better at spotting than humans are.

Does the code actually solve the problem you want it to? Rather than just looking at the code, spend a couple of minutes reading the ticket that it is associated with - has the developer understood the requirements properly? Have they approached the issue appropriately? If you’re not sure about the change, check out the branch locally and test it in your development environment.

Even if there’s nothing wrong with the suggested change, maybe there’s a better way of doing it. The whole point of code review is to share the benefit of the team’s various experiences, get extra eyes on the problem, and hopefully make the end product better.

I hope that this has been useful for you, and if there’s anything you think I’ve missed, please let me know via the comments.

Sep 18 2016
Sep 18

If you’re migrating from a different CMS platform, the advantages of Drupal 8 seem fairly clear. But what if you’re already on Drupal? There has been a lot of discussion in the Drupal community lately about upgrading to Drupal 8. When is the right time? Now that the contributed module landscape is looking pretty healthy, there aren’t many cases where I’d recommend going with Drupal 7 for a new project. However, as I’ve previously discussed on this blog, greenfield projects are fairly rare.

Future proofing

One of the strengths of an open source project like Drupal is the level of support from the community. Other people are testing your software, and helping to fix bugs that you might not have noticed. Drupal 7 will continue to be supported until Drupal 9 is released, which should be a while away yet. However, if your site is on Drupal 6, there are security implications of remaining on an unsupported version, and it would be wise to make plans to upgrade sooner rather than later, even with the option of long term support. While the level of support from the community will no longer be the same, sites built on older versions of Drupal won’t suddenly stop working, and there are still some Drupal 5 sites out there in the wild.

Technical debt

Most big systems could do with some refactoring. There’s always some code that people aren’t proud of, some decisions that were made under the pressure of a tight deadline, or just more modern ways of doing things.

An upgrade is a great opportunity to start with a blank piece of paper. Architectural decisions can be revisited, and Drupal 8’s improved APIs are ideal if you’re hoping to take a more microservices-oriented approach, rather than ending up with another MySQL monolith.

Drupal’s policy of backward incompatibility means that while you’re upgrading the CMS, you have the chance to refactor and improve the existing custom codebase (but don’t be suckered in by the tempting fallacy that you’ll be able to do a perfect refactoring).

There are no small changes

Don’t underestimate how big a job upgrading will be. At the very least, every custom module in the codebase will need to be rewritten for Drupal 8, and custom themes will need to be rebuilt using the Twig templating system. In a few cases, this will be a relatively trivial job, but the changes in Drupal 8 may mean that some modules will need to be rebuilt from the ground up. It isn’t just about development - you’ll need to factor in the time it will take to define requirements, not to mention testing and deployment. If it’s a big project, you may also need to juggle the maintenance of the existing codebase for some time, while working on the new version.

The sites that we tend to deal with at Capgemini are big. We work with large companies with complex requirements, a lot of third party integrations, and high traffic. In other words, it’s not just your standard brochureware, so we tend to have a lot of custom modules.

If it ain’t broke, don’t fix it

Given the fact that an upgrade is non-trivial, the question has to be asked - what business value will an upgrade bring? If all you’re doing is replacing a Drupal 7 site with a similar Drupal 8 site, is it really a good idea to spend a lot of time and money to build something that is identical, as far as the average end user can tell?

If the development team is focused on upgrading, will there be any bandwidth for bug fixes and improvements? An upgrade will almost certainly be a big investment - maybe that time, energy and money would be better spent on new features or incremental improvements that will bring tangible business value and can be delivered relatively quickly. Besides, some of the improvements in Drupal 8 core, such as improved authoring experience, are also available in the Drupal 7 contrib ecosystem.

On the other hand, it might make more sense to get the upgrade done now, and build those improvements on top of Drupal 8, especially if your existing codebase needs some TLC.

Another option (which we’ve done in the past for an upgrade from Drupal 6 to 7) is to incrementally upgrade the site, releasing parts of the new site as and when they’re ready.

The right approach depends on a range of factors, including how valuable your proposed improvements will be, how urgent they are, and how long an upgrade will take, which depends on how complex the site is.

The upside of an upgrade

Having said all of that, the reasons to upgrade to Drupal 8 are compelling. One big plus for Drupal 8 is the possibility of improved performance, especially for authenticated users, thanks to modern features like BigPipe. The improved authoring experience, accessibility and multilingual features that Drupal 8 brings will be especially valuable for larger organisations.

Not only that, improving the developer experience (DX) was a big part of the community initiatives in building Drupal 8. Adopting Symfony components, migrating code to object-oriented structures, improving the APIs, and introducing a brand new configuration management system are all designed to improve developer productivity and code quality - after the initial learning curve. These improvements will encourage more of an engineering mindset and drive modern development approaches. The net benefit will be more testable (and therefore more reliable) features, easier deployment and maintenance methods, and increased speed of future change.

Decision time

There is no one-size-fits-all answer. Your organisation will need to consider its own situation and needs.

Where does upgrading the CMS version fit into the organisation’s wider digital roadmap? Is there a site redesign on the cards any time soon? What improvements are you hoping to make? What functionality are you looking to add? Does your site’s existing content strategy meet your needs? Is the solution architecture fit for your current and future purposes, or would it make sense to think about going headless?

In summary, while an upgrade will be a big investment, it may well be one that is worth making, especially if you’re planning major changes to your site in the near future.

If the requirements for your upgrade project are “build us the same as what we’ve got already, but with more modern technology” then it’s probably not going to be worth doing. Don’t upgrade to Drupal 8 just because it’s new and shiny. However, if you’re looking further forward and planning to build a solid foundation for future improvements then an upgrade could be a very valuable investment.

Aug 09 2016
Aug 09

If automated testing is not already part of your development workflow, then it’s time to get started. In this post, I’ll show you how to use Behat to test that your Drupal site is working properly.
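The full walkthrough lives in the linked post, but as a rough sketch (the class, step wording and selector here are purely illustrative, and assume the Behat Drupal Extension is installed), a custom step definition might look like this:

<?php

use Drupal\DrupalExtension\Context\RawDrupalContext;

/**
 * Illustrative Behat context; not taken from the original post.
 */
class FeatureContext extends RawDrupalContext {

  /**
   * @Then the page heading should contain :text
   */
  public function thePageHeadingShouldContain($text) {
    // Look for the main heading on the current page and check its text.
    $heading = $this->getSession()->getPage()->find('css', 'h1');
    if (!$heading || strpos($heading->getText(), $text) === FALSE) {
      throw new \Exception("Page heading does not contain '$text'.");
    }
  }
}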

The post Testing Your Drupal Site with Behat appeared first on php[architect].

Jun 23 2016
Jun 23

These days, it’s pretty rare that we build websites that aren’t some kind of redesign. Unless it’s a brand new company or project, the client usually has some sort of web presence already, and for one reason or another, they’ve decided to replace it with something shiny and new.

In an ideal world, the existing system has been built in a sensible way, with a sound content strategy and good separation of concerns, so all you need to do is re-skin it. In the Drupal world, this would normally mean a new theme, or if we’re still in our dream Zen Garden scenario, just some new CSS.

However, the reality is usually different. In my experience, redesigns are hardly ever just redesigns. When a business is considering significant changes to the website like some form of re-branding or refresh, it’s also an opportunity to think about changing the content, or the information architecture, or some aspects of the website functionality. After all, if you’re spending time and money changing how your website looks, you might as well try to improve the way it works while you’re at it.

So the chances are that your redesign project will need to change more than just the theme, but if you’re unlucky, someone somewhere further back along the chain has decided that it’s ‘just a re-skinning’, and therefore it should be a trivial job, which shouldn’t take long. In the worst case scenario, someone has given the client the impression that the site just needs a new coat of paint, but you’re actually inheriting some kind of nasty mess with unstable foundations that should really be fixed before you even think about changing how it looks. Incidentally, this is one reason why sales people should always consult with technical people who’ve seen under the bonnet of the system in question before agreeing prices on anything.

Even if the redesign is relatively straightforward from a technical point of view, perhaps it’s part of a wider rebranding, and there are associated campaigns whose dates are already expensively fixed, but thinking about the size of the website redesign project happened too late.

In other words, for whatever reason, it’s not unlikely that redesign projects will find themselves behind schedule, or over budget - what should you do in this situation? The received agile wisdom is that time and resources are fixed, so you need to flex on scope. But what’s the minimum viable product for a redesign? When you’ve got an existing product, how much of it do you need to rework before you put the new design live?

This is a question that I’m currently considering from a couple of angles. In the case of one of my personal projects, I’m upgrading an art gallery listings site from Drupal 6 to Drupal 8. The old site is the first big Drupal site I built, and is looking a little creaky in places. The design isn’t responsive, and the content editing experience leaves something to be desired. However, some of the contributed modules don’t have Drupal 8 versions yet, and I won’t have time to do the work involved to help get those modules ready, on top of the content migration, the new theme, having a full-time job and a family life, and all the rest of it.

In my day job, I’m working with a large multinational client on a set of sites where there’s no Drupal upgrade involved, but the suggested design does include some functional changes, so it isn’t just a re-theming. The difficulty here is that the client wants a broader scope of change than the timescales and budget allow.

When you’re in this situation, what can you do? As usual with interesting questions, the answer is ‘it depends’. Processes like impact mapping can help you to figure out the benefits that you get from your redesign. If you’ve looked at your burndown rates, and know that you’re not going to hit the deadline, what can you drop? Is the value that you gain from your redesign worth ditching any of the features that won’t be ready? To put it another way, how many of your existing features are worth keeping? A redesign can (and should) be an opportunity for a business to look at their content strategy and consider rationalising the site. If you’ve got a section on your site that isn’t adding any value, or isn’t getting any traffic, and the development team will need to spend time making it work in the new design, perhaps that’s a candidate for the chop?

We should also consider the Pareto principle when we’re structuring our development work, and start by working on the things that will get us most of the way there. This fits in with an important point made by scrum, which can sometimes get forgotten about: that each sprint should deliver “a potentially shippable increment”. In this context, I would interpret this to mean that we should make sure that the site as a whole doesn’t look broken, and then we can layer on the fancy bits afterwards, similar to a progressive enhancement approach to dealing with older browsers. If you aren’t sure whether you’ll have time to get everything done, don’t spend an excessive amount of time polishing one section of the site to the detriment of basic layout and styling that will make the whole site look reasonably good.

Starting with a style guide can help give you a solid foundation to build upon, by enabling you to make sure that all the components on the site look presentable. You can then test those components in their real contexts. If you’ve done any kind of content audit (and somebody really should have done), you should have a good idea of the variety of pages you’ve got. At the very least, your CMS should help you to know what types of content you have, so that you can take a sample set of pages of each content type or layout type, and you’ll be able to validate that they look good enough, whatever that means in your context.

There is another option, though. You don’t have to deliver all the change at once. Can you (and should you) do a partial go-live with a redesign? Depending on how radical the redesign is, the attitudes to change and continuous delivery within your organisation and client, and the technology stack involved, it may make sense to deliver changes incrementally. In other words, put the new sections of the site live as they’re ready, and keep serving the old bits from the existing system. There may be brand consistency, user experience, and content management process reasons why you might not want to do this, but it is an option to consider, and it can work.

On one previous project, we were carrying out a simultaneous redesign and Drupal 6 to 7 upgrade, and we were able to split traffic between the old site and the new one. It made things a little bit more complicated in terms of handling user sessions, but it did give the client the freedom to decide when they thought we had enough of the new site for them to put it live. In the end, they decided that the answer was ‘almost all of it’.

So what’s the way forward?

In the case of my art gallery listings site, the redesign itself has a clear value, and with Drupal 6 being unsupported, I need to get the site onto Drupal 8 sooner rather than later. There’s definitely a point that will come fairly soon, even if I don’t get to spend as long as I’d like working on it, where the user experience will be improved by the new site, even though some of the functionality from the old site isn’t there, and isn’t likely to be ready for a while. I’m my own client on that project, so I’m tempted to just put the redesign live anyway.

In the case of my client, there are decisions to be made about which of the new features need to be included in the redesign. De-scoping some of the more complex changes will bring the project back into the realm of being a re-theming, the functional changes can go into subsequent releases, and hopefully we’ll hit the deadline.

A final point that I’d like to make is that we shouldn’t fall into the trap of thinking of redesigns as big-bang events that sit outside the day-to-day running of a site. Similarly, if you’re thinking about painting your house, you should also think about whether you need to fix the roof, and when you’re going to schedule the cleaning. Once the painting is done, you’ll still be living there, and you’ll have the opportunity to do other jobs if and when you have the time, energy, and money to do so.

Along with software upgrades, redesigns should be considered as part of a business’s long-term strategy, and they should be just one part of a plan to keep making improvements through continuous delivery.

Apr 18 2016
Apr 18

What would a website be if it couldn’t send emails, even if just for password resets? Running your own mail server is a huge hassle, so many developers instead use a third party service to send transactional emails like password resets, new user welcome messages, and order summaries. One of the most popular services, in […]

The post Mandrill Alternatives for PHP Applications appeared first on php[architect].

Jun 02 2015
Jun 02

In April 2015, NASA unveiled a brand new look and user experience for NASA.gov. This release revealed a site modernized to 1) work across all devices and screen sizes (responsive web design), 2) eliminate visual clutter, and 3) highlight the continuous flow of news updates, images, and videos.

With its latest site version, NASA—already an established leader in the digital space—has reached even higher heights by being one of the first federal sites to use a “headless” Drupal approach. Though this model was used when the site was initially migrated to Drupal in 2013, this most recent deployment rounded out the endeavor by using the Services module to provide a REST interface, and ember.js for the client-side, front-end framework.

Implementing a “headless” Drupal approach prepares NASA for the future of content management systems (CMS) by:

  1. Leveraging the strength and flexibility of Drupal’s back-end to easily architect content models and ingest content from other sources. As examples:

  • Our team created the concept of an “ubernode”, a content type which homogenizes fields across historically varied content types (e.g., features, images, press releases, etc.). Implementing an “ubernode” enables easy integration of content in web services feeds, allowing developers to seamlessly pull multiple content types into a single, “latest news” feed. This approach also provides a foundation for the agency to truly embrace the “Create Once, Publish Everywhere” philosophy of content development and syndication to multiple channels, including mobile applications, GovDelivery, iTunes, and other third party applications.

  • Additionally, the team harnessed Drupal’s power to integrate with other content stores and applications, successfully ingesting content from blogs.nasa.gov, svs.gsfc.nasa.gov, earthobservatory.nasa.gov, www.spc.noaa.gov, etc., and aggregating the sourced content for publication.

  2. Optimizing the front-end by building with a client-side, front-end framework, as opposed to a theme. For this task, our team chose ember.js, distinguished by both its maturity as a framework and its emphasis on convention over configuration. Ember embraces model-view-controller (MVC), and also excels at performance by batching updates to the document object model (DOM) and bindings.

In another stride toward maximizing “Headless” Drupal’s massive potential, we configured the site so that JSON feed records are published to an Amazon S3 bucket as an origin for a content delivery network (CDN), ultimately allowing for a high-security, high-performance, and highly available site.

Below is an example of how the technology stack which we implemented works:

Using ember.js, the NASA.gov home page requests a list of nodes of the latest content to display. Drupal provides this list as a JSON feed of nodes.

Ember then retrieves the specific content for each node. Again, Drupal provides this content as a JSON response stored on Amazon S3.

Finally, Ember distributes these results into the individual items for the home page.
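As a very rough illustration of the kind of feed involved (this is not NASA’s actual code; the module name, bundle machine name, fields and function are all hypothetical), a Drupal 7 site could assemble such a JSON list of the latest nodes like this:

<?php

// Hypothetical sketch: build a JSON feed of the latest "ubernode" content.
function mymodule_build_latest_feed() {
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'ubernode')       // hypothetical machine name
    ->propertyCondition('status', NODE_PUBLISHED)
    ->propertyOrderBy('created', 'DESC')
    ->range(0, 20);
  $result = $query->execute();

  $items = array();
  if (!empty($result['node'])) {
    foreach (node_load_multiple(array_keys($result['node'])) as $node) {
      $items[] = array(
        'nid' => $node->nid,
        'title' => $node->title,
        'created' => $node->created,
      );
    }
  }

  // In the setup described above, the resulting JSON would be pushed to an
  // Amazon S3 bucket acting as the CDN origin; that upload step is omitted.
  return drupal_json_encode(array('nodes' => $items));
}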

The result? A NASA.gov architected for the future. It is worth noting that upgrading to Drupal 8 can be done without reconfiguring the ember front-end. Further, migrating to another front-end framework (such as Angular or Backbone) does not require modification of the Drupal CMS.

May 14 2015
May 14

As Drupal has evolved, it has become more than just a CMS. It is now a fully fledged Web Development Platform, enabling not just sophisticated content management and digital marketing capabilities but also any number of use cases involving data modelling and integration with an endless variety of applications and services. In fact, if you need to build something which responds to an HTTP request, then you can pretty much find a way to do it in Drupal.

“Just because you can, doesn’t mean you should.”

However, the old adage is true. Just because you can use a sledgehammer to crack a nut, that doesn’t mean you’re going to get the optimal nut-consumption-experience at the end of it.

Drupal’s flexibility can lead to a number of different integration approaches, all of which will “work”, but some will give better experiences than others.

On the well trodden development path of Drupal 8, giant steps have been taken in making the best of what is outside of the Drupal community and “getting off the island”, and exciting things are happening in making Drupal less of a sledgehammer, and more of a finely tuned nutcracker capable of cracking a variety of different nuts with ease.

In this post, I want to explore ways in which Drupal can create complex systems, and some general patterns for doing so. You’ll see a general progression in line with that of the Drupal community in general. We’ll go from doing everything in Drupal, to making the most of external services. No option is more “right” than others, but considering all the options can help make sure you pick the approach that is right for you and your use case.

Build it in Drupal

One option, and probably the first that occurs to many developers, is to implement the business logic, data structures and administration of a new application or service using Drupal and its APIs. After all, Entity API and the schema system give us the ability to model custom objects and store them in the Drupal database; Views gives us the means to retrieve that data and display it in a myriad of ways. Modules like Rules, Features and CTools provide extensive options for implementing specific business rules to model your domain-specific data and application needs.

This is all well and good, and uses the strengths of Drupal core and the wide range of community contributed modules to enable the construction of complex sites with limited amounts of coding required, and little need to look outside Drupal. The downside can come when you need to scale the solution. Depending on how the functionality has been implemented you could run into performance problems caused by large numbers of modules, sub-optimal queries, or simply the amount of traffic heading to your database - which despite caching strategies, tuning and clustering is always likely to end up being the performance bottleneck of your Drupal site.

It also means your implementation is tightly coupled to Drupal - and worse, most probably to the specific version of Drupal you’ve built it on. With Drupal 8 imminent, this means you’re most likely increasing the amount of re-work required when you come to upgrade or migrate between versions.
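As a minimal sketch of the “build it in Drupal” approach (the module, table and field names are hypothetical), custom data might be modelled directly in the Drupal database with hook_schema, and then exposed through Entity API and Views:

<?php

/**
 * Implements hook_schema().
 *
 * Illustrative only: models a simple custom record inside the Drupal
 * database, the kind of structure Entity API and Views can then build on.
 */
function mymodule_schema() {
  $schema['mymodule_order'] = array(
    'description' => 'Hypothetical order records managed inside Drupal.',
    'fields' => array(
      'id' => array('type' => 'serial', 'not null' => TRUE),
      'uid' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
      'total' => array('type' => 'numeric', 'precision' => 10, 'scale' => 2),
      'created' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('id'),
  );
  return $schema;
}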

It’s all PHP

Drupal sites can benefit hugely from being part of the larger PHP ecosystem. With Drush make, the Libraries API, Composer Manager, and others providing the means of pulling external, non-Drupal PHP libraries into a Drupal site, there are huge opportunities for building complexity in your Drupal solution without tying yourself to specific Drupal versions, or even to Drupal at all. This could become particularly valuable as we enter the transition period between Drupal 7 and 8.

In this scenario, custom business logic can be provided in a framework agnostic PHP library and a Naked Module approach can be used to provide the glue between that library and Drupal - utilising Composer to download and install dependencies.
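As a hedged sketch of that pattern (the library, namespace and function names are all hypothetical), the business logic sits in a plain PHP class with no Drupal dependencies, and the “naked module” is little more than glue:

<?php

namespace Acme\Pricing;

// Framework-agnostic library class, pulled in via Composer. Nothing in here
// knows about Drupal, so it can be reused in another framework later.
class DiscountCalculator {

  public function apply($amount, $percent) {
    return round($amount * (1 - $percent / 100), 2);
  }
}

The Drupal 7 side then only wires configuration into that class:

<?php

// mymodule.module - thin glue between Drupal and the library.
function mymodule_discounted_price($amount) {
  $percent = variable_get('mymodule_discount_percent', 10);
  $calculator = new \Acme\Pricing\DiscountCalculator();
  return $calculator->apply($amount, $percent);
}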

This approach is becoming more and more widespread in the Drupal community with Commerce Guys (among others) taking a libraries first approach to many components of Commerce 2.x which will have generic application outside of Drupal Commerce.

The major advantage of building framework agnostic libraries is that if you ever come to re-implement something in another framework, or a new version of Drupal, the effort of migrating should be much lower.

Integrate

Building on the previous two patterns, one of Drupal’s great strengths is how easy it is to integrate with other platforms and technologies. This gives us great opportunity to implement functionality in the most appropriate technology and then simply connect to it via web services or other means.

This can be particularly useful when integrating with “internal” services - services that you don’t intend to expose to the general public (but may still be external in the sense of being SaaS platforms or other partners in a multi-supplier ecosystem). It is also a useful way to start using Drupal as a new part of your ecosystem, consuming existing services and presenting them through Drupal to minimise the amount of architectural change taking place at one time.

Building a solution in this componentised and integrated manner gives several advantages:

  • Separation of concerns - the development, deployment and management of the service can be run by a completely separate team working in a different bounded context. It also ensures logic is nicely encapsulated and can be changed without requiring multiple front-end changes.
  • Horizontal scalability - implementing services in alternate technologies lets us pick the most appropriate for scalability and resilience.
  • Reduce the complex computation taking place in the web tier and let Drupal focus on delivering a top-quality web experience to users. For example, rather than having Drupal publish and transform data to an external platform, push the raw data into a queue which can be consumed by “non-Drupal” processes to do the transform and send (see the sketch after this list).
  • Enable re-use of business logic outside of the web tier, on other platforms or with alternative front ends.
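A minimal sketch of that queue hand-off in Drupal 7 might look like the following (the queue name, content type and payload are hypothetical; a separate worker outside Drupal would then consume the queue):

<?php

/**
 * Implements hook_node_update().
 *
 * Rather than transforming and pushing data to the external platform inline,
 * drop the raw payload onto a queue for a non-Drupal worker to process.
 */
function mymodule_node_update($node) {
  if ($node->type == 'article') {
    $queue = DrupalQueue::get('mymodule_outbound_sync');
    $queue->createItem(array(
      'nid' => $node->nid,
      'changed' => $node->changed,
    ));
  }
}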

Nearly-Headless Drupal

Headless Drupal is a phrase that has gained a lot of momentum in the Drupal community - the basic concept being that Drupal purely responds via RESTful endpoints, and completely independent front-end code using frameworks such as Angular.js is used to render the data, fully separating content from presentation.

Personally, I prefer to think of a “nearly headless” approach - where Drupal is still responsible for the initial instantiation of the page, and a framework like Angular is used to control the dynamic portion of the page. This lets Drupal manage the things it’s good at, like menus, page layout and content management, whilst the “app” part is dropped into the page as another re-usable component and only takes over a part of the page.

For an example use case, you may have business requirements to provide data from a service which is also provided as an API for consumption by external parties or mobile apps. Rather than building this service in Drupal, which while possible may not provide optimal performance and management opportunities, this could be implemented as a standalone service which is called by Drupal as just another consumer of the API.

From an Angular.js (or insert frontend framework of choice) app, you would then talk directly to the API, rendering the responses dynamically on the front end, but still use Drupal to build everything and render the remaining elements of the page.
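A rough sketch of the “nearly headless” idea in Drupal 7 (the module name, block delta, endpoint and app name are all hypothetical) is a block that renders a mount point and passes the API endpoint to the front-end framework, while Drupal renders the rest of the page as usual:

<?php

/**
 * Implements hook_block_view().
 *
 * Assumes a matching hook_block_info() entry. Drupal still builds the page;
 * this block only drops in a container element and tells the front-end app
 * where the external API lives.
 */
function mymodule_block_view($delta = '') {
  $block = array();
  if ($delta == 'price_widget') {
    drupal_add_js(array('mymodule' => array('apiUrl' => 'https://api.example.com/v1/prices')), 'setting');
    drupal_add_js(drupal_get_path('module', 'mymodule') . '/js/app.js');
    $block['subject'] = t('Live prices');
    $block['content'] = '<div data-ng-app="priceWidget"></div>';
  }
  return $block;
}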

Summing up

As we’ve seen, Drupal is an incredibly powerful solution, providing the capability for highly-consolidated architectures encapsulated in a single tool, a perfect enabler for projects with low resources and rapid development timescales. It’s also able to take its place as a mature part of an enterprise architecture, with integration capabilities and rich programming APIs able to make it the hub of a Microservices or Service Oriented Architecture.

Each pattern has pros and cons, and what is “right” will vary from project to project. What is certain though, is that Drupal’s true strength is in its ability to play well with others and do so to deliver first class digital experiences.

New features in Drupal 8 will only continue to make this the case, with more tools in core to provide the ability to build rich applications, RESTful APIs for entities out of the box allowing consumption of that data on other platforms (or in a headless front-end), improved HTTP request handling with Guzzle improving options for consuming services outside of Drupal, and much more.

Feb 27 2015
Feb 27

There are thousands of situations in which you do not want to reinvent the wheel. It is a well-known principle in software engineering, but not always well applied or well known in the Drupal world.

Let’s say, for example, that you have a URL that you want to convert from relative to absolute. It is a typical scenario when you are working with web (but not just web) crawlers. Well, you could start building your own library to achieve the functionality you are looking for, packaging it all in a Drupal module. It is an interesting challenge indeed, but unless it’s for training or learning purposes, why waste your time when someone else has already done it, instead of just focusing on the real problem? Especially if your app’s main purpose is not that secondary problem (the URL converter).

What’s more, if you reuse libraries and open source code, you’ll probably find yourself in a situation where you need a small improvement to that nice library you are using. By contributing your changes back, you close the circle of open source, the reason open source is here to stay and conquer the world (diabolical laugh here).

That’s another of the main reasons why lots of projects are moving to the Composer/Symfony combination, ceasing to work as isolated projects and starting to work as global projects that can share code and knowledge with many other projects. It’s a pattern followed by Drupal, to name but one, and also by projects like phpBB, ezPublish, Laravel, Magento, Piwik, …

Composer and friends

Coming back to our crawler and the de-relativizer library that we are going to need, at this point we get to know Composer. Composer is a great tool for using third party libraries and, of course, for contributing back those of your own. In our web crawler example, net_url2 does the job just beautifully.

Nice, but at this point you must be wondering… what does this have to do with Drupal, if anything at all? Well, in fact, as everyone knows, Drupal 8 is being (re)built following this same principle (DRY, or don’t repeat yourself), with a strong presence of the great Symfony 2 components in the core. Advantages? Lots of them, as we were pointing out, but that’s the purpose of another discussion.

The point here is that you don’t need to wait for Drupal 8, and what’s more, you can start applying some of these principles in your Drupal 7 libraries, making your future transition to Drupal 8 even easier.

Let’s rock and roll

So, using a php library or a Symfony component in Drupal 7 is quite simple. Just:

  1. Install composer manager
  2. Create a composer.json file in your custom module folder
  3. Place the content (which, by the way, you’ll find quite familiar if you’ve already worked with Symfony or Composer):
    "require": {
      "pear/net_url2": "2.0.x-dev"
     }
    
  4. Enable the custom module.

And that’s it, basically. At this point we simply need to tell Drupal to generate the main composer.json. That’s a composer file generated from the composer.json found in each of the modules that include one themselves.

Let’s generate that file:

drush composer-rebuild

At this point we have the main composer file, normally in a vendor folder (it will depend on the Composer Manager settings).

Now, let’s make some composer magic:

drush composer update

At this point, inside the vendor folder we should now have a classmap containing, amongst others, our newly included library.

Hopefully all has gone well and, just like magic, the Net_URL2 class is there to be used in our modules. Something like:

$base = new Net_URL2($absoluteURL);

Just remember to import the class at the top of your file. Something like:

use Net_URL2;
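Putting the pieces together for the crawler use case, the relative-to-absolute conversion might look roughly like this (the URLs are illustrative; check the net_url2 documentation for the exact API):

<?php

// Resolve a relative link found by the crawler against the page it came from.
// Net_URL2 lives in the global namespace, so no use statement is needed here.
$base = new Net_URL2('http://example.com/galleries/index.html');
$absolute = $base->resolve('../events/opening.html');

// Prints: http://example.com/events/opening.html
print (string) $absolute;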

In the next post we’ll be doing some more exciting stuff. We will create some code that lives in a PHP library, completely decoupled but at the same time fully integrated with Drupal, all using Composer magic to allow the integration.

Why? Again, many reasons like:

  1. Being ready for Drupal 8 (just lift libraries from D7 or D6 to D8),
  2. Decoupling things, so we write code that is ready to use not just in Drupal, and
  3. Opening the door for other worlds to collaborate with our Drupal world, …
  4. Why not use dependency injection in Drupal (as already happens in D8)? What about using the Symfony service container? Or something lighter, like Pimple?
  5. Choose between many other reasons…

See you in my next article about Drupal, Composer and friends. In the meantime, be good :-).

Updated: Clarified that we are talking about PHP Libraries and / or Symfony components instead of bundles. Thanks to @drrotmos and @Ross for your comments.

Jan 14 2015
Jan 14

Up until Drupal 8, there has been little to encourage well-organised code. Drupal 8 has PSR-4 autoloading, so your classes are automatically included. Even though Drupal 8 is just around the corner, a lot of us will still be using Drupal 7 for quite a while. However, that doesn’t mean we can’t benefit from this structure in Drupal 7.

This post covers two parts:

  1. Autoloading class files.
  2. Avoiding extra plumbing to hook into your class methods.

You’re probably familiar with drupal_get_form('my_example_form'), which then looks for a function my_example_form(). The issue is that your form definition will no longer be in such a function, but within a method in a class. To cover both these parts we will be using two modules:

  1. XAutoLoad - Which will autoload our class.
  2. Cool - Which allows us to abstract the usual functions into class methods.

Drupal 8 was originally using PSR-0 which has been deprecated in favour of PSR-4. As a consequence the Cool module uses PSR-0 in its examples although it does support PSR-4. We will create an example module called psr4_form.

The information on autoloading and folder structure for PSR-4 in Drupal 8 states that we should place our form class in psr4_form/src/Form/FormExample.php; however, the Cool module instead loads from a FormControllers folder: psr4_form/src/FormControllers/FormExample.php.
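For reference, one hypothetical layout for the example module, following the standard src/Form path rather than Cool’s FormControllers folder, would be:

psr4_form/
  psr4_form.info
  psr4_form.module
  src/
    Form/
      FormExample.php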

We can get round this by providing our own hook_forms() as laid out in the Cool module:

/**
* Implements hook_forms().
*/
function psr4_form_forms($form_id, $args) {
  $classes = \Drupal\cool\Loader::mapImplementationsAvailable('Form', '\Drupal\cool\Controllers\FormController');
  unset($classes['Drupal\\cool\\BaseForm']);
  unset($classes['Drupal\\cool\\BaseSettingsForm']);
  $forms = array();
  foreach ($classes as $class_name) {
    $forms[$class_name::getId()] = array(
      'callback' => 'cool_default_form_callback',
      'callback arguments' => array($class_name),
    );
  }

  return $forms;
}

If you are OK placing your class in the FormControllers folder, then you can omit the above function to keep your .module file simple, or you could put the hook in another module. Potentially the Cool module could be updated to reflect this.

This class requires a namespace of the form Drupal\<module_name>\Form. It also extends the BaseForm class provided by the Cool module, so we don’t need to explicitly create our form functions:

namespace Drupal\psr4_form\Form;

class FormExample extends \Drupal\cool\BaseForm {
  ...
}

Within our FormExample class we need a method getId() to expose the form_id to Drupal:

public static function getId() {
  return 'psr4_form';
}

And of course we need the form builder:

public static function build() {
  $form = parent::build();
  $form['my_textfield'] = array(
    '#type' => 'textfield',
    '#title' => t('My textfield'),
  );

  return $form;
}

All that is left is to define your validate and submit methods following the Drupal 8 form API.

At the time of writing, the Cool module isn’t up to date with Drupal 8 Form API conventions. I started this blog post with the intention of a direct copy and paste of the src folder. Unfortunately the methods don’t quite follow the exact same conventions and they also need to be static:

Drupal 7     Drupal 8
getId        getFormId
build        buildForm
validate     validateForm
submit       submitForm

This example module can be found at https://github.com/oliverpolden/psr4_form.

Drupal 8 is just round the corner but a lot of us will still be using Drupal 7 for the foreseeable future. Taking this approach allows us to learn and make use of Drupal 8 conventions as well as making it easier to migrate from Drupal 7. It would be nice to see the Cool module be brought up to date with the current API, perhaps something I will be helping with in the not so distant future.
